DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations
@ 2021-09-01 14:47 Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 1/4] common/qat: " Arek Kusztal
                   ` (5 more replies)
  0 siblings, 6 replies; 96+ messages in thread
From: Arek Kusztal @ 2021-09-01 14:47 UTC (permalink / raw)
  To: dev; +Cc: gakhil, roy.fan.zhang, Arek Kusztal

This patchset introduces a new QAT driver structure and updates the
existing symmetric crypto QAT PMD.

The purpose of the change is to isolate the generation-specific QAT
implementations from one another.

Changes to the driver code for one generation are expected to have
minimal impact on the implementations for other generations. Likewise,
adding support for new features or for a new generation of QAT hardware
will have zero impact on existing functionality.
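
To make the pattern concrete: each generation registers a table of
function pointers at load time, and the common code dispatches through
that table without ever naming a generation. Below is a minimal,
self-contained sketch of the idea with simplified, illustrative names
only (the real structures are qat_dev_hw_spec_funcs and
qat_qp_hw_spec_funcs in the patches that follow):

/*
 * Minimal sketch of the dispatch pattern used in this series.
 * Names here are simplified placeholders, not the real driver API.
 */
#include <errno.h>
#include <stddef.h>
#include <rte_common.h>	/* RTE_INIT(): constructor run at load time */

enum gen { GEN1, GEN_COUNT };

struct dev_ops {
	int (*read_config)(void *dev);
};

/* One ops-table slot per hardware generation. */
static struct dev_ops *hw_spec[GEN_COUNT];

/* Generation-specific handler, private to its own file. */
static int
read_config_gen1(void *dev __rte_unused)
{
	return 0;	/* base generation has nothing to read */
}

static struct dev_ops ops_gen1 = {
	.read_config = read_config_gen1,
};

/* The generation registers itself; common code never names it. */
RTE_INIT(gen1_register)
{
	hw_spec[GEN1] = &ops_gen1;
}

/* Common code dispatches through the table only. */
int
read_config(enum gen g, void *dev)
{
	if (hw_spec[g] == NULL || hw_spec[g]->read_config == NULL)
		return -ENOTSUP;
	return hw_spec[g]->read_config(dev);
}

Adding a generation then means adding one self-registering file;
nothing in the common dispatch path changes.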

Arek Kusztal (4):
  common/qat: isolate implementations of qat generations
  crypto/qat: isolate implementations of symmetric operations
  crypto/qat: move capabilities initialization to spec files
  common/qat: add extra data to qat pci dev

 drivers/common/qat/dev/qat_dev_gen1.c     | 252 +++++++++
 drivers/common/qat/dev/qat_dev_gen1.h     |  55 ++
 drivers/common/qat/dev/qat_dev_gen2.c     |  39 ++
 drivers/common/qat/dev/qat_dev_gen3.c     |  77 +++
 drivers/common/qat/dev/qat_dev_gen4.c     | 285 ++++++++++
 drivers/common/qat/dev/qat_dev_gen4.h     |  18 +
 drivers/common/qat/meson.build            |  12 +-
 drivers/common/qat/qat_common.h           |   2 +
 drivers/common/qat/qat_device.c           | 183 +++---
 drivers/common/qat/qat_device.h           |  28 +-
 drivers/common/qat/qat_qp.c               | 641 ++++++++--------------
 drivers/common/qat/qat_qp.h               |  54 +-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c |  78 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h |  15 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c | 103 ++++
 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c |  63 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 107 ++++
 drivers/crypto/qat/qat_sym_pmd.c          | 188 ++-----
 drivers/crypto/qat/qat_sym_pmd.h          |  40 ++
 19 files changed, 1540 insertions(+), 700 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.h
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c

-- 
2.30.2



* [dpdk-dev] [PATCH 1/4] common/qat: isolate implementations of qat generations
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
@ 2021-09-01 14:47 ` Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 2/4] crypto/qat: isolate implementations of symmetric operations Arek Kusztal
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 96+ messages in thread
From: Arek Kusztal @ 2021-09-01 14:47 UTC (permalink / raw)
  To: dev; +Cc: gakhil, roy.fan.zhang, Arek Kusztal

This commit isolates the generation-specific implementations of the
common part of the QAT PMD. When changing or extending the code for a
particular generation, the code for other generations is left intact.
Generation-specific code in drivers/common is invisible to the code of
other generations.
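
To make the zero-impact claim concrete, a hedged sketch of adding a
hypothetical fifth generation under this scheme is shown below.
QAT_GEN5 does not exist in this patchset (it would also have to be
added to enum qat_device_gen so that QAT_DEV_GEN_NO grows with it);
the handler reuse mirrors what qat_dev_gen2.c below does with the
gen1 handlers:

/*
 * Hypothetical drivers/common/qat/dev/qat_dev_gen5.c -- illustration
 * only; QAT_GEN5 is not defined by this patchset. A new generation
 * reuses or overrides handlers and self-registers, with no changes
 * to common code.
 */
#include "qat_device.h"
#include "qat_qp.h"
#include "qat_dev_gen1.h"

static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen5 = {
	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen1,
	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
	.qat_dev_read_config		= qat_dev_read_config_gen1,
};

RTE_INIT(qat_dev_gen_gen5_init)
{
	qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5;
}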

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c | 245 ++++++++++
 drivers/common/qat/dev/qat_dev_gen1.h |  52 +++
 drivers/common/qat/dev/qat_dev_gen2.c |  38 ++
 drivers/common/qat/dev/qat_dev_gen3.c |  76 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 258 +++++++++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_common.h       |   2 +
 drivers/common/qat/qat_device.c       | 117 ++---
 drivers/common/qat/qat_device.h       |  24 +-
 drivers/common/qat/qat_qp.c           | 641 +++++++++-----------------
 drivers/common/qat/qat_qp.h           |  45 +-
 11 files changed, 983 insertions(+), 519 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.h
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..4d60c2a051
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,245 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gen1.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	const struct qat_qp_hw_data *sym_hw_qps =
+			qat_gen_config[qat_dev->qat_dev_gen]
+			.qp_hw_data[service];
+
+	max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (sym_hw_qps[i].service_type == service)
+			count++;
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+						txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service	= qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base		= qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable		= qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable		= qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues	= qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail		= qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head		= qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup		= qat_qp_csr_setup_gen1,
+};
+
+int qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pair reset is not supported on base generations; nothing to do.
+	 */
+	return 0;
+}
+
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+			struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(
+		struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations have no configuration to read, but
+	 * implement this handler anyway so that a NULL pointer
+	 * on higher generations can be treated as a fault.
+	 */
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config		= qat_dev_read_config_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_qp_hw_spec[QAT_GEN1]		= &qat_qp_hw_spec_gen1;
+	qat_dev_hw_spec[QAT_GEN1]		= &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen	= QAT_GEN1;
+	qat_gen_config[QAT_GEN1].qp_hw_data	= qat_gen1_qps;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+						QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen1.h b/drivers/common/qat/dev/qat_dev_gen1.h
new file mode 100644
index 0000000000..9bf4fcf01b
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GEN_H_
+#define _QAT_DEV_GEN_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue);
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head);
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp);
+
+int
+qat_reset_ring_pairs_gen1(
+			struct qat_pci_device *qat_pci_dev __rte_unused);
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+			struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(
+		struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused);
+
+#endif
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..ad1b643e00
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gen1.h"
+
+#include <stdint.h>
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service	= qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base		= qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable		= qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable		= qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues	= qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail		= qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head		= qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup		= qat_qp_csr_setup_gen1,
+};
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config		= qat_dev_read_config_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_qp_hw_spec[QAT_GEN2]		= &qat_qp_hw_spec_gen2;
+	qat_dev_hw_spec[QAT_GEN2]		= &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen	= QAT_GEN2;
+	qat_gen_config[QAT_GEN2].qp_hw_data	= qat_gen1_qps;
+	qat_gen_config[QAT_GEN2].comp_num_im_bufs_required =
+						QAT_NUM_INTERM_BUFS_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..407d21576b
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gen1.h"
+
+#include <stdint.h>
+
+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service	= qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base		= qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable		= qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable		= qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues	= qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail		= qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head		= qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup		= qat_qp_csr_setup_gen1,
+};
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config		= qat_dev_read_config_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_qp_hw_spec[QAT_GEN3]		= &qat_qp_hw_spec_gen3;
+	qat_dev_hw_spec[QAT_GEN3]		= &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen	= QAT_GEN3;
+	qat_gen_config[QAT_GEN3].qp_hw_data	= qat_gen3_qps;
+	qat_gen_config[QAT_GEN3].comp_num_im_bufs_required =
+						QAT_NUM_INTERM_BUFS_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..6394e17dde
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,258 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+
+#include <stdint.h>
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+static int
+qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (qat_dev->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+
+	if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		struct qat_qp_hw_data *hw_data =
+			&qat_dev->qp_gen4_data[i][0];
+		uint8_t svc1 = (svc >> (3 * i)) & 0x7;
+		enum qat_service_type service_type = QAT_SERVICE_INVALID;
+
+		if (svc1 == QAT_SVC_SYM) {
+			service_type = QAT_SERVICE_SYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered SYMMETRIC service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_COMPRESSION) {
+			service_type = QAT_SERVICE_COMPRESSION;
+			QAT_LOG(DEBUG,
+				"Discovered COMPRESSION service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_ASYM) {
+			service_type = QAT_SERVICE_ASYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered ASYMMETRIC service on bundle %d",
+				i);
+		} else {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -EFAULT;
+		}
+
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service	= qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base		= qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable		= qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable		= qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues	= qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail		= qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head		= qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup		= qat_qp_csr_setup_gen4,
+};
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(
+			struct rte_mem_resource **mem_resource,
+			struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config		= qat_dev_read_config_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_qp_hw_spec[QAT_GEN4]		= &qat_qp_hw_spec_gen4;
+	qat_dev_hw_spec[QAT_GEN4]		= &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen	= QAT_GEN4;
+	qat_gen_config[QAT_GEN4].qp_hw_data	= NULL;
+	qat_gen_config[QAT_GEN4].comp_num_im_bufs_required =
+						QAT_NUM_INTERM_BUFS_GEN3;
+	qat_gen_config[QAT_GEN4].pf2vf_dev	= &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..b15e980f0f 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -22,6 +22,8 @@ enum qat_device_gen {
 	QAT_GEN4
 };
 
+#define QAT_DEV_GEN_NO	(QAT_GEN4 + 1)
+
 enum qat_service_type {
 	QAT_SERVICE_ASYMMETRIC = 0,
 	QAT_SERVICE_SYMMETRIC,
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..030624b46d 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,42 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
 /* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
+
+struct qat_gen_hw_data qat_gen_config[QAT_DEV_GEN_NO];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_DEV_GEN_NO];
 
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
@@ -126,44 +94,6 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
 static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
@@ -229,6 +159,8 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw = NULL;
+	struct rte_mem_resource *mem_resource;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
@@ -300,24 +232,25 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_misc_bar, NULL);
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		return NULL;
 	}
 
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
@@ -392,6 +325,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -408,13 +342,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the PF driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..531aa663ca 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,24 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
@@ -57,6 +75,9 @@ struct qat_device_info {
 	 */
 };
 
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 extern struct qat_device_info qat_pci_devs[];
 
 struct qat_sym_dev_private;
@@ -159,7 +180,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..ff4d7fa95c 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,119 +18,14 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128
 
-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_DEV_GEN_NO];
 
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
@@ -139,66 +34,19 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-
-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
 static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
+	queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
 
 int qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
@@ -209,8 +57,10 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	uint32_t i;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 
 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
 		queue_pair_id, qat_dev->qat_dev_id, qat_dev->qat_dev_gen);
@@ -222,7 +72,10 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}
 
-	if (pci_dev->mem_resource[0].addr == NULL) {
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_transport_bar,
+		-ENOTSUP);
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -246,7 +99,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}
 
-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;
 
 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -273,10 +126,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}
 
-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -312,6 +161,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);
 
+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;
 
@@ -323,80 +174,13 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	return -EFAULT;
 }
 
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
-
-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
-
-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
-	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);
-
-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
-}
-
 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -456,19 +240,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}
 
 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -484,202 +255,216 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }
 
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+			int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG,
+				"re-use memzone already allocated for %s",
+				queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR,
+			"Incompatible memzone already allocated %s, size %u, socket %d. Requested size %u, socket %u",
+			queue_name, (uint32_t)mz->len,
+			mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }
 
-int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;
 
-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
-}
 
-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
-{
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
+				&qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
 	return 0;
 }
 
-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+
+static void qat_queue_delete(struct qat_queue *queue)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	const struct rte_memzone *mz;
+	int status = 0;
 
-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
 }
 
-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
 			void *base_addr, rte_spinlock_t *lock)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
+	return 0;
 }
 
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }
 
-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+				struct qat_queue *queue)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	return data & modulo_mask;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
+}
+
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp,	enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }
 
 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * The function pointer is validated during
+	 * initialization, so no NULL check is needed here.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }
 
+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * The function pointer is validated during
+	 * initialization, so no NULL check is needed here.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -703,15 +488,35 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;
 
-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int qat_qp_check_queue_alignment(uint64_t phys_addr,
+					uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+	uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
 
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}
+
+static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }
 
 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..ffba3a3615 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -24,6 +24,8 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
+#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
+
 /**
  * Structure with data needed for creation of queue pair.
  */
@@ -96,9 +98,6 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
 
@@ -129,11 +128,43 @@ qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
 
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+typedef void (*qat_qp_build_ring_base_t)
+		(void *, struct qat_queue *);
+typedef void (*qat_qp_adf_arb_enable_t)
+		(const struct qat_queue *, void *,
+			rte_spinlock_t *);
+typedef void (*qat_qp_adf_arb_disable_t)
+		(const struct qat_queue *, void *,
+			rte_spinlock_t *);
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp,
+					struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp,
+					struct qat_queue *q,
+					uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*,
+					void *, struct qat_qp *);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+};
+
+extern struct
+qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
 #endif /* _QAT_QP_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH 2/4] crypto/qat: isolate implementations of symmetric operations
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 1/4] common/qat: " Arek Kusztal
@ 2021-09-01 14:47 ` Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 3/4] crypto/qat: move capabilities initialization to spec files Arek Kusztal
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 96+ messages in thread
From: Arek Kusztal @ 2021-09-01 14:47 UTC (permalink / raw)
  To: dev; +Cc: gakhil, roy.fan.zhang, Arek Kusztal

This commit isolates the implementations of the symmetric part of the
QAT PMD. When changing or expanding a particular generation's code,
the other generations' code is left intact. Generation-specific code
in drivers/crypto is invisible to the other generations' code.
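
As an illustrative sketch of the dispatch pattern (simplified names,
not the exact driver symbols; assumes <stdint.h> and the usual DPDK
header for RTE_INIT), each generation's translation unit registers its
ops table from a constructor and common code indexes it by generation:

    enum gen { GEN1, GEN2, N_GENS };

    struct ops {
        int (*qp_setup)(uint16_t qp_id);
    };

    static struct ops *ops_table[N_GENS];

    static int qp_setup_gen1(uint16_t qp_id)
    {
        (void)qp_id;    /* generation 1 specific setup would go here */
        return 0;
    }

    static struct ops gen1_ops = {
        .qp_setup = qp_setup_gen1,
    };

    RTE_INIT(gen1_register)
    {
        ops_table[GEN1] = &gen1_ops;
    }

    /*
     * Common code then dispatches with no generation specific branches:
     * ops_table[qat_dev_gen]->qp_setup(qp_id);
     */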

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 drivers/common/qat/meson.build            |   6 +-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c |  55 ++++++++++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h |  15 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c |  80 +++++++++++++++
 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c |  39 +++++++
 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c |  82 +++++++++++++++
 drivers/crypto/qat/qat_sym_pmd.c          | 120 +++++-----------------
 drivers/crypto/qat/qat_sym_pmd.h          |  23 +++++
 8 files changed, 322 insertions(+), 98 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..de54004b4c 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -69,7 +69,11 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_sym_pmd_gen2.c',
+            'dev/qat_sym_pmd_gen3.c',
+            'dev/qat_sym_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..4a4dc9ab55
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_pmd_gen1.h"
+
+int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	struct qat_qp_config qat_qp_conf = { };
+	const struct qat_qp_hw_data *sym_hw_qps = NULL;
+	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
+
+	sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
+		.qp_hw_data[QAT_SERVICE_SYMMETRIC];
+	qat_qp_conf.hw = sym_hw_qps + qp_id;
+
+	return qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id);
+}
+
+struct rte_cryptodev_ops crypto_qat_gen1_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_sym_dev_config,
+		.dev_start		= qat_sym_dev_start,
+		.dev_stop		= qat_sym_dev_stop,
+		.dev_close		= qat_sym_dev_close,
+		.dev_infos_get		= qat_sym_dev_info_get,
+
+		.stats_get		= qat_sym_stats_get,
+		.stats_reset		= qat_sym_stats_reset,
+		.queue_pair_setup	= qat_sym_qp_setup_gen1,
+		.queue_pair_release	= qat_sym_qp_release,
+
+		/* Crypto related operations */
+		.sym_session_get_size	= qat_sym_session_get_private_size,
+		.sym_session_configure	= qat_sym_session_configure,
+		.sym_session_clear	= qat_sym_session_clear,
+
+		/* Raw data-path API related operations */
+		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+RTE_INIT(qat_sym_pmd_gen1_init)
+{
+	QAT_CRYPTODEV_OPS[QAT_GEN1] = &crypto_qat_gen1_ops;
+}
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h
new file mode 100644
index 0000000000..397faab0b0
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_SYM_PMD_GEN1_H_
+#define _QAT_SYM_PMD_GEN1_H_
+
+#include <rte_cryptodev.h>
+#include <stdint.h>
+
+int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id);
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
new file mode 100644
index 0000000000..6344d7de13
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
@@ -0,0 +1,80 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static int qat_sym_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	int ret;
+	struct qat_qp_config qat_qp_conf = { };
+	const struct qat_qp_hw_data *sym_hw_qps = NULL;
+	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
+	struct qat_qp *qp;
+
+	sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
+		.qp_hw_data[QAT_SERVICE_SYMMETRIC];
+	qat_qp_conf.hw = sym_hw_qps + qp_id;
+
+	ret = qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id);
+	if (ret)
+		return ret;
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_sym_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops crypto_qat_gen2_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_sym_dev_config,
+		.dev_start		= qat_sym_dev_start,
+		.dev_stop		= qat_sym_dev_stop,
+		.dev_close		= qat_sym_dev_close,
+		.dev_infos_get		= qat_sym_dev_info_get,
+
+		.stats_get		= qat_sym_stats_get,
+		.stats_reset		= qat_sym_stats_reset,
+		.queue_pair_setup	= qat_sym_qp_setup_gen2,
+		.queue_pair_release	= qat_sym_qp_release,
+
+		/* Crypto related operations */
+		.sym_session_get_size	= qat_sym_session_get_private_size,
+		.sym_session_configure	= qat_sym_session_configure,
+		.sym_session_clear	= qat_sym_session_clear,
+
+		/* Raw data-path API related operations */
+		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+RTE_INIT(qat_sym_pmd_gen2)
+{
+	QAT_CRYPTODEV_OPS[QAT_GEN2] = &crypto_qat_gen2_ops;
+}
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
new file mode 100644
index 0000000000..f8488cd122
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_pmd_gen1.h"
+
+struct rte_cryptodev_ops crypto_qat_gen3_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_sym_dev_config,
+		.dev_start		= qat_sym_dev_start,
+		.dev_stop		= qat_sym_dev_stop,
+		.dev_close		= qat_sym_dev_close,
+		.dev_infos_get		= qat_sym_dev_info_get,
+
+		.stats_get		= qat_sym_stats_get,
+		.stats_reset		= qat_sym_stats_reset,
+		.queue_pair_setup	= qat_sym_qp_setup_gen1,
+		.queue_pair_release	= qat_sym_qp_release,
+
+		/* Crypto related operations */
+		.sym_session_get_size	= qat_sym_session_get_private_size,
+		.sym_session_configure	= qat_sym_session_configure,
+		.sym_session_clear	= qat_sym_session_clear,
+
+		/* Raw data-path API related operations */
+		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+RTE_INIT(qat_sym_pmd_gen3_init)
+{
+	QAT_CRYPTODEV_OPS[QAT_GEN3] = &crypto_qat_gen3_ops;
+}
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
new file mode 100644
index 0000000000..9470e78fb1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
@@ -0,0 +1,82 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include "qat_sym_pmd.h"
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+
+static int
+qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (qat_dev->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf,
+	int socket_id)
+{
+	int ret = 0;
+	int ring_pair;
+	struct qat_qp_config qat_qp_conf = { };
+	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
+
+	ring_pair =
+		qat_select_valid_queue(qat_sym_private->qat_dev, qp_id,
+			QAT_SERVICE_SYMMETRIC);
+	if (ring_pair < 0) {
+		QAT_LOG(ERR,
+			"qp_id %u invalid for this device, not enough services allocated for GEN4 device",
+			qp_id);
+		return -EINVAL;
+	}
+	qat_qp_conf.hw =
+		&qat_dev->qp_gen4_data[ring_pair][0];
+
+	ret = qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id);
+
+	return ret;
+}
+
+struct rte_cryptodev_ops crypto_qat_gen4_ops = {
+
+		/* Device related operations */
+		.dev_configure		= qat_sym_dev_config,
+		.dev_start		= qat_sym_dev_start,
+		.dev_stop		= qat_sym_dev_stop,
+		.dev_close		= qat_sym_dev_close,
+		.dev_infos_get		= qat_sym_dev_info_get,
+
+		.stats_get		= qat_sym_stats_get,
+		.stats_reset		= qat_sym_stats_reset,
+		.queue_pair_setup	= qat_sym_qp_setup_gen4,
+		.queue_pair_release	= qat_sym_qp_release,
+
+		/* Crypto related operations */
+		.sym_session_get_size	= qat_sym_session_get_private_size,
+		.sym_session_configure	= qat_sym_session_configure,
+		.sym_session_clear	= qat_sym_session_clear,
+
+		/* Raw data-path API related operations */
+		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+RTE_INIT(qat_sym_pmd_gen4_init)
+{
+	QAT_CRYPTODEV_OPS[QAT_GEN4] = &crypto_qat_gen4_ops;
+}
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 6868e5f001..ee1a7e52bc 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -16,6 +16,7 @@
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
+#include "qat_qp.h"
 
 #define MIXED_CRYPTO_MIN_FW_VER 0x04090000
 
@@ -59,26 +60,25 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
+struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[QAT_DEV_GEN_NO];
 
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
 		__rte_unused struct rte_cryptodev_config *config)
 {
 	return 0;
 }
 
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
+int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
 {
 	return 0;
 }
 
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
+void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
 {
 	return;
 }
 
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
+int qat_sym_dev_close(struct rte_cryptodev *dev)
 {
 	int i, ret;
 
@@ -91,7 +91,7 @@ static int qat_sym_dev_close(struct rte_cryptodev *dev)
 	return 0;
 }
 
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
+void qat_sym_dev_info_get(struct rte_cryptodev *dev,
 			struct rte_cryptodev_info *info)
 {
 	struct qat_sym_dev_private *internals = dev->data->dev_private;
@@ -108,7 +108,7 @@ static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
 	}
 }
 
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
+void qat_sym_stats_get(struct rte_cryptodev *dev,
 		struct rte_cryptodev_stats *stats)
 {
 	struct qat_common_stats qat_stats = {0};
@@ -127,7 +127,7 @@ static void qat_sym_stats_get(struct rte_cryptodev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }
 
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
+void qat_sym_stats_reset(struct rte_cryptodev *dev)
 {
 	struct qat_sym_dev_private *qat_priv;
 
@@ -141,7 +141,7 @@ static void qat_sym_stats_reset(struct rte_cryptodev *dev)
 
 }
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
 {
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
@@ -156,70 +156,46 @@ static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
 
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
+int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, struct qat_qp_config qat_qp_conf,
 	int socket_id)
 {
 	struct qat_qp *qp;
 	int ret = 0;
 	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
 
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
+	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
 
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
 		if (ret < 0)
-			return ret;
+			return -EBUSY;
 	}
 	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
 		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	if (qat_qp_conf.cookie_size == 0)
+		qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
+	if (qat_qp_conf.nb_descriptors == 0)
+		qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
 	qat_qp_conf.service_str = "sym";
 
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
 
 	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
+	qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id] = *qp_addr;
 
 	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+	qp->min_enq_burst_threshold = qat_sym_private->min_enq_burst_threshold;
 
 	for (i = 0; i < qp->nb_descriptors; i++) {
 
@@ -240,61 +216,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_sym_op_cookie,
 				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
 	}
 
 	return ret;
 }
 
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
-
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
 #ifdef RTE_LIB_SECURITY
 static const struct rte_security_capability *
 qat_security_cap_get(void *device __rte_unused)
@@ -397,7 +323,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = QAT_CRYPTODEV_OPS[qat_pci_dev->qat_dev_gen];
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..f676a296e4 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -15,6 +15,7 @@
 
 #include "qat_sym_capabilities.h"
 #include "qat_device.h"
+#include "qat_qp.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_SYM_PMD	crypto_qat
@@ -25,6 +26,8 @@
 
 extern uint8_t qat_sym_driver_id;
 
+extern struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[];
+
 /** private data structure for a QAT device.
  * This QAT device is a device offering only symmetric crypto service,
  * there can be one of these on each qat_pci_device (VF).
@@ -49,5 +52,25 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, struct qat_qp_config qat_qp_conf,
+	int socket_id);
+
+void qat_sym_stats_reset(struct rte_cryptodev *dev);
+
+void qat_sym_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void qat_sym_dev_info_get(struct rte_cryptodev *dev,
+			struct rte_cryptodev_info *info);
+
+int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config);
+int qat_sym_dev_close(struct rte_cryptodev *dev);
+void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev);
+int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
-- 
2.30.2


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH 3/4] crypto/qat: move capabilities initialization to spec files
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 1/4] common/qat: " Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 2/4] crypto/qat: isolate implementations of symmetric operations Arek Kusztal
@ 2021-09-01 14:47 ` Arek Kusztal
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 4/4] common/qat: add extra data to qat pci dev Arek Kusztal
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 96+ messages in thread
From: Arek Kusztal @ 2021-09-01 14:47 UTC (permalink / raw)
  To: dev; +Cc: gakhil, roy.fan.zhang, Arek Kusztal

Move the static capabilities structs of the particular generations into
separate translation units so that they are isolated from each other.
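
With this layout, capabilities for a hypothetical new generation can be
added in one new translation unit without touching the common code. A
sketch reusing the types added by this patch (QAT_GENX and the genX
names are placeholders, not real symbols):

    static struct rte_cryptodev_capabilities qat_genX_sym_capabilities[] = {
        /* generation specific capability macros go here */
        RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
    };

    static struct qat_capabilities_info
    get_capabilities_genX(struct qat_pci_device *qat_dev __rte_unused)
    {
        struct qat_capabilities_info capa_info = {
            .data = qat_genX_sym_capabilities,
            .size = sizeof(qat_genX_sym_capabilities),
        };
        return capa_info;
    }

    static struct qat_sym_pmd_dev_ops qat_sym_pmd_ops_genX = {
        .qat_sym_get_capabilities = get_capabilities_genX,
    };

    RTE_INIT(qat_sym_pmd_genX_init)
    {
        qat_sym_pmd_ops[QAT_GENX] = &qat_sym_pmd_ops_genX;
    }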

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 27 ++++++++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c | 25 ++++++++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c | 26 ++++++++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 24 +++++++-
 drivers/crypto/qat/qat_sym_pmd.c          | 68 +++++++----------------
 drivers/crypto/qat/qat_sym_pmd.h          | 19 ++++++-
 6 files changed, 135 insertions(+), 54 deletions(-)

diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 4a4dc9ab55..40ec77f846 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -8,6 +8,12 @@
 #include "qat_sym_session.h"
 #include "qat_sym.h"
 #include "qat_sym_pmd_gen1.h"
+#include "qat_sym_capabilities.h"
+
+static struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
+	QAT_BASE_GEN1_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
 
 int qat_sym_qp_setup_gen1(struct rte_cryptodev *dev, uint16_t qp_id,
 	const struct rte_cryptodev_qp_conf *qp_conf,
@@ -49,7 +55,24 @@ struct rte_cryptodev_ops crypto_qat_gen1_ops = {
 		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
 };
 
-RTE_INIT(qat_sym_pmd_gen1_init)
+static struct
+qat_capabilities_info get_capabilities_gen1(
+			struct qat_pci_device *qat_dev __rte_unused)
 {
-	QAT_CRYPTODEV_OPS[QAT_GEN1] = &crypto_qat_gen1_ops;
+	struct qat_capabilities_info capa_info;
+
+	capa_info.data = qat_gen1_sym_capabilities;
+	capa_info.size = sizeof(qat_gen1_sym_capabilities);
+	return capa_info;
 }
+
+static struct
+qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen1 = {
+	.qat_sym_get_capabilities	= get_capabilities_gen1,
+};
+
+RTE_INIT(qat_sym_pmd_gen1_init)
+{
+	QAT_CRYPTODEV_OPS[QAT_GEN1]	= &crypto_qat_gen1_ops;
+	qat_sym_pmd_ops[QAT_GEN1]	= &qat_sym_pmd_ops_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
index 6344d7de13..18dfca3a84 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
@@ -7,9 +7,16 @@
 #include "qat_sym_pmd.h"
 #include "qat_sym_session.h"
 #include "qat_sym.h"
+#include "qat_sym_capabilities.h"
 
 #define MIXED_CRYPTO_MIN_FW_VER 0x04090000
 
+static struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
+	QAT_BASE_GEN1_SYM_CAPABILITIES,
+	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
 static int qat_sym_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
 	const struct rte_cryptodev_qp_conf *qp_conf,
 	int socket_id)
@@ -74,7 +81,23 @@ struct rte_cryptodev_ops crypto_qat_gen2_ops = {
 		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
 };
 
+static struct
+qat_capabilities_info get_capabilities_gen2(
+			struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_gen2_sym_capabilities;
+	capa_info.size = sizeof(qat_gen2_sym_capabilities);
+	return capa_info;
+}
+
+static struct
+qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen2 = {
+	.qat_sym_get_capabilities	= get_capabilities_gen2,
+};
+
 RTE_INIT(qat_sym_pmd_gen2)
 {
-	QAT_CRYPTODEV_OPS[QAT_GEN2] = &crypto_qat_gen2_ops;
+	QAT_CRYPTODEV_OPS[QAT_GEN2]	= &crypto_qat_gen2_ops;
+	qat_sym_pmd_ops[QAT_GEN2]	= &qat_sym_pmd_ops_gen2;
 }
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
index f8488cd122..e914a09362 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
@@ -9,6 +9,13 @@
 #include "qat_sym.h"
 #include "qat_sym_pmd_gen1.h"
 
+static struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
+	QAT_BASE_GEN1_SYM_CAPABILITIES,
+	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
+	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
 struct rte_cryptodev_ops crypto_qat_gen3_ops = {
 
 		/* Device related operations */
@@ -33,7 +40,24 @@ struct rte_cryptodev_ops crypto_qat_gen3_ops = {
 		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
 };
 
+static struct
+qat_capabilities_info get_capabilities_gen3(
+			struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_gen3_sym_capabilities;
+	capa_info.size = sizeof(qat_gen3_sym_capabilities);
+	return capa_info;
+}
+
+static struct
+qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen3 = {
+	.qat_sym_get_capabilities	= get_capabilities_gen3,
+};
+
 RTE_INIT(qat_sym_pmd_gen3_init)
 {
-	QAT_CRYPTODEV_OPS[QAT_GEN3] = &crypto_qat_gen3_ops;
+	QAT_CRYPTODEV_OPS[QAT_GEN3]	= &crypto_qat_gen3_ops;
+	qat_sym_pmd_ops[QAT_GEN3]	= &qat_sym_pmd_ops_gen3;
 }
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
index 9470e78fb1..834ae88d38 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
@@ -8,6 +8,11 @@
 #include "qat_sym_session.h"
 #include "qat_sym.h"
 
+static struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
+	QAT_BASE_GEN4_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
 static int
 qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 			enum qat_service_type service_type)
@@ -76,7 +81,24 @@ struct rte_cryptodev_ops crypto_qat_gen4_ops = {
 		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
 };
 
+static struct
+qat_capabilities_info get_capabilities_gen4(
+			struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+
+	capa_info.data = qat_gen4_sym_capabilities;
+	capa_info.size = sizeof(qat_gen4_sym_capabilities);
+	return capa_info;
+}
+
+static struct
+qat_sym_pmd_dev_ops qat_sym_pmd_ops_gen4 = {
+	.qat_sym_get_capabilities	= get_capabilities_gen4,
+};
+
 RTE_INIT(qat_sym_pmd_gen4_init)
 {
-	QAT_CRYPTODEV_OPS[QAT_GEN4] = &crypto_qat_gen4_ops;
+	QAT_CRYPTODEV_OPS[QAT_GEN4]	= &crypto_qat_gen4_ops;
+	qat_sym_pmd_ops[QAT_GEN4]	= &qat_sym_pmd_ops_gen4;
 }
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index ee1a7e52bc..dc1dcbe34f 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,28 +22,9 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_capabilities_info qat_sym_capabilities[QAT_DEV_GEN_NO];
+struct qat_sym_pmd_dev_ops *qat_sym_pmd_ops[QAT_DEV_GEN_NO];
 
 #ifdef RTE_LIB_SECURITY
 static const struct rte_cryptodev_capabilities
@@ -62,6 +43,16 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 
 struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[QAT_DEV_GEN_NO];
 
+static struct
+qat_capabilities_info qat_sym_get_capa_info(
+		struct qat_pci_device *qat_dev)
+{
+	struct qat_sym_pmd_dev_ops *ops =
+			qat_sym_pmd_ops[qat_dev->qat_dev_gen];
+
+	return ops->qat_sym_get_capabilities(qat_dev);
+}
+
 int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
 		__rte_unused struct rte_cryptodev_config *config)
 {
@@ -83,7 +74,7 @@ int qat_sym_dev_close(struct rte_cryptodev *dev)
 	int i, ret;
 
 	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
+		ret = dev->dev_ops->queue_pair_release(dev, i);
 		if (ret < 0)
 			return ret;
 	}
@@ -171,7 +162,7 @@ int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
 		if (ret < 0)
 			return -EBUSY;
 	}
@@ -283,6 +274,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_sym_dev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -370,30 +362,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals->qat_dev = qat_pci_dev;
 
 	internals->sym_dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = qat_sym_get_capa_info(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index f676a296e4..a03d2a0f04 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -26,7 +26,24 @@
 
 extern uint8_t qat_sym_driver_id;
 
-extern struct rte_cryptodev_ops *QAT_CRYPTODEV_OPS[];
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+extern struct
+rte_cryptodev_ops *QAT_CRYPTODEV_OPS[];
+extern struct
+qat_capabilities_info qat_sym_capabilities[];
+
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+struct qat_sym_pmd_dev_ops {
+	get_capabilities_info_t qat_sym_get_capabilities;
+};
+
+extern struct qat_sym_pmd_dev_ops *qat_sym_pmd_ops[];
 
 /** private data structure for a QAT device.
  * This QAT device is a device offering only symmetric crypto service,
-- 
2.30.2


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH 4/4] common/qat: add extra data to qat pci dev
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
                   ` (2 preceding siblings ...)
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 3/4] crypto/qat: move capabilities initialization to spec files Arek Kusztal
@ 2021-09-01 14:47 ` Arek Kusztal
  2021-09-06 18:24 ` [dpdk-dev] [EXT] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Akhil Goyal
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
  5 siblings, 0 replies; 96+ messages in thread
From: Arek Kusztal @ 2021-09-01 14:47 UTC (permalink / raw)
  To: dev; +Cc: gakhil, roy.fan.zhang, Arek Kusztal

Add private data to the qat_pci_device struct that is visible only
to the specific generation it belongs to.
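
The layout is the common trailing-data idiom: the generation reports
its private size, one block is reserved for the common struct plus that
extra, and dev_private points just past the common part. A minimal
standalone sketch of the idea, with malloc() standing in for
rte_memzone_reserve():

    #include <stdlib.h>
    #include <string.h>

    struct pci_dev {
        int gen;
        void *dev_private;  /* generation specific data lives here */
    };

    static struct pci_dev *dev_alloc(size_t extra_size)
    {
        /* one allocation: common struct plus the generation extra */
        struct pci_dev *dev = malloc(sizeof(*dev) + extra_size);

        if (dev == NULL)
            return NULL;
        memset(dev, 0, sizeof(*dev) + extra_size);
        dev->dev_private = dev + 1; /* first byte past the common part */
        return dev;
    }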

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c     |  7 +++
 drivers/common/qat/dev/qat_dev_gen1.h     |  3 ++
 drivers/common/qat/dev/qat_dev_gen2.c     |  1 +
 drivers/common/qat/dev/qat_dev_gen3.c     |  1 +
 drivers/common/qat/dev/qat_dev_gen4.c     | 31 ++++++++++-
 drivers/common/qat/dev/qat_dev_gen4.h     | 18 +++++++
 drivers/common/qat/meson.build            |  2 +
 drivers/common/qat/qat_device.c           | 66 +++++++++++++++--------
 drivers/common/qat/qat_device.h           | 10 ++--
 drivers/common/qat/qat_qp.h               |  9 ----
 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c |  7 ++-
 11 files changed, 113 insertions(+), 42 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index 4d60c2a051..3c7a558959 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -227,11 +227,18 @@ qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
 	return 0;
 }
 
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
 	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
 	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
 	.qat_dev_read_config		= qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size		= qat_dev_get_extra_size_gen1,
 };
 
 RTE_INIT(qat_dev_gen_gen1_init)
diff --git a/drivers/common/qat/dev/qat_dev_gen1.h b/drivers/common/qat/dev/qat_dev_gen1.h
index 9bf4fcf01b..ec0af94655 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.h
+++ b/drivers/common/qat/dev/qat_dev_gen1.h
@@ -13,6 +13,9 @@
 extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE];
 
+int
+qat_dev_get_extra_size_gen1(void);
+
 int
 qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
 		enum qat_service_type service);
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index ad1b643e00..856463c06f 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -25,6 +25,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
 	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
 	.qat_dev_read_config		= qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size		= qat_dev_get_extra_size_gen1,
 };
 
 RTE_INIT(qat_dev_gen_gen2_init)
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index 407d21576b..237712f1ef 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -63,6 +63,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen1,
 	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen1,
 	.qat_dev_read_config		= qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size		= qat_dev_get_extra_size_gen1,
 };
 
 RTE_INIT(qat_dev_gen_gen3_init)
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 6394e17dde..aecdedf375 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,9 +10,27 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
+#include "qat_dev_gen4.h"
 
 #include <stdint.h>
 
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+enum qat_service_type qat_dev4_get_qp_serv(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair)
+{
+	return dev_extra->qp_gen4_data[ring_pair][0].service_type;
+}
+
+const struct qat_qp_hw_data *qat_dev4_get_hw(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair)
+{
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
 static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
 	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
@@ -38,10 +56,11 @@ qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
 		enum qat_service_type service)
 {
 	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
 	for (i = 0, count = 0; i < max_ops_per_srv; i++)
-		if (qat_dev->qp_gen4_data[i][0].service_type == service)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
 			count++;
 	return count;
 }
@@ -51,12 +70,13 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 {
 	int i = 0;
 	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	if (qat_query_svc(qat_dev, (uint8_t *)&svc))
 		return -EFAULT;
 	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 		struct qat_qp_hw_data *hw_data =
-			&qat_dev->qp_gen4_data[i][0];
+			&dev_extra->qp_gen4_data[i][0];
 		uint8_t svc1 = (svc >> (3 * i)) & 0x7;
 		enum qat_service_type service_type = QAT_SERVICE_INVALID;
 
@@ -239,11 +259,18 @@ qat_dev_get_misc_bar_gen4(
 	return 0;
 }
 
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
 	.qat_dev_reset_ring_pairs	= qat_reset_ring_pairs_gen4,
 	.qat_dev_get_transport_bar	= qat_dev_get_transport_bar_gen4,
 	.qat_dev_get_misc_bar		= qat_dev_get_misc_bar_gen4,
 	.qat_dev_read_config		= qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size		= qat_dev_get_extra_size_gen4,
 };
 
 RTE_INIT(qat_dev_gen_4_init)
diff --git a/drivers/common/qat/dev/qat_dev_gen4.h b/drivers/common/qat/dev/qat_dev_gen4.h
new file mode 100644
index 0000000000..f588354603
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GEN4_H_
+#define _QAT_DEV_GEN4_H_
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra;
+
+enum qat_service_type qat_dev4_get_qp_serv(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair);
+
+const struct qat_qp_hw_data *qat_dev4_get_hw(
+		struct qat_dev_gen4_extra *dev_extra, int ring_pair);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index de54004b4c..6c5db48944 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -9,6 +9,7 @@ endif
 
 qat_crypto = true
 qat_crypto_path = 'crypto/qat'
+qat_devs_path = 'dev'
 qat_crypto_relpath = '../../' + qat_crypto_path
 qat_compress = true
 qat_compress_path = 'compress/qat'
@@ -59,6 +60,7 @@ includes += include_directories(
         'qat_adf',
         qat_crypto_relpath,
         qat_compress_relpath,
+        qat_devs_path
 )
 
 if qat_compress
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 030624b46d..4a33a62824 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -51,6 +51,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -156,15 +166,38 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
 	struct qat_dev_hw_spec_funcs *ops_hw = NULL;
 	struct rte_mem_resource *mem_resource;
+	int extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -194,9 +227,15 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		QAT_LOG(ERR, "Reached maximum number of QAT devices");
 		return NULL;
 	}
-
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "Failed to get extra size for device QAT_%d",
+			qat_dev_id);
+		return NULL;
+	}
 	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+		sizeof(struct qat_pci_device) +
+			extra_size,
 		rte_socket_id(), 0);
 
 	if (qat_pci_devs[qat_dev_id].mz == NULL) {
@@ -207,30 +246,11 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 
 	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
 	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
-		return NULL;
-	}
+	qat_dev->qat_dev_gen = qat_dev_gen;
 
 	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
 	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_misc_bar, NULL);
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 531aa663ca..c9923cdc54 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -29,12 +29,14 @@ typedef int (*qat_dev_get_misc_bar_t)
 		(struct rte_mem_resource **, struct rte_pci_device *);
 typedef int (*qat_dev_read_config_t)
 		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
 
 struct qat_dev_hw_spec_funcs {
 	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
 	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
 	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
 	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
 };
 
 extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
@@ -75,9 +77,6 @@ struct qat_device_info {
 	 */
 };
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 extern struct qat_device_info qat_pci_devs[];
 
 struct qat_sym_dev_private;
@@ -126,11 +125,10 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Address per generation */
 };
 
 struct qat_gen_hw_data {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index ffba3a3615..4be54de2d9 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -38,15 +38,6 @@ struct qat_qp_hw_data {
 	uint16_t rx_msg_size;
 };
 
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
 /**
  * Structure with data needed for creation of queue pair.
  */
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
index 834ae88d38..f8f795301c 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen4.c
@@ -7,6 +7,7 @@
 #include "qat_sym_pmd.h"
 #include "qat_sym_session.h"
 #include "qat_sym.h"
+#include "qat_dev_gen4.h"
 
 static struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
 	QAT_BASE_GEN4_SYM_CAPABILITIES,
@@ -18,9 +19,10 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 			enum qat_service_type service_type)
 {
 	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		if (qat_dev->qp_gen4_data[i][0].service_type ==
+		if (qat_dev4_get_qp_serv(dev_extra, i) ==
 			service_type) {
 			if (valid_qps == qp_id)
 				return i;
@@ -39,6 +41,7 @@ static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id,
 	struct qat_qp_config qat_qp_conf = { };
 	struct qat_sym_dev_private *qat_sym_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_sym_private->qat_dev;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
 
 	ring_pair =
 		qat_select_valid_queue(qat_sym_private->qat_dev, qp_id,
@@ -50,7 +53,7 @@ static int qat_sym_qp_setup_gen4(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 	qat_qp_conf.hw =
-		&qat_dev->qp_gen4_data[ring_pair][0];
+		qat_dev4_get_hw(dev_extra, ring_pair);
 
 	ret = qat_sym_qp_setup(dev, qp_id, qp_conf, qat_qp_conf, socket_id);
 
-- 
2.30.2


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [EXT] [PATCH 0/4] drivers/qat: isolate implementations of qat generations
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
                   ` (3 preceding siblings ...)
  2021-09-01 14:47 ` [dpdk-dev] [PATCH 4/4] common/qat: add extra data to qat pci dev Arek Kusztal
@ 2021-09-06 18:24 ` Akhil Goyal
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
  5 siblings, 0 replies; 96+ messages in thread
From: Akhil Goyal @ 2021-09-06 18:24 UTC (permalink / raw)
  To: Arek Kusztal, dev; +Cc: roy.fan.zhang

> This patchset introduces new qat driver structure and updates
> existing symmetric crypto qat PMD.
> 
> The purpose of the change is to isolate QAT generation specific
> implementations from one to another.
> 
> It is expected the changes to the specific generation driver code does
> minimum impact to
> other generations' implementations. Also adding the support to new
> features or new qat
> generation hardware will have zero impact to existing functionalities.
> 
> Arek Kusztal (4):
>   common/qat: isolate implementations of qat generations
>   crypto/qat: isolate implementations of symmetric operations
>   crypto/qat: move capabilities initialization to spec files
>   common/qat: add extra data to qat pci dev
> 
>  drivers/common/qat/dev/qat_dev_gen1.c     | 252 +++++++++
>  drivers/common/qat/dev/qat_dev_gen1.h     |  55 ++
>  drivers/common/qat/dev/qat_dev_gen2.c     |  39 ++
>  drivers/common/qat/dev/qat_dev_gen3.c     |  77 +++
>  drivers/common/qat/dev/qat_dev_gen4.c     | 285 ++++++++++
>  drivers/common/qat/dev/qat_dev_gen4.h     |  18 +
>  drivers/common/qat/meson.build            |  12 +-
>  drivers/common/qat/qat_common.h           |   2 +
>  drivers/common/qat/qat_device.c           | 183 +++---
>  drivers/common/qat/qat_device.h           |  28 +-
>  drivers/common/qat/qat_qp.c               | 641 ++++++++--------------
>  drivers/common/qat/qat_qp.h               |  54 +-
>  drivers/crypto/qat/dev/qat_sym_pmd_gen1.c |  78 +++
>  drivers/crypto/qat/dev/qat_sym_pmd_gen1.h |  15 +
>  drivers/crypto/qat/dev/qat_sym_pmd_gen2.c | 103 ++++
>  drivers/crypto/qat/dev/qat_sym_pmd_gen3.c |  63 +++
>  drivers/crypto/qat/dev/qat_sym_pmd_gen4.c | 107 ++++
>  drivers/crypto/qat/qat_sym_pmd.c          | 188 ++-----
>  drivers/crypto/qat/qat_sym_pmd.h          |  40 ++
>  19 files changed, 1540 insertions(+), 700 deletions(-)
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen1.h
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
>  create mode 100644 drivers/common/qat/dev/qat_dev_gen4.h
>  create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
>  create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.h
>  create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen2.c
>  create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen3.c
>  create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen4.c

Please fix checkpatch issues.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 00/10] drivers/qat: isolate implementations of qat generations
  2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
                   ` (4 preceding siblings ...)
  2021-09-06 18:24 ` [dpdk-dev] [EXT] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Akhil Goyal
@ 2021-10-01 16:59 ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 01/10] common/qat: add gen specific data and function Fan Zhang
                     ` (10 more replies)
  5 siblings, 11 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patchset introduces a new QAT driver structure and updates the
existing symmetric crypto QAT PMD.

The purpose of the change is to isolate the QAT generation specific
implementations from one another.

Changes to a specific generation's driver code are expected to have
minimal impact on other generations' implementations. Likewise, adding
support for new features or new QAT generation hardware will have zero
impact on existing functionality.

Fan Zhang (10):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation
  doc: update release note

 doc/guides/rel_notes/release_21_11.rst        |    4 +
 drivers/common/qat/dev/qat_dev_gen1.c         |  255 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  301 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   58 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    1 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  300 ++++
 drivers/common/qat/qat_common.c               |    8 +
 drivers/common/qat/qat_common.h               |   16 +-
 drivers/common/qat/qat_device.c               |  185 +--
 drivers/common/qat/qat_device.h               |   42 +-
 drivers/common/qat/qat_qp.c                   |  664 ++++-----
 drivers/common/qat/qat_qp.h                   |   74 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  177 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  125 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  276 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   55 +-
 drivers/crypto/qat/qat_crypto.c               |  172 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   71 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 42 files changed, 3715 insertions(+), 2707 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 01/10] common/qat: add gen specific data and function
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 02/10] common/qat: add gen specific device implementation Fan Zhang
                     ` (9 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the data structures and function prototypes for
the different QAT generations.
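
For reference, the dispatch pattern these tables enable looks roughly
like the sketch below (the wrapper function is illustrative only and
not part of this patch):

	static int
	qat_dev_read_config(struct qat_pci_device *qat_dev)
	{
		struct qat_dev_hw_spec_funcs *ops =
			qat_dev_hw_spec[qat_dev->qat_dev_gen];

		if (ops == NULL || ops->qat_dev_read_config == NULL)
			return -ENOTSUP;
		return ops->qat_dev_read_config(qat_dev);
	}

Each generation fills one qat_dev_hw_spec_funcs instance and publishes
it in qat_dev_hw_spec[], so common code never branches on the device
generation directly.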

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/qat_common.c |  8 ++++++++
 drivers/common/qat/qat_common.h | 16 ++++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 4 files changed, 45 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..e813d5c165 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,14 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+/* Keep this in the same order as enum qat_service_type */
+const char *qat_service_type_str[] = {
+	"asym",
+	"sym",
+	"comp",
+	"invalid"
+};
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..55f1ab8611 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,26 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };
 
 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };
 
+extern const char *qat_service_type_str[];
+
+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +43,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };
 
-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
 
+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 02/10] common/qat: add gen specific device implementation
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 01/10] common/qat: add gen specific data and function Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 03/10] common/qat: add gen specific queue pair function Fan Zhang
                     ` (8 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT device configuration
implementation with separate per-generation files, providing
shared or individual implementations for each QAT generation.
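
With this split, enabling a future generation would only need a new
dev/qat_dev_genN.c that fills its own ops table and self-registers at
startup, along the lines of this hypothetical gen5 sketch (QAT_GEN5
and the gen5 callbacks do not exist in this series):

	static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen5 = {
		.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen5,
		.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
		.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen5,
		.qat_dev_read_config = qat_dev_read_config_gen5,
		.qat_dev_get_extra_size = qat_dev_get_extra_size_gen5,
	};

	RTE_INIT(qat_dev_gen_gen5_init)
	{
		qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5;
		qat_gen_config[QAT_GEN5].dev_gen = QAT_GEN5;
	}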

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  66 +++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 ++++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 ++++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 185 ++++++++++----------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 374 insertions(+), 121 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..d9e75fe9e2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pairs reset is not supported on base generations, continue
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have configuration,
+	 * Base generations do not have a configuration to read,
+	 * but set this pointer anyway so that we can distinguish
+	 * it from higher generations faultily set to NULL
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+		QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..959f5cb4a7
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -(EFAULT);
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		struct qat_qp_hw_data *hw_data =
+			&dev_extra->qp_gen4_data[i][0];
+		uint8_t svc1 = (svc >> (3 * i)) & 0x7;
+		enum qat_service_type service_type = QAT_SERVICE_INVALID;
+
+		if (svc1 == QAT_SVC_SYM) {
+			service_type = QAT_SERVICE_SYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered SYMMETRIC service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_COMPRESSION) {
+			service_type = QAT_SERVICE_COMPRESSION;
+			QAT_LOG(DEBUG,
+				"Discovered COMPRESSION service on bundle %d",
+				i);
+		} else if (svc1 == QAT_SVC_ASYM) {
+			service_type = QAT_SERVICE_ASYMMETRIC;
+			QAT_LOG(DEBUG,
+				"Discovered ASYMMETRIC service on bundle %d",
+				i);
+		} else {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -(EFAULT);
+		}
+
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..fc069d8867
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GEN_H_
+#define _QAT_DEV_GEN_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..4759c6c166 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,38 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw = NULL;
+	struct rte_mem_resource *mem_resource;
+	int extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,8 +228,13 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0)
+		return NULL;
+
 	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+		sizeof(struct qat_pci_device) +
+			extra_size,
 		rte_socket_id(), 0);
 
 	if (qat_pci_devs[qat_dev_id].mz == NULL) {
@@ -279,49 +245,31 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 
 	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
 	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
-		return NULL;
-	}
+	qat_dev->qat_dev_gen = qat_dev_gen;
 
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_misc_bar, NULL);
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		return NULL;
 	}
 
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
@@ -396,6 +344,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +361,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the pf driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +403,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..ce400b0dd2 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Address per generation */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 03/10] common/qat: add gen specific queue pair function
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 01/10] common/qat: add gen specific data and function Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 02/10] common/qat: add gen specific device implementation Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 04/10] common/qat: add gen specific queue implementation Fan Zhang
                     ` (7 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patch adds the queue pair data structures and function
prototypes for the different QAT generations.
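
Queue pair operations follow the same table-dispatch pattern as the
device operations, e.g. (a sketch of the shape of the wrappers; the
actual bodies are added in the next patch):

	static void
	qat_qp_csr_write_tail(struct qat_qp *qp, struct qat_queue *q)
	{
		struct qat_qp_hw_spec_funcs *ops =
			qat_qp_hw_spec[qp->qat_dev->qat_dev_gen];

		if (ops != NULL && ops->qat_qp_csr_write_tail != NULL)
			ops->qat_qp_csr_write_tail(qp, q);
	}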

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/common/qat/qat_qp.c |  3 +++
 drivers/common/qat/qat_qp.h | 45 +++++++++++++++++++++++++++++++++++++
 2 files changed, 48 insertions(+)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)
 
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..2de66b888b 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -8,6 +8,51 @@
 #include "adf_transport_access_macros.h"
 
 struct qat_pci_device;
+struct qat_qp_hw_data;
+struct qat_queue;
+struct qat_qp;
+
+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
 
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 04/10] common/qat: add gen specific queue implementation
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (2 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 03/10] common/qat: add gen specific queue pair function Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 05/10] compress/qat: add gen specific data and function Fan Zhang
                     ` (6 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT queue pair configuration
implementation with separate per-generation files, providing
shared or individual implementations for each QAT generation.
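
The net effect on common code is that generation branches collapse
into single indirect calls; for example, ring base programming in
qat_qp.c is now reached through a wrapper shaped roughly like this
(sketch; the exact body lives later in this patch):

	static int
	qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
			void *io_addr, struct qat_queue *queue)
	{
		struct qat_qp_hw_spec_funcs *ops =
			qat_qp_hw_spec[qat_dev->qat_dev_gen];

		if (ops == NULL || ops->qat_qp_build_ring_base == NULL)
			return -ENOTSUP;
		ops->qat_qp_build_ring_base(io_addr, queue);
		return 0;
	}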

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 193 ++++-
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 157 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  30 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   1 +
 drivers/common/qat/qat_qp.c                   | 664 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  29 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 9 files changed, 709 insertions(+), 471 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index d9e75fe9e2..f1f43c17b1 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
@@ -10,6 +11,195 @@
 
 #define ADF_ARB_REG_SLOT			0x1000
 
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+						txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -26,7 +216,7 @@ qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
 }
 
 int
-qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource  __rte_unused,
 		struct rte_pci_device *pci_dev __rte_unused)
 {
 	return -1;
@@ -59,6 +249,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
 
 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 
 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service  = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 
 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 959f5cb4a7..6a71fea2f5 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,7 +10,6 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"
 
 #include <stdint.h>
 
@@ -28,7 +27,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };
 
-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +38,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }
 
+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static int
 qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 {
@@ -94,6 +139,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }
 
+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +264,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }
 
-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +294,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
 
 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index fc069d8867..0a86b3e933 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,33 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);
 
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +55,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
 
-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..a40bec8218 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,7 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C
 
 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..f0d0ba4bb7 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128
 
-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];
 
-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,68 +34,21 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);
 
-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
@@ -213,7 +57,9 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;
 
 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,11 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}
 
-	if (pci_dev->mem_resource[0].addr == NULL) {
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_transport_bar,
+		-ENOTSUP);
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +100,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}
 
-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;
 
 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +127,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}
 
-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -316,6 +162,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);
 
+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;
 
@@ -327,80 +175,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	return -EFAULT;
 }
 
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
-
-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
-
-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
-	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);
-
-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
-}
-
 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +243,9 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
 
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_transport_bar,
+		-ENOTSUP);
 
 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +261,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }
 
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }
 
 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;
 
-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }
 
-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }
 
-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }
 
-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }
 
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }
 
-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
+}
+
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
 {
-	return data & modulo_mask;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }
 
 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * The function pointer is expected to be validated during
+	 * initialization, so it is not checked on this datapath.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }
 
+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * The function pointer is expected to be validated during
+	 * initialization, so it is not checked on this datapath.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +509,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;
 
-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
 
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}
+
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }
 
 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 2de66b888b..deba447c84 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -69,6 +69,12 @@ extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
+/* Queue pair setup error codes */
+#define QAT_NOMEM		1
+#define QAT_QP_INVALID_DESC_NO	2
+#define QAT_QP_BUSY		3
+#define QAT_PCI_NO_RESOURCE	4
+
 /**
  * Structure with data needed for creation of queue pair.
  */
@@ -81,15 +87,6 @@ struct qat_qp_hw_data {
 	uint16_t rx_msg_size;
 };
 
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
 /**
  * Structure with data needed for creation of queue pair.
  */
@@ -141,9 +138,6 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
 
@@ -163,7 +157,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,
 
 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);
 
 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -173,11 +171,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index efda921c05..71907a606d 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
 
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
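
The common thread in the hunks above is a per-generation function table: each
generation registers its implementation at constructor time, and the
generation-agnostic code indexes qat_qp_hw_spec[] by qat_dev_gen instead of
branching on QAT_GEN4. A minimal sketch of that flow follows, with the ops
struct abridged to a single callback and the gen1 handler name
(qat_gen1_qps_per_service) assumed for illustration:

/* Abridged sketch of the dispatch pattern; only one callback shown. */
struct qat_qp_hw_spec_funcs {
	int (*qat_qp_rings_per_service)(struct qat_pci_device *qat_dev,
			enum qat_service_type service);
	/* ... remaining per-generation queue-pair callbacks ... */
};

extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[QAT_N_GENS];

/* Each generation fills in its table and registers it at load time;
 * the handler name here is an assumed stand-in. */
static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
	.qat_qp_rings_per_service = qat_gen1_qps_per_service,
};

RTE_INIT(qat_dev_gen_gen1_init)
{
	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
}

/* Callers validate the pointer before dispatching, exactly as the
 * wrappers in qat_qp.c above do. */
int
qat_qps_per_service(struct qat_pci_device *qat_dev,
		enum qat_service_type service)
{
	struct qat_qp_hw_spec_funcs *ops =
		qat_qp_hw_spec[qat_dev->qat_dev_gen];

	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service, -ENOTSUP);
	return ops->qat_qp_rings_per_service(qat_dev, service);
}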

* [dpdk-dev] [PATCH v2 05/10] compress/qat: add gen specific data and function
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (3 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 04/10] common/qat: add gen specific queue implementation Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 06/10] compress/qat: add gen specific implementation Fan Zhang
                     ` (5 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch adds the compression data structures and function
prototypes required to support different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         |   2 -
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 300 ++++++++++++++++++
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 674 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
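
This patch replaces the single shared capability array and the per-generation
switch statements with a qat_comp_gen_dev_ops table indexed by generation. A
hypothetical gen1 registration is sketched below; the field names match the
hunks that follow, but the callback names on the right are illustrative
stand-ins, not taken from the patch:

/* Illustrative only: the callback names are assumed. */
static const struct qat_comp_gen_dev_ops qat_comp_gen1_ops = {
	.compressdev_ops = &qat_comp_ops_gen1,
	.qat_comp_get_capabilities = qat_comp_cap_get_gen1,
	.qat_comp_get_num_im_bufs_required = qat_comp_num_im_bufs_gen1,
	.qat_comp_get_ram_bank_flags = qat_comp_ram_bank_flags_gen1,
	.qat_comp_set_slice_cfg_word = qat_comp_slice_cfg_word_gen1,
	.qat_comp_get_feature_flags = qat_comp_feature_flags_gen1,
};

RTE_INIT(qat_comp_pmd_gen1_init)
{
	qat_comp_gen_dev_ops[QAT_GEN1] = qat_comp_gen1_ops;
}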

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index f1f43c17b1..ed4c4a2c03 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -252,6 +252,4 @@ RTE_INIT(qat_dev_gen_gen1_init)
 	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
-	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
-		QAT_NUM_INTERM_BUFS_GEN1;
 }
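
With comp_num_im_bufs_required dropped from qat_gen_config, the later hunks
fetch the intermediate-buffer count through the new table via
qat_comp_get_num_im_bufs_required(). Its definition is not shown in this
excerpt; presumably it is a thin accessor along these lines:

/* Assumed shape of the accessor called in qat_comp.c and qat_comp_pmd.c. */
static inline unsigned int
qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
{
	return qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required();
}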
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
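
Each BUILD_CONFIG helper above packs its struct's fields into a 32-bit word
with QAT_FIELD_SET, using the BITPOS/MASK pairs from the companion defs
header, and returns the word byte-swapped for the firmware descriptor. A
hypothetical caller, shown only to illustrate the intended use and built
entirely from the default-value macros defined in the defs header below:

/* Hypothetical usage sketch, not part of the patch. */
static uint32_t
comp_csr_lower_defaults(void)
{
	struct icp_qat_hw_comp_20_config_csr_lower csr = {
		.edmm = ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL,
		.algo = ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL,
		.sd = ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL,
		.hbs = ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL,
		.abd = ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL,
		.lllbd = ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL,
		.mmctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL,
		.hash_col = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL,
		.hash_update = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL,
		.skip_ctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL,
	};

	/* Every field lands at its BITPOS under its MASK; the result is
	 * byte-swapped (rte_bswap32) for the firmware descriptor. */
	return ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(csr);
}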
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..0c2e1603f0
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,300 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL	\
+		258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index ce400b0dd2..421c8299e0 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };
 
-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -140,7 +134,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };
 
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }
 
-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }
 
-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;
 
 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}
 
-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}
 
 	comp_req = &qat_xform->qat_comp_req_tmpl;
 
@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;
 
-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();
 
 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}
 
-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;
 
 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);
 
 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);
 
 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 
 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;
 
@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))
 
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}
 
 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;
 
 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46
 
 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */
 
 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)
 
 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@
 
 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
 
+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };
 
-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct
+qat_comp_capabilities_info qat_comp_get_capa_info(
+		enum qat_device_gen qat_dev_gen, struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };
 
-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }
 
-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)
 
 }
 
-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
 
-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;
 
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 
 
 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);
 
 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);
 
 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }
 
-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;
 
-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }
 
-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }
 
-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {
 
 }
 
-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }
 
-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }
 
-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);
 
+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;
 
-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;
 
 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;
 
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}
 
+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>
 
 #include "qat_device.h"
+#include "qat_comp.h"
 
 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat
 
+/* Capabilities information for a QAT compression device. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };
 
+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 06/10] compress/qat: add gen specific implementation
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (4 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 05/10] compress/qat: add gen specific data and function Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 07/10] crypto/qat: unified device private data structure Fan Zhang
                     ` (4 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT compression support
implementation with separate files that hold either shared or
generation-specific implementations for each QAT generation.
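
The resulting dispatch is table-driven: each generation file fills
its slot of the qat_comp_gen_dev_ops[] array from an RTE_INIT
constructor, and the common device-create path looks the callbacks
up by generation. A condensed sketch of that lookup, using the
names introduced in this series:

  const struct qat_comp_gen_dev_ops *ops =
          &qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];

  if (ops->compressdev_ops == NULL)
          return -ENOTSUP; /* generation has no compression service */

  compressdev->dev_ops = ops->compressdev_ops;
  compressdev->feature_flags = ops->qat_comp_get_feature_flags();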

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 177 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 483 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )
 
 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..0e1afe544a
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
+			" QAT device can't be used for Dynamic Deflate. "
+			"Did you really intend to do this?");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* The only valid mode in CPM 1.6 */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..35b75c56f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GENS_H_
+#define _QAT_COMP_PMD_GENS_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GENS_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 07/10] crypto/qat: unified device private data structure
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (5 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 06/10] compress/qat: add gen specific implementation Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 08/10] crypto/qat: add gen specific data and function Fan Zhang
                     ` (3 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.
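
The remaining per-service differences are selected at run time from
the new service_type field instead of duplicated sym/asym code
paths; condensed from the queue-pair setup code below:

  struct qat_cryptodev_private *qat_private = dev->data->dev_private;
  enum qat_service_type service_type = qat_private->service_type;

  qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
          sizeof(struct qat_sym_op_cookie) :
          sizeof(struct qat_asym_op_cookie);
  qat_qp_conf.service_str = qat_service_type_str[service_type];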

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 214 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 172 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 9 files changed, 342 insertions(+), 446 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 421c8299e0..5d5c64e168 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index e91bb0d317..63e61fa322 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
 struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -344,7 +200,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->asym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -362,7 +218,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..1393c0b745
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		info->driver_id = qat_sym_driver_id;
+		/* No limit of number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_type_str[service_type],
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return QAT_QP_BUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_type_str[service_type];
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * there can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device symmetric crypto capabilities */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 71907a606d..e03737c0d8 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	return 0;
 
@@ -509,7 +337,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 08/10] crypto/qat: add gen specific data and function
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (6 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 07/10] crypto/qat: unified device private data structure Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 09/10] crypto/qat: add gen specific implementation Fan Zhang
                     ` (2 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for the different QAT
generations.
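
Below is a minimal, self-contained sketch of the dispatch pattern this
patch introduces: per-generation ops live in an array indexed by the
device generation, and the create path looks them up at runtime instead
of switching on the generation. The names mirror the patch, but the
stub types, the gen1 callbacks and main() are simplified stand-ins for
illustration only, not driver code.

#include <stdio.h>
#include <stdint.h>

enum qat_device_gen { QAT_GEN1, QAT_GEN2, QAT_GEN3, QAT_GEN4, QAT_N_GENS };

/* Analogue of struct qat_crypto_gen_dev_ops: one slot per generation. */
struct gen_dev_ops {
	uint64_t (*get_feature_flags)(void);
	const char *(*get_capabilities)(void);
};

static struct gen_dev_ops gen_dev_ops[QAT_N_GENS];

static uint64_t gen1_flags(void) { return 0x1; }
static const char *gen1_caps(void) { return "gen1 capability table"; }

int main(void)
{
	enum qat_device_gen gen = QAT_GEN1;

	/* Registration normally happens in an RTE_INIT constructor. */
	gen_dev_ops[QAT_GEN1].get_feature_flags = gen1_flags;
	gen_dev_ops[QAT_GEN1].get_capabilities = gen1_caps;

	/* Device create: generations with no registered ops are rejected. */
	if (gen_dev_ops[gen].get_capabilities == NULL)
		return 1;

	printf("flags=%llx caps=%s\n",
	       (unsigned long long)gen_dev_ops[gen].get_feature_flags(),
	       gen_dev_ops[gen].get_capabilities());
	return 0;
}
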

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   62 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   52 +-
 9 files changed, 161 insertions(+), 1524 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 63e61fa322..64d12f0c1c 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
-		return -EFAULT;
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -(EFAULT);
 	}
+
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper macro to add an asym capability
+ * <name> <op type> <modlen (min, max, increment)>
+ **/
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index e03737c0d8..a029c2f03e 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -(EFAULT);
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support ensabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzon for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..28a6572f6d 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,59 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/* Macro to add a capability */
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 09/10] crypto/qat: add gen specific implementation
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (7 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 08/10] crypto/qat: add gen specific data and function Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT symmetric and asymmetric
support implementation with separate files providing shared
or generation-specific implementations for each QAT generation.
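
The split relies on each generation file self-registering its ops at
load time, so the common create path needs no per-generation switch.
Below is a minimal sketch of that idiom, assuming a GCC constructor
attribute as a stand-in for RTE_INIT; the struct, names and main() are
simplified placeholders, not the driver's actual types.

#include <stdio.h>

struct crypto_ops { const char *name; };

#define QAT_N_GENS 4
static const struct crypto_ops *sym_gen_dev_ops[QAT_N_GENS];

static const struct crypto_ops gen2_ops = { .name = "gen2" };

/* Runs before main(), like RTE_INIT(qat_sym_crypto_gen2_init). */
__attribute__((constructor))
static void qat_sym_crypto_gen2_init(void)
{
	sym_gen_dev_ops[1] = &gen2_ops;	/* index 1 == GEN2 */
}

int main(void)
{
	/* By the time main runs, every linked generation has registered. */
	printf("%s\n", sym_gen_dev_ops[1] ? sym_gen_dev_ops[1]->name
					  : "unregistered");
	return 0;
}
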

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 125 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_asym_pmd.h            |   1 +
 drivers/crypto/qat/qat_crypto.h              |   3 -
 9 files changed, 915 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..61250fe433
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(MODINV, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(RSA, \
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
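+	/* size is in bytes and includes the end-of-list terminator */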
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..8611ef6864
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
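+/* minimum firmware version (4.9.0) required for mixed crypto support */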
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(ZUC_EIA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
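+
+/*
+ * For reference, each QAT_SYM_*_CAP() entry above expands to a plain
+ * struct rte_cryptodev_capabilities initializer. A sketch of what
+ * QAT_SYM_CIPHER_CAP(AES_CBC, ...) resolves to, modeled on the expanded
+ * AES DOCSIS BPI capability later in this series (the exact expansion
+ * is defined by the capability macros elsewhere in the series):
+ *
+ *	{
+ *		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+ *		{.sym = {
+ *			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ *			{.cipher = {
+ *				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ *				.block_size = 16,
+ *				.key_size = {.min = 16, .max = 32, .increment = 8},
+ *				.iv_size = {.min = 16, .max = 16, .increment = 0}
+ *			}, }
+ *		}, }
+ *	},
+ */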
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		/* common queue pair setup failed */
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
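+	/* on success the firmware version is packed into ret as
+	 * major.minor.patch in the three high bytes, 0 means unknown
+	 */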
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..1af58b90ed
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(ZUC_EIA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 32, 32, 0), \
+		CAP_RNG(digest_size, 16, 16, 0), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
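+	/* no asymmetric crypto support is provided for GEN3 */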
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..e44f91e90a
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+/* GEN4 symmetric crypto capabilities */
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 32, 32, 0), \
+		CAP_RNG(digest_size, 16, 16, 0), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..c6aa305845
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
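+/* RTE_INIT constructors run at load time, so each generation registers
+ * its ops table before any QAT device is probed
+ */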
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..74c12b4bc8 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -18,6 +18,7 @@
  * Helper function to add an asym capability
  * <name> <op type> <modlen (min, max, increment)>
  **/
+
 #define QAT_ASYM_CAP(n, o, l, r, i)					\
 	{								\
 		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [PATCH v2 10/10] doc: update release note
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (8 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 09/10] crypto/qat: add gen specific implementation Fan Zhang
@ 2021-10-01 16:59   ` Fan Zhang
  2021-10-08 10:07     ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
  10 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-01 16:59 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patch updates the release note to describe the QAT
refactoring changes made.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 doc/guides/rel_notes/release_21_11.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 3ade7fe5ac..02a61be76b 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -157,6 +157,10 @@ API Changes
   the crypto/security operation. This field will be used to communicate
   events such as soft expiry with IPsec in lookaside mode.
 
+* common/qat: The QAT PMD is refactored to divide generation-specific control
+  path code into dedicated files. This change also applies to qat compression,
+  qat symmetric crypto, and qat asymmetric crypto.
+
 
 ABI Changes
 -----------
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [EXT] [PATCH v2 10/10] doc: update release note
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
@ 2021-10-08 10:07     ` Akhil Goyal
  2021-10-08 10:34       ` Zhang, Roy Fan
  0 siblings, 1 reply; 96+ messages in thread
From: Akhil Goyal @ 2021-10-08 10:07 UTC (permalink / raw)
  To: Fan Zhang, dev

> This patch updates the release note to describe the QAT
> refactoring changes made.
> 
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> ---
>  doc/guides/rel_notes/release_21_11.rst | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> index 3ade7fe5ac..02a61be76b 100644
> --- a/doc/guides/rel_notes/release_21_11.rst
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -157,6 +157,10 @@ API Changes
>    the crypto/security operation. This field will be used to communicate
>    events such as soft expiry with IPsec in lookaside mode.
> 
> +* common/qat: The QAT PMD is refactored to divide generation-specific control
> +  path code into dedicated files. This change also applies to qat compression,
> +  qat symmetric crypto, and qat asymmetric crypto.
> +
Will there be any change wrt user interface?
I don't see any API change in the series. It's all internal to the QAT PMDs and common part.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [EXT] [PATCH v2 10/10] doc: update release note
  2021-10-08 10:07     ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-08 10:34       ` Zhang, Roy Fan
  0 siblings, 0 replies; 96+ messages in thread
From: Zhang, Roy Fan @ 2021-10-08 10:34 UTC (permalink / raw)
  To: Akhil Goyal, dev

Hi,

Sorry, there isn't - will update in v3.

Regards,
Fan

> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Friday, October 8, 2021 11:07 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Subject: RE: [EXT] [PATCH v2 10/10] doc: update release note
> 
> > This patch updates the release note to describe the QAT
> > refactoring changes made.
> >
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > ---
> >  doc/guides/rel_notes/release_21_11.rst | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> > b/doc/guides/rel_notes/release_21_11.rst
> > index 3ade7fe5ac..02a61be76b 100644
> > --- a/doc/guides/rel_notes/release_21_11.rst
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -157,6 +157,10 @@ API Changes
> >    the crypto/security operation. This field will be used to communicate
> >    events such as soft expiry with IPsec in lookaside mode.
> >
> > +* common/qat: The QAT PMD is refactored to divide generation-specific
> > +  control path code into dedicated files. This change also applies to qat
> > +  compression, qat symmetric crypto, and qat asymmetric crypto.
> > +
> Will there be any change wrt user interface?
> I don't see any API change in the series. It's all internal to the QAT PMDs and
> common part.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations
  2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
                     ` (9 preceding siblings ...)
  2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
@ 2021-10-14 16:11   ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 01/10] common/qat: add gen specific data and function Fan Zhang
                       ` (10 more replies)
  10 siblings, 11 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patchset introduces new qat driver structure and updates
existing symmetric crypto qat PMD.

The purpose of the change is to isolate QAT generation specific
implementations from one to another.

It is expected the changes to the specific generation driver
code does minimum impact to other generations' implementations.
Also adding the support to new features or new qat generation
hardware will have zero impact to existing functionalities.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Arek Kusztal (1):
  common/qat: unify naming conventions in qat functions

Fan Zhang (9):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  255 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   58 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  300 ++++
 drivers/common/qat/qat_common.c               |   41 +-
 drivers/common/qat/qat_common.h               |   21 +-
 drivers/common/qat/qat_device.c               |  204 ++-
 drivers/common/qat/qat_device.h               |   71 +-
 drivers/common/qat/qat_logs.h                 |    6 +-
 drivers/common/qat/qat_qp.c                   |  667 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  177 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  125 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  294 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   56 +-
 drivers/crypto/qat/qat_crypto.c               |  172 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  448 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   71 +-
 drivers/crypto/qat/qat_sym_session.c          | 1058 +++++++-------
 42 files changed, 4327 insertions(+), 3320 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 01/10] common/qat: add gen specific data and function
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 02/10] common/qat: add gen specific device implementation Fan Zhang
                       ` (9 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the data structure and function prototypes for
different QAT generations.
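
A minimal sketch of how the common code is expected to dispatch through
this table (the wrapper name and the qat_dev_gen field are assumptions
for illustration; only struct qat_dev_hw_spec_funcs and the
qat_dev_hw_spec[] array come from this patch):

static int
qat_dev_read_config_dispatch(struct qat_pci_device *qat_dev)
{
	struct qat_dev_hw_spec_funcs *ops =
			qat_dev_hw_spec[qat_dev->qat_dev_gen];

	/* a generation that did not register itself leaves a NULL entry */
	if (ops == NULL || ops->qat_dev_read_config == NULL)
		return -ENOTSUP;
	return ops->qat_dev_read_config(qat_dev);
}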

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
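+	/* sentinel, used to size the per-generation dispatch tables */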
+	QAT_N_GENS
 };
 
 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };
 
+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };
 
-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
 
+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 02/10] common/qat: add gen specific device implementation
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 01/10] common/qat: add gen specific data and function Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 03/10] common/qat: add gen specific queue pair function Fan Zhang
                       ` (8 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT device configuration
implementation by separate files with shared or
individual implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  66 +++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 204 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 390 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..d9e75fe9e2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pair reset is not supported on base generations, nothing to do
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations have no configuration to read, but this
+	 * op is still implemented so that a NULL pointer can be
+	 * treated as a fault on higher generations.
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+		QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
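+		/* the service map packs a 3-bit service id per ring bundle */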
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when resetting bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..fc069d8867
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GEN_H_
+#define _QAT_DEV_GEN_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..418059ea83 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,40 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size;
+	int extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,63 +230,61 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: no pci pointer for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);
 
-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}
 
-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
 	}
 
+	/* No errors during allocation, attach the memzone to the device list */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;
 
@@ -396,6 +357,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +374,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +416,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 03/10] common/qat: add gen specific queue pair function
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 01/10] common/qat: add gen specific data and function Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 02/10] common/qat: add gen specific device implementation Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 04/10] common/qat: add gen specific queue implementation Fan Zhang
                       ` (7 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patch adds the queue pair data structure and function
prototypes for different QAT generations.
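
The core pattern throughout this series is a per-generation ops table
that each generation registers at constructor time and that generic
code looks up through a global array indexed by the device generation.
A minimal standalone sketch of that dispatch idea, using hypothetical
names rather than the driver's actual types:

	#include <stdio.h>

	enum gen { GEN1, GEN2, N_GENS };

	/* Hypothetical per-generation ops table, mirroring the shape of
	 * qat_qp_hw_spec_funcs: one entry per hardware generation. */
	struct qp_ops {
		int (*rings_per_service)(int service);
	};

	static int
	gen1_rings_per_service(int service)
	{
		return service == 0 ? 2 : 1;	/* made-up counts */
	}

	static const struct qp_ops gen1_ops = {
		.rings_per_service = gen1_rings_per_service,
	};

	/* Global dispatch array, filled at init time (RTE_INIT in the
	 * driver proper). */
	static const struct qp_ops *qp_ops_tab[N_GENS];

	static void
	register_gens(void)
	{
		qp_ops_tab[GEN1] = &gen1_ops;
		qp_ops_tab[GEN2] = &gen1_ops;	/* gen2 reuses gen1 ops */
	}

	/* Generic wrapper: validate the callback, then dispatch. */
	static int
	rings_per_service(enum gen g, int service)
	{
		if (qp_ops_tab[g] == NULL ||
				qp_ops_tab[g]->rings_per_service == NULL)
			return -1;	/* -ENOTSUP in the real driver */
		return qp_ops_tab[g]->rings_per_service(service);
	}

	int
	main(void)
	{
		register_gens();
		printf("rings: %d\n", rings_per_service(GEN2, 0));
		return 0;
	}

With this shape, supporting a new generation means defining and
registering one more ops structure; the generic wrappers stay unchanged.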

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)
 
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"
 
-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */
 
@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;
 
 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};
 
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 04/10] common/qat: add gen specific queue implementation
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (2 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 03/10] common/qat: add gen specific queue pair function Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 05/10] compress/qat: add gen specific data and function Fan Zhang
                       ` (6 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT queue pair configuration
implementation with separate per-generation files that provide
shared or individual implementations for each QAT generation.
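
The recurring transformation in this patch is replacing
"if (qat_dev_gen == QAT_GEN4)" branches with calls through the
generation's registered callbacks. Control-path wrappers validate the
function pointer before dispatching (RTE_FUNC_PTR_OR_ERR_RET in the
diff), while data-path helpers such as the tail/head CSR writes skip
the per-call check because the pointer was validated at queue setup, as
the comment in txq_write_tail below notes. A standalone sketch of that
split, with hypothetical names rather than the driver's actual types:

	#include <errno.h>
	#include <stddef.h>
	#include <stdio.h>

	/* Hypothetical stand-in for the driver's queue type. */
	struct queue { unsigned int tail; };

	struct hw_ops {
		int (*arb_enable)(struct queue *q);	/* control path */
		void (*write_tail)(struct queue *q);	/* data path */
	};

	static int
	gen1_arb_enable(struct queue *q)
	{
		(void)q;	/* no arbitration in this toy model */
		return 0;
	}

	static void
	gen1_write_tail(struct queue *q)
	{
		printf("tail <- %u\n", q->tail);	/* stands in for a CSR write */
	}

	static const struct hw_ops gen1_ops = {
		.arb_enable = gen1_arb_enable,
		.write_tail = gen1_write_tail,
	};

	static const struct hw_ops *ops_tab[] = { &gen1_ops };

	/* Control path: check the callback, fail with -ENOTSUP. */
	static int
	arb_enable(int gen, struct queue *q)
	{
		if (ops_tab[gen] == NULL || ops_tab[gen]->arb_enable == NULL)
			return -ENOTSUP;
		return ops_tab[gen]->arb_enable(q);
	}

	/* Data path: no per-call NULL check; the pointer was validated
	 * once when the queue pair was set up. */
	static inline void
	write_tail(int gen, struct queue *q)
	{
		ops_tab[gen]->write_tail(q);
	}

	int
	main(void)
	{
		struct queue q = { .tail = 32 };

		if (arb_enable(0, &q) == 0)
			write_tail(0, &q);
		return 0;
	}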

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 193 ++++-
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  30 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 667 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 710 insertions(+), 476 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index d9e75fe9e2..f1f43c17b1 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
@@ -10,6 +11,195 @@
 
 #define ADF_ARB_REG_SLOT			0x1000
 
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+					(ADF_ARB_REG_SLOT *
+						txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -26,7 +216,7 @@ qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
 }
 
 int
 qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
 		struct rte_pci_device *pci_dev __rte_unused)
 {
 	return -1;
@@ -59,6 +249,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
 
 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 
 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 
 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };
 
-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }
 
+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }
 
+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }
 
-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
 
 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index fc069d8867..0a86b3e933 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,33 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);
 
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +55,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
 
-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C
 
 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..39a329d5d8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128
 
-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];
 
-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,68 +34,21 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);
 
-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
@@ -213,7 +57,9 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;
 
 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}
 
-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT Internal Error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}
 
-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;
 
 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}
 
-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}
 
@@ -316,6 +168,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);
 
+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;
 
@@ -327,80 +181,13 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	return -EFAULT;
 }
 
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
-
-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
-
-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
-	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);
-
-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
-}
-
 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +247,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}
 
 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +262,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }
 
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }
 
 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;
 
-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }
 
-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }
 
-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }
 
-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }
 
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }
 
-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }
 
 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * The pointer is checked once at queue pair
+	 * initialization, so no guard is needed here.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
+}
+
+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * The pointer is checked once at queue pair
+	 * initialization, so no guard is needed here.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
 }
 
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +510,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;
 
-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}
 
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }
 
 uint16_t
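
The wrappers above all share one dispatch shape: index the per-generation
ops table, validate the hook, then forward the call. A minimal,
self-contained sketch of that pattern (the table, hook and error value
below are simplified stand-ins, not the driver's real definitions):

#include <stddef.h>
#include <stdio.h>

enum dev_gen { GEN1, GEN2, N_GENS };

struct qp_hw_spec_funcs {
	int (*rings_per_service)(int service);
};

static int gen1_rings_per_service(int service)
{
	(void)service;
	return 8;
}

static struct qp_hw_spec_funcs gen1_funcs = {
	.rings_per_service = gen1_rings_per_service,
};

/* Indexed by generation; normally filled by RTE_INIT constructors.
 * GEN2 is left NULL on purpose to show the -ENOTSUP path. */
static struct qp_hw_spec_funcs *qp_hw_spec[N_GENS] = {
	[GEN1] = &gen1_funcs,
};

static int qps_per_service(enum dev_gen gen, int service)
{
	struct qp_hw_spec_funcs *ops = qp_hw_spec[gen];

	/* Stands in for RTE_FUNC_PTR_OR_ERR_RET(..., -ENOTSUP). */
	if (ops == NULL || ops->rings_per_service == NULL)
		return -95; /* -ENOTSUP */
	return ops->rings_per_service(service);
}

int main(void)
{
	printf("GEN1: %d\n", qps_per_service(GEN1, 0)); /* 8 */
	printf("GEN2: %d\n", qps_per_service(GEN2, 0)); /* -95 */
	return 0;
}
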
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@
 
 #define QAT_QP_MIN_INFL_THRESHOLD	256
 
-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;
 
 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,
 
 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);
 
 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {
 
 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index efda921c05..71907a606d 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
 
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 05/10] compress/qat: add gen specific data and function
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (3 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 04/10] common/qat: add gen specific queue implementation Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 06/10] compress/qat: add gen specific implementation Fan Zhang
                       ` (5 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch adds the compression data structures and function
prototypes needed by the different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         |   2 -
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 300 ++++++++++++++++++
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 674 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

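For orientation, the ops table this patch introduces (struct
qat_comp_gen_dev_ops, declared in qat_comp_pmd.h below) is consumed by
the common code roughly as in the following sketch; this is illustrative
only, not code from the patch, and the NULL guard mirrors the checks
made at the real call sites:

static inline uint16_t
get_ram_bank_flags(enum qat_device_gen gen)
{
	get_comp_ram_bank_flags_t fn =
		qat_comp_gen_dev_ops[gen].qat_comp_get_ram_bank_flags;

	/* A generation without compression support leaves the hook
	 * NULL, so callers must guard before dispatching. */
	return (fn != NULL) ? fn() : 0;
}
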
diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index f1f43c17b1..ed4c4a2c03 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -252,6 +252,4 @@ RTE_INIT(qat_dev_gen_gen1_init)
 	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
-	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
-		QAT_NUM_INTERM_BUFS_GEN1;
 }
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
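
The builders in this header rely on QAT_FIELD_SET to place each enum
value into its bit slot before the final byte swap. A standalone sketch
of the packing semantics (FIELD_SET below mirrors what the QAT headers
are assumed to define and is reproduced purely for illustration):

#include <stdint.h>
#include <stdio.h>

/* Clear the slot, then OR in the masked value at its bit position. */
#define FIELD_SET(flags, val, bitpos, mask) \
	((flags) = (((flags) & ~((mask) << (bitpos))) | \
		(((val) & (mask)) << (bitpos))))

int main(void)
{
	uint32_t lower = 0;

	/* HW_COMP_FORMAT: DEFLATE (0x1), bitpos 5, mask 0x7 */
	FIELD_SET(lower, 0x1, 5, 0x7);
	/* SEARCH_DEPTH: level 1 (0x1), bitpos 8, mask 0xf */
	FIELD_SET(lower, 0x1, 8, 0xf);

	/* The real builders additionally rte_bswap32() the result. */
	printf("lower CSR = 0x%08x\n", lower); /* 0x00000120 */
	return 0;
}
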
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..0c2e1603f0
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,300 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL	\
+		258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
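
Taken together with the previous header, a caller typically seeds the
CSR struct from these _DEFAULT_VAL macros and lets the builder pack the
word. A hedged fragment (it assumes both gen4 comp headers are included
and is not the driver's actual session-setup code):

#include <stdint.h>
#include "icp_qat_hw_gen4_comp.h" /* also pulls in this defs header */

static uint32_t
build_default_comp_lower_csr(void)
{
	struct icp_qat_hw_comp_20_config_csr_lower csr = {
		.algo = ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL,
		.sd = ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL,
		.hbs = ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL,
		/* remaining fields keep their zero-valued defaults */
	};

	/* Packs the fields and byte-swaps for the firmware descriptor. */
	return ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(csr);
}
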
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };
 
-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };
 
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }
 
-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }
 
-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;
 
 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}
 
-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}
 
 	comp_req = &qat_xform->qat_comp_req_tmpl;
 
@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;
 
-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();
 
 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}
 
-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;
 
 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);
 
 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);
 
 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 
 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;
 
@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))
 
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}
 
 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;
 
 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46
 
 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */
 
 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)
 
 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@
 
 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
 
+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };
 
-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct
+qat_comp_capabilities_info qat_comp_get_capa_info(
+		enum qat_device_gen qat_dev_gen, struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };
 
-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }
 
-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)
 
 }
 
-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
 
-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;
 
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 
 
 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);
 
 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);
 
 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }
 
-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;
 
-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }
 
-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }
 
-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {
 
 }
 
-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }
 
-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }
 
-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);
 
+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;
 
-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;
 
 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;
 
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}
 
+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>
 
 #include "qat_device.h"
+#include "qat_comp.h"
 
 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat
 
+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };
 
+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 06/10] compress/qat: add gen specific implementation
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (4 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 05/10] compress/qat: add gen specific data and function Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 07/10] crypto/qat: unified device private data structure Fan Zhang
                       ` (4 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT compression support implementation
with separate files holding either shared or generation-specific
implementations for each QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 177 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 483 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

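Each new dev/qat_comp_pmd_genN.c file plugs its hooks into the shared
qat_comp_gen_dev_ops[] table at constructor time, mirroring the RTE_INIT
registration already used on the device side. An illustrative sketch
(the constructor name and the GEN2 reuse of the gen1 hooks are
assumptions, not copied from this patch):

RTE_INIT(qat_comp_pmd_gen2_init)
{
	/* Reuse gen1 hooks; a generation overrides only what differs. */
	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
			&qat_comp_ops_gen1;
	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
			qat_comp_cap_get_gen1;
	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
			qat_comp_get_ram_bank_flags_gen1;
	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
			qat_comp_set_slice_cfg_word_gen1;
	/* num-im-bufs and feature-flags hooks omitted for brevity. */
}
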
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )
 
 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..0e1afe544a
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,177 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
+			" QAT device can't be used for Dynamic Deflate. "
+			"Did you really intend to do this?");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* The only valid mode in CPM 1.6 */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..35b75c56f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GEN1_H_
+#define _QAT_COMP_PMD_GEN1_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GEN1_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 07/10] crypto/qat: unified device private data structure
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (5 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 06/10] compress/qat: add gen specific implementation Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function Fan Zhang
                       ` (3 subsequent siblings)
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
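
As a minimal sketch of the unification (the example_* name is an
assumption for this note; the fields and qat_service_get_str() are
introduced by this patch), any shared callback can now dispatch on the
stored service type instead of being duplicated per service:

static const char *
example_service_str(struct rte_cryptodev *dev)
{
	/* One private struct serves sym and asym; shared callbacks
	 * branch on the service_type discriminator set at create time. */
	struct qat_cryptodev_private *priv = dev->data->dev_private;

	return qat_service_get_str(priv->service_type);
}
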
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 172 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 361 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..0d488c9611 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);
 
+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index e91bb0d317..b03d8acbac 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -344,7 +200,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->asym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -362,7 +218,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..01d2439b93
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		info->driver_id = qat_sym_driver_id;
+		/* No limit of number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * there can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device symmetric crypto capabilities */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 71907a606d..e03737c0d8 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	return 0;
 
@@ -509,7 +337,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (6 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 07/10] crypto/qat: unified device private data structure Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-16 11:46       ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 09/10] crypto/qat: add gen specific implementation Fan Zhang
                       ` (2 subsequent siblings)
  10 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for the different QAT
generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
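
For illustration, a sketch of how a generation-specific source file is
expected to fill the qat_asym_gen_dev_ops table added by this patch; the
example_* gen1 symbols are assumptions for this note, not part of the
patch:

RTE_INIT(example_qat_asym_gen1_init)
{
	/* Populate the gen1 slot so qat_asym_dev_create() can fetch the
	 * right ops, capabilities and feature flags at probe time. */
	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
			&example_asym_crypto_ops_gen1;
	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
			example_asym_capa_get_gen1;
	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
			example_asym_feature_flags_get_gen1;
}
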
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   60 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   52 +-
 9 files changed, 160 insertions(+), 1523 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index b03d8acbac..83e046666c 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				qat_pci_dev->name);
 		return -EFAULT;
 	}
+
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper macro to add an asym capability:
+ * <name> <op type> <modlen (min, max, increment)>
+ */
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
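For reference, a capability table built with QAT_ASYM_CAP() looks
roughly like the sketch below; the values mirror the GEN1 modexp
entry registered later in this thread, and the array name is only
illustrative.

	static struct rte_cryptodev_capabilities asym_caps_sketch[] = {
		/* MODEX: no op-type flags, modlen 1..512 bytes, step 1 */
		QAT_ASYM_CAP(MODEX, 0, 1, 512, 1),
		RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
	};
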
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
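
Each generation fills its slot in a per-service array of these ops at
load time; a sketch of such a registration, mirroring the GEN1 asym
constructor that appears later in this thread, is:

	/* Illustrative constructor name; the registered symbols match
	 * the gen1 implementation added by the follow-up patches.
	 */
	RTE_INIT(qat_asym_gen1_init_sketch)
	{
		qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
				&qat_asym_crypto_ops_gen1;
		qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
				qat_asym_crypto_cap_get_gen1;
		qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
				qat_asym_crypto_feature_flags_get_gen1;
	}
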
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index e03737c0d8..abb80d4604 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support ensabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzon for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..28a6572f6d 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,59 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/* Macro to add a capability */
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
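
These helpers are intended to be combined with the CAP_SET()/CAP_RNG()
field initializers that qat_crypto_pmd_gens.h adds later in this
series; a representative AES-CBC entry, matching the GEN2 table in the
v3 patch below, is:

	/* block_size 16, key 16..32 step 8, IV fixed at 16 bytes */
	QAT_SYM_CIPHER_CAP(AES_CBC,
		CAP_SET(block_size, 16),
		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),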
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 09/10] crypto/qat: add gen specific implementation
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (7 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 10/10] common/qat: unify naming conventions in qat functions Fan Zhang
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT symmetric and asymmetric
support implementation with separate files that carry either a
shared or a generation-specific implementation for each QAT
generation.
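
At device-create time the common code simply indexes the per-service
dispatch tables by hardware generation; a minimal sketch of that
lookup, using the names from the common qat_sym_pmd.c code earlier in
this thread, is:

	/* Resolve the ops registered for this device's generation. */
	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
			&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];

	/* A generation that did not register the service leaves the
	 * slot NULL, so device creation can fail cleanly.
	 */
	if (gen_dev_ops->cryptodev_ops == NULL)
		return -EFAULT;

	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
	cryptodev->feature_flags =
			gen_dev_ops->get_feature_flags(qat_pci_dev);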

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 125 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_asym_pmd.h            |   1 +
 drivers/crypto/qat/qat_crypto.h              |   3 -
 9 files changed, 915 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..61250fe433
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(MODINV, \
+		0, 1, 512, 1), \
+	QAT_ASYM_CAP(RSA, \
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) | \
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) | \
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) | \
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..8611ef6864
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(ZUC_EIA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		/* Some error there */
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
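
Worth spelling out the registration pattern the per-generation files share: each
file fills its slot in the qat_sym_gen_dev_ops[]/qat_asym_gen_dev_ops[] arrays
from an RTE_INIT constructor, so the common create path can dispatch purely on
the detected hardware generation. The stand-alone model below uses illustrative
names and a plain GCC constructor attribute in place of RTE_INIT; it is a
sketch of the idea, not the driver's code:

#include <stdio.h>

enum gen { GEN1, GEN2, GEN3, GEN4, N_GENS };

struct gen_dev_ops {
	const char *(*get_name)(void);
};

static struct gen_dev_ops gen_dev_ops[N_GENS];

static const char *gen2_name(void) { return "gen2"; }

/* Runs before main(), exactly the role RTE_INIT() plays above. */
__attribute__((constructor))
static void gen2_init(void)
{
	gen_dev_ops[GEN2].get_name = gen2_name;
}

int main(void)
{
	enum gen detected = GEN2;	/* normally derived from the PCI ID */

	if (gen_dev_ops[detected].get_name == NULL)
		return 1;	/* generation not linked in */
	printf("ops for %s registered\n", gen_dev_ops[detected].get_name());
	return 0;
}
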
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..1af58b90ed
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(ZUC_EIA3, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 32, 32, 0), \
+		CAP_RNG(digest_size, 16, 16, 0), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
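
Note how every qat_sym_crypto_cap_get_genN() reports both the table address and
sizeof(table): the byte size is what the common code later uses to copy the
table into a memzone, and sizeof() only yields it because the getter sits in
the same translation unit as the array. A stand-alone sketch of the idiom,
with stand-in types:

#include <stddef.h>
#include <stdio.h>

struct cap { int algo; };	/* stand-in for rte_cryptodev_capabilities */
#define CAP_LIST_END { .algo = 0 }	/* stand-in for the end-of-list sentinel */

static struct cap caps_gen3[] = {
	{ .algo = 1 },
	{ .algo = 2 },
	CAP_LIST_END
};

struct cap_info { const struct cap *data; size_t size; };

static struct cap_info cap_get_gen3(void)
{
	/* sizeof() works here because caps_gen3 is a true array, not a
	 * pointer - the reason table and getter share one file. */
	struct cap_info info = { caps_gen3, sizeof(caps_gen3) };

	return info;
}

int main(void)
{
	struct cap_info info = cap_get_gen3();

	printf("%zu entries, %zu bytes\n",
			info.size / sizeof(struct cap), info.size);
	return 0;
}
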
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..e44f91e90a
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+/* GEN4 symmetric crypto capability list */
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 32, 32, 0), \
+		CAP_RNG(digest_size, 16, 16, 0), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..c6aa305845
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(digest_size, 1, 20, 1)), \
+	QAT_SYM_AEAD_CAP(AES_GCM, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AEAD_CAP(AES_CCM,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2), \
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)), \
+	QAT_SYM_AUTH_CAP(AES_GMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)), \
+	QAT_SYM_AUTH_CAP(AES_CMAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA1_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA224_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA256_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA384_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SHA512_HMAC, \
+		CAP_SET(block_size, 128), \
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(MD5_HMAC, \
+		CAP_SET(block_size, 64), \
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_AUTH_CAP(KASUMI_F9, \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_AUTH_CAP(NULL, \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size), \
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(AES_CBC, \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_CTR,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_XTS,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,  \
+		CAP_SET(block_size, 16), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)), \
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(NULL,  \
+		CAP_SET(block_size, 1), \
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)), \
+	QAT_SYM_CIPHER_CAP(3DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(3DES_CTR,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_CBC,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)), \
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,  \
+		CAP_SET(block_size, 8), \
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
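
qat_sym_create_security_gen1() only allocates and populates the context;
hooking it onto the cryptodev happens in the common create path, which is not
shown in this hunk. Below is a minimal stand-alone model of that hand-off,
with stand-in types and an assumed FF_SECURITY-style feature flag:

#include <stdio.h>
#include <stdlib.h>

struct security_ctx { void *device; const void *ops; unsigned int sess_cnt; };
struct cryptodev { void *security_ctx; unsigned long feature_flags; };

#define FF_SECURITY (1UL << 0)	/* stand-in for RTE_CRYPTODEV_FF_SECURITY */

static void *create_security_ctx(void *dev)
{
	struct security_ctx *ctx = malloc(sizeof(*ctx));

	if (ctx == NULL)
		return NULL;
	ctx->device = dev;
	ctx->ops = NULL;	/* would point at security_qat_ops_gen1 */
	ctx->sess_cnt = 0;
	return ctx;
}

int main(void)
{
	struct cryptodev dev = { NULL, 0 };

	dev.security_ctx = create_security_ctx(&dev);
	if (dev.security_ctx == NULL)
		return 1;
	dev.feature_flags |= FF_SECURITY;
	printf("security ctx attached\n");
	free(dev.security_ctx);
	return 0;
}
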
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index fd6b406248..74c12b4bc8 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -18,6 +18,7 @@
  * Helper function to add an asym capability
  * <name> <op type> <modlen (min, max, increment)>
  **/
+
 #define QAT_ASYM_CAP(n, o, l, r, i)					\
 	{								\
 		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v3 10/10] common/qat: unify naming conventions in qat functions
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (8 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 09/10] crypto/qat: add gen specific implementation Fan Zhang
@ 2021-10-14 16:11     ` Fan Zhang
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
  10 siblings, 0 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-14 16:11 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Arek Kusztal, Fan Zhang, Kai Ji

From: Arek Kusztal <arkadiuszx.kusztal@intel.com>

This patch unifies naming conventions across the QAT PMD
files, which makes the code easier to maintain and extend.
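
The convention is mechanical: generic parameter and local names gain a prefix
stating what the object is. In qat_common.c below, dev becomes qat_dev and
service becomes qat_service, while the crypto PMDs rename the private-data
local internals to qat_crypto. Condensed from the hunks that follow:

/* before */
void qat_stats_reset(struct qat_pci_device *dev,
		enum qat_service_type service);

/* after */
void qat_stats_reset(struct qat_pci_device *qat_dev,
		enum qat_service_type qat_service);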

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/qat_common.c      |   30 +-
 drivers/common/qat/qat_common.h      |    4 +-
 drivers/common/qat/qat_device.h      |   76 +-
 drivers/common/qat/qat_logs.h        |    6 +-
 drivers/crypto/qat/qat_asym_pmd.c    |   28 +-
 drivers/crypto/qat/qat_asym_pmd.h    |    7 +-
 drivers/crypto/qat/qat_crypto.c      |   52 +-
 drivers/crypto/qat/qat_sym_pmd.c     |   28 +-
 drivers/crypto/qat/qat_sym_session.c | 1057 +++++++++++++-------------
 9 files changed, 643 insertions(+), 645 deletions(-)

diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 59e7e02622..774becee2e 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -7,9 +7,9 @@
 #include "qat_logs.h"
 
 const char *
-qat_service_get_str(enum qat_service_type type)
+qat_service_get_str(enum qat_service_type qat_service)
 {
-	switch (type) {
+	switch (qat_service) {
 	case QAT_SERVICE_SYMMETRIC:
 		return "sym";
 	case QAT_SERVICE_ASYMMETRIC:
@@ -84,24 +84,24 @@ qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 	return res;
 }
 
-void qat_stats_get(struct qat_pci_device *dev,
+void qat_stats_get(struct qat_pci_device *qat_dev,
 		struct qat_common_stats *stats,
-		enum qat_service_type service)
+		enum qat_service_type qat_service)
 {
 	int i;
 	struct qat_qp **qp;
 
-	if (stats == NULL || dev == NULL || service >= QAT_SERVICE_INVALID) {
+	if (stats == NULL || qat_dev == NULL || qat_service >= QAT_SERVICE_INVALID) {
 		QAT_LOG(ERR, "invalid param: stats %p, dev %p, service %d",
-				stats, dev, service);
+				stats, qat_dev, qat_service);
 		return;
 	}
 
-	qp = dev->qps_in_use[service];
+	qp = qat_dev->qps_in_use[qat_service];
 	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
 		if (qp[i] == NULL) {
 			QAT_LOG(DEBUG, "Service %d Uninitialised qp %d",
-					service, i);
+					qat_service, i);
 			continue;
 		}
 
@@ -115,27 +115,27 @@ void qat_stats_get(struct qat_pci_device *dev,
 	}
 }
 
-void qat_stats_reset(struct qat_pci_device *dev,
-		enum qat_service_type service)
+void qat_stats_reset(struct qat_pci_device *qat_dev,
+		enum qat_service_type qat_service)
 {
 	int i;
 	struct qat_qp **qp;
 
-	if (dev == NULL || service >= QAT_SERVICE_INVALID) {
+	if (qat_dev == NULL || qat_service >= QAT_SERVICE_INVALID) {
 		QAT_LOG(ERR, "invalid param: dev %p, service %d",
-				dev, service);
+				qat_dev, qat_service);
 		return;
 	}
 
-	qp = dev->qps_in_use[service];
+	qp = qat_dev->qps_in_use[qat_service];
 	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
 		if (qp[i] == NULL) {
 			QAT_LOG(DEBUG, "Service %d Uninitialised qp %d",
-					service, i);
+					qat_service, i);
 			continue;
 		}
 		memset(&(qp[i]->stats), 0, sizeof(qp[i]->stats));
 	}
 
-	QAT_LOG(DEBUG, "QAT: %d stats cleared", service);
+	QAT_LOG(DEBUG, "QAT: %d stats cleared", qat_service);
 }
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 0d488c9611..92cc584c67 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -84,11 +84,11 @@ qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
 		const uint16_t max_segs);
 void
-qat_stats_get(struct qat_pci_device *dev,
+qat_stats_get(struct qat_pci_device *qat_dev,
 		struct qat_common_stats *stats,
 		enum qat_service_type service);
 void
-qat_stats_reset(struct qat_pci_device *dev,
+qat_stats_reset(struct qat_pci_device *qat_dev,
 		enum qat_service_type service);
 
 const char *
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 85fae7b7c7..9cd2236fb7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,33 +21,8 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
-/**
- * Function prototypes for GENx specific device operations.
- **/
-typedef int (*qat_dev_reset_ring_pairs_t)
-		(struct qat_pci_device *);
-typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
-		(struct rte_pci_device *);
-typedef int (*qat_dev_get_misc_bar_t)
-		(struct rte_mem_resource **, struct rte_pci_device *);
-typedef int (*qat_dev_read_config_t)
-		(struct qat_pci_device *);
-typedef int (*qat_dev_get_extra_size_t)(void);
-
-struct qat_dev_hw_spec_funcs {
-	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
-	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
-	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
-	qat_dev_read_config_t		qat_dev_read_config;
-	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
-};
-
-extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
-
-struct qat_dev_cmd_param {
-	const char *name;
-	uint16_t val;
-};
+struct qat_cryptodev_private;
+struct qat_comp_dev_private;
 
 struct qat_device_info {
 	const struct rte_memzone *mz;
@@ -74,11 +49,6 @@ struct qat_device_info {
 	 */
 };
 
-extern struct qat_device_info qat_pci_devs[];
-
-struct qat_cryptodev_private;
-struct qat_comp_dev_private;
-
 /*
  * This struct holds all the data about a QAT pci device
  * including data about all services it supports.
@@ -142,7 +112,10 @@ struct qat_pf2vf_dev {
 	uint32_t pf2vf_data_mask;
 };
 
-extern struct qat_gen_hw_data qat_gen_config[];
+struct qat_dev_cmd_param {
+	const char *name;
+	uint16_t val;
+};
 
 struct qat_pci_device *
 qat_pci_device_allocate(struct rte_pci_device *pci_dev,
@@ -153,24 +126,49 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev);
 
 /* declaration needed for weak functions */
 int
-qat_sym_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
+qat_sym_dev_create(struct qat_pci_device *qat_dev __rte_unused,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
 
 int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
+qat_asym_dev_create(struct qat_pci_device *qat_dev __rte_unused,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
 
 int
-qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
+qat_sym_dev_destroy(struct qat_pci_device *qat_dev __rte_unused);
 
 int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
+qat_asym_dev_destroy(struct qat_pci_device *qat_dev __rte_unused);
 
 int
-qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
+qat_comp_dev_create(struct qat_pci_device *qat_dev __rte_unused,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
 
 int
-qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
+qat_comp_dev_destroy(struct qat_pci_device *qat_dev __rte_unused);
+
+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_gen_hw_data qat_gen_config[];
+extern struct qat_device_info qat_pci_devs[];
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
 
 #endif /* _QAT_DEVICE_H_ */
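
The header reshuffle works because C only needs a struct's full definition
where its size or members are used; members and parameters that are mere
pointers get by with the forward declarations now placed at the top
(struct qat_cryptodev_private, struct qat_comp_dev_private). A minimal
stand-alone illustration with made-up names:

/* "device.h": only pointers to the private struct are stored, so a
 * forward declaration suffices and no crypto header is needed here. */
struct crypto_private;	/* full definition lives in the crypto driver */

struct pci_device {
	struct crypto_private *sym_dev;	/* pointer: size not required */
};

/* "crypto.c" supplies the real definition: */
struct crypto_private {
	int dev_id;
};

int main(void)
{
	struct crypto_private priv = { 42 };
	struct pci_device dev = { &priv };

	return dev.sym_dev->dev_id == 42 ? 0 : 1;
}
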
diff --git a/drivers/common/qat/qat_logs.h b/drivers/common/qat/qat_logs.h
index 2e4d3945cb..b7afab1216 100644
--- a/drivers/common/qat/qat_logs.h
+++ b/drivers/common/qat/qat_logs.h
@@ -5,9 +5,6 @@
 #ifndef _QAT_LOGS_H_
 #define _QAT_LOGS_H_
 
-extern int qat_gen_logtype;
-extern int qat_dp_logtype;
-
 #define QAT_LOG(level, fmt, args...)			\
 	rte_log(RTE_LOG_ ## level, qat_gen_logtype,		\
 			"%s(): " fmt "\n", __func__, ## args)
@@ -30,4 +27,7 @@ int
 qat_hexdump_log(uint32_t level, uint32_t logtype, const char *title,
 		const void *buf, unsigned int len);
 
+extern int qat_gen_logtype;
+extern int qat_dp_logtype;
+
 #endif /* _QAT_LOGS_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 83e046666c..c4668cd0e0 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -102,7 +102,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_cryptodev_private *internals;
+	struct qat_cryptodev_private *qat_crypto;
 	uint64_t capa_size;
 
 	if (gen_dev_ops->cryptodev_ops == NULL) {
@@ -157,20 +157,20 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 			"QAT_ASYM_CAPA_GEN_%d",
 			qat_pci_dev->qat_dev_gen);
 
-	internals = cryptodev->data->dev_private;
-	internals->qat_dev = qat_pci_dev;
-	internals->dev_id = cryptodev->data->dev_id;
-	internals->service_type = QAT_SERVICE_ASYMMETRIC;
+	qat_crypto = cryptodev->data->dev_private;
+	qat_crypto->qat_dev = qat_pci_dev;
+	qat_crypto->dev_id = cryptodev->data->dev_id;
+	qat_crypto->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
 	capabilities = capa_info.data;
 	capa_size = capa_info.size;
 
-	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
-	if (internals->capa_mz == NULL) {
-		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+	qat_crypto->capa_mz = rte_memzone_lookup(capa_memz_name);
+	if (qat_crypto->capa_mz == NULL) {
+		qat_crypto->capa_mz = rte_memzone_reserve(capa_memz_name,
 				capa_size, rte_socket_id(), 0);
-		if (internals->capa_mz == NULL) {
+		if (qat_crypto->capa_mz == NULL) {
 			QAT_LOG(DEBUG,
 				"Error allocating memzone for capabilities, "
 				"destroying PMD for %s",
@@ -182,21 +182,21 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		}
 	}
 
-	memcpy(internals->capa_mz->addr, capabilities, capa_size);
-	internals->qat_dev_capabilities = internals->capa_mz->addr;
+	memcpy(qat_crypto->capa_mz->addr, capabilities, capa_size);
+	qat_crypto->qat_dev_capabilities = qat_crypto->capa_mz->addr;
 
 	while (1) {
 		if (qat_dev_cmd_param[i].name == NULL)
 			break;
 		if (!strcmp(qat_dev_cmd_param[i].name, ASYM_ENQ_THRESHOLD_NAME))
-			internals->min_enq_burst_threshold =
+			qat_crypto->min_enq_burst_threshold =
 					qat_dev_cmd_param[i].val;
 		i++;
 	}
 
-	qat_pci_dev->asym_dev = internals;
+	qat_pci_dev->asym_dev = qat_crypto;
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->dev_id);
+			cryptodev->data->name, qat_crypto->dev_id);
 	return 0;
 }
 
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 74c12b4bc8..3a18f0669e 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -37,10 +37,6 @@
 		}							\
 	}
 
-extern uint8_t qat_asym_driver_id;
-
-extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
-
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
@@ -52,4 +48,7 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+extern uint8_t qat_asym_driver_id;
+
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 01d2439b93..6922daaddb 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -44,15 +44,15 @@ void
 qat_cryptodev_info_get(struct rte_cryptodev *dev,
 		struct rte_cryptodev_info *info)
 {
-	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_crypto->qat_dev;
+	enum qat_service_type qat_service = qat_crypto->service_type;
 
 	if (info != NULL) {
 		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, service_type);
+			qat_qps_per_service(qat_dev, qat_service);
 		info->feature_flags = dev->feature_flags;
-		info->capabilities = qat_private->qat_dev_capabilities;
+		info->capabilities = qat_crypto->qat_dev_capabilities;
 		info->driver_id = qat_sym_driver_id;
 		/* No limit of number of sessions */
 		info->sym.max_nb_sessions = 0;
@@ -64,15 +64,15 @@ qat_cryptodev_stats_get(struct rte_cryptodev *dev,
 		struct rte_cryptodev_stats *stats)
 {
 	struct qat_common_stats qat_stats = {0};
-	struct qat_cryptodev_private *qat_priv;
+	struct qat_cryptodev_private *qat_crypto;
 
 	if (stats == NULL || dev == NULL) {
 		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
 		return;
 	}
-	qat_priv = dev->data->dev_private;
+	qat_crypto = dev->data->dev_private;
 
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	qat_stats_get(qat_crypto->qat_dev, &qat_stats, qat_crypto->service_type);
 	stats->enqueued_count = qat_stats.enqueued_count;
 	stats->dequeued_count = qat_stats.dequeued_count;
 	stats->enqueue_err_count = qat_stats.enqueue_err_count;
@@ -82,31 +82,31 @@ qat_cryptodev_stats_get(struct rte_cryptodev *dev,
 void
 qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
 {
-	struct qat_cryptodev_private *qat_priv;
+	struct qat_cryptodev_private *qat_crypto;
 
 	if (dev == NULL) {
 		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
 		return;
 	}
-	qat_priv = dev->data->dev_private;
+	qat_crypto = dev->data->dev_private;
 
-	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+	qat_stats_reset(qat_crypto->qat_dev, qat_crypto->service_type);
 
 }
 
 int
 qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
 {
-	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_crypto->qat_dev;
 	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-	enum qat_service_type service_type = qat_private->service_type;
+	enum qat_service_type qat_service = qat_crypto->service_type;
 
 	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
-			qat_service_get_str(service_type),
+			qat_service_get_str(qat_service),
 			queue_pair_id, dev->data->dev_id);
 
-	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+	qat_crypto->qat_dev->qps_in_use[qat_service][queue_pair_id] = NULL;
 
 	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
@@ -118,9 +118,9 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 {
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_crypto->qat_dev;
+	enum qat_service_type qat_service = qat_crypto->service_type;
 	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp *qp;
 	int ret = 0;
@@ -132,37 +132,37 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		if (ret < 0)
 			return -EBUSY;
 	}
-	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+	if (qp_id >= qat_qps_per_service(qat_dev, qat_service)) {
 		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, qat_service,
 			qp_id);
 	if (qat_qp_conf.hw == NULL) {
 		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
 		return -EINVAL;
 	}
 
-	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+	qat_qp_conf.cookie_size = qat_service == QAT_SERVICE_SYMMETRIC ?
 			sizeof(struct qat_sym_op_cookie) :
 			sizeof(struct qat_asym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = qat_service_get_str(service_type);
+	qat_qp_conf.service_str = qat_service_get_str(qat_service);
 
 	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
 
 	/* store a link to the qp in the qat_pci_device */
-	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+	qat_dev->qps_in_use[qat_service][qp_id] = *qp_addr;
 
 	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+	qp->min_enq_burst_threshold = qat_crypto->min_enq_burst_threshold;
 
 	for (i = 0; i < qp->nb_descriptors; i++) {
-		if (service_type == QAT_SERVICE_SYMMETRIC)
+		if (qat_service == QAT_SERVICE_SYMMETRIC)
 			qat_sym_init_op_cookie(qp->op_cookies[i]);
 		else
 			qat_asym_init_op_cookie(qp->op_cookies[i]);
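
One detail that survives the renaming in qat_cryptodev_qp_setup() above: the
op-cookie size is chosen per service at queue-pair setup time, so symmetric
and asymmetric queue pairs get differently sized cookies from the same code
path. A stand-alone model of that selection (sizes are made up):

#include <stdio.h>

enum service { SYM, ASYM };

struct sym_op_cookie { char raw[128]; };	/* illustrative sizes only */
struct asym_op_cookie { char raw[64]; };

static size_t cookie_size(enum service s)
{
	return s == SYM ? sizeof(struct sym_op_cookie)
			: sizeof(struct asym_op_cookie);
}

int main(void)
{
	printf("sym=%zu asym=%zu\n", cookie_size(SYM), cookie_size(ASYM));
	return 0;
}
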
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index abb80d4604..171e5bc661 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -86,7 +86,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_cryptodev_private *internals;
+	struct qat_cryptodev_private *qat_crypto;
 	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
 	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
@@ -166,20 +166,20 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			"QAT_SYM_CAPA_GEN_%d",
 			qat_pci_dev->qat_dev_gen);
 
-	internals = cryptodev->data->dev_private;
-	internals->qat_dev = qat_pci_dev;
-	internals->service_type = QAT_SERVICE_SYMMETRIC;
-	internals->dev_id = cryptodev->data->dev_id;
+	qat_crypto = cryptodev->data->dev_private;
+	qat_crypto->qat_dev = qat_pci_dev;
+	qat_crypto->service_type = QAT_SERVICE_SYMMETRIC;
+	qat_crypto->dev_id = cryptodev->data->dev_id;
 
 	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
 	capabilities = capa_info.data;
 	capa_size = capa_info.size;
 
-	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
-	if (internals->capa_mz == NULL) {
-		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+	qat_crypto->capa_mz = rte_memzone_lookup(capa_memz_name);
+	if (qat_crypto->capa_mz == NULL) {
+		qat_crypto->capa_mz = rte_memzone_reserve(capa_memz_name,
 				capa_size, rte_socket_id(), 0);
-		if (internals->capa_mz == NULL) {
+		if (qat_crypto->capa_mz == NULL) {
 			QAT_LOG(DEBUG,
 				"Error allocating capability memzon for %s",
 				name);
@@ -188,21 +188,21 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		}
 	}
 
-	memcpy(internals->capa_mz->addr, capabilities, capa_size);
-	internals->qat_dev_capabilities = internals->capa_mz->addr;
+	memcpy(qat_crypto->capa_mz->addr, capabilities, capa_size);
+	qat_crypto->qat_dev_capabilities = qat_crypto->capa_mz->addr;
 
 	while (1) {
 		if (qat_dev_cmd_param[i].name == NULL)
 			break;
 		if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME))
-			internals->min_enq_burst_threshold =
+			qat_crypto->min_enq_burst_threshold =
 					qat_dev_cmd_param[i].val;
 		i++;
 	}
 
-	qat_pci_dev->sym_dev = internals;
+	qat_pci_dev->sym_dev = qat_crypto;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->dev_id);
+			cryptodev->data->name, qat_crypto->dev_id);
 
 	return 0;
 
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 8ca475ca8b..675c6392a9 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -58,26 +58,26 @@ static const uint8_t sha512InitialState[] = {
 	0x6b, 0x5b, 0xe0, 0xcd, 0x19, 0x13, 0x7e, 0x21, 0x79};
 
 static int
-qat_sym_cd_cipher_set(struct qat_sym_session *cd,
+qat_sym_cd_cipher_set(struct qat_sym_session *qat_session,
 						const uint8_t *enckey,
 						uint32_t enckeylen);
 
 static int
-qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
+qat_sym_cd_auth_set(struct qat_sym_session *qat_session,
 						const uint8_t *authkey,
 						uint32_t authkeylen,
-						uint32_t aad_length,
+						uint32_t aadlen,
 						uint32_t digestsize,
 						unsigned int operation);
 static void
-qat_sym_session_init_common_hdr(struct qat_sym_session *session);
+qat_sym_session_init_common_hdr(struct qat_sym_session *qat_session);
 
 /* Req/cd init functions */
 
 static void
-qat_sym_session_finalize(struct qat_sym_session *session)
+qat_sym_session_finalize(struct qat_sym_session *qat_session)
 {
-	qat_sym_session_init_common_hdr(session);
+	qat_sym_session_init_common_hdr(qat_session);
 }
 
 /** Frees a context previously created
@@ -94,9 +94,9 @@ bpi_cipher_ctx_free(void *bpi_ctx)
  *  Depends on openssl libcrypto
  */
 static int
-bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
+bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cipher_alg,
 		enum rte_crypto_cipher_operation direction __rte_unused,
-		const uint8_t *key, uint16_t key_length, void **ctx)
+		const uint8_t *enckey, uint16_t key_length, void **ctx)
 {
 	const EVP_CIPHER *algo = NULL;
 	int ret;
@@ -107,7 +107,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 		goto ctx_init_err;
 	}
 
-	if (cryptodev_algo == RTE_CRYPTO_CIPHER_DES_DOCSISBPI)
+	if (cipher_alg == RTE_CRYPTO_CIPHER_DES_DOCSISBPI)
 		algo = EVP_des_ecb();
 	else
 		if (key_length == ICP_QAT_HW_AES_128_KEY_SZ)
@@ -116,7 +116,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 			algo = EVP_aes_256_ecb();
 
 	/* IV will be ECB encrypted whether direction is encrypt or decrypt*/
-	if (EVP_EncryptInit_ex(*ctx, algo, NULL, key, 0) != 1) {
+	if (EVP_EncryptInit_ex(*ctx, algo, NULL, enckey, 0) != 1) {
 		ret = -EINVAL;
 		goto ctx_init_err;
 	}
@@ -130,13 +130,13 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 }
 
 static int
-qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_cryptodev_private *internals)
+qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm cipher_alg,
+		struct qat_cryptodev_private *qat_crypto)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
 
-	while ((capability = &(internals->qat_dev_capabilities[i++]))->op !=
+	while ((capability = &(qat_crypto->qat_dev_capabilities[i++]))->op !=
 			RTE_CRYPTO_OP_TYPE_UNDEFINED) {
 		if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
 			continue;
@@ -144,20 +144,20 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 		if (capability->sym.xform_type != RTE_CRYPTO_SYM_XFORM_CIPHER)
 			continue;
 
-		if (capability->sym.cipher.algo == algo)
+		if (capability->sym.cipher.algo == cipher_alg)
 			return 1;
 	}
 	return 0;
 }
 
 static int
-qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_cryptodev_private *internals)
+qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm cipher_alg,
+		struct qat_cryptodev_private *qat_crypto)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
 
-	while ((capability = &(internals->qat_dev_capabilities[i++]))->op !=
+	while ((capability = &(qat_crypto->qat_dev_capabilities[i++]))->op !=
 			RTE_CRYPTO_OP_TYPE_UNDEFINED) {
 		if (capability->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
 			continue;
@@ -165,7 +165,7 @@ qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
 		if (capability->sym.xform_type != RTE_CRYPTO_SYM_XFORM_AUTH)
 			continue;
 
-		if (capability->sym.auth.algo == algo)
+		if (capability->sym.auth.algo == cipher_alg)
 			return 1;
 	}
 	return 0;
@@ -173,20 +173,20 @@ qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
 
 void
 qat_sym_session_clear(struct rte_cryptodev *dev,
-		struct rte_cryptodev_sym_session *sess)
+		struct rte_cryptodev_sym_session *session)
 {
 	uint8_t index = dev->driver_id;
-	void *sess_priv = get_sym_session_private_data(sess, index);
-	struct qat_sym_session *s = (struct qat_sym_session *)sess_priv;
+	struct qat_sym_session *qat_session = (struct qat_sym_session *)
+			get_sym_session_private_data(session, index);
 
-	if (sess_priv) {
-		if (s->bpi_ctx)
-			bpi_cipher_ctx_free(s->bpi_ctx);
-		memset(s, 0, qat_sym_session_get_private_size(dev));
-		struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
+	if (qat_session) {
+		if (qat_session->bpi_ctx)
+			bpi_cipher_ctx_free(qat_session->bpi_ctx);
+		memset(qat_session, 0, qat_sym_session_get_private_size(dev));
+		struct rte_mempool *sess_mp = rte_mempool_from_obj(qat_session);
 
-		set_sym_session_private_data(sess, index, NULL);
-		rte_mempool_put(sess_mp, sess_priv);
+		set_sym_session_private_data(session, index, NULL);
+		rte_mempool_put(sess_mp, qat_session);
 	}
 }
 
@@ -265,89 +265,89 @@ qat_get_cipher_xform(struct rte_crypto_sym_xform *xform)
 int
 qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct qat_sym_session *session)
+		struct qat_sym_session *qat_session)
 {
-	struct qat_cryptodev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
-				internals->qat_dev->qat_dev_gen;
+				qat_crypto->qat_dev->qat_dev_gen;
 	int ret;
 
 	/* Get cipher xform from crypto xform chain */
 	cipher_xform = qat_get_cipher_xform(xform);
 
-	session->cipher_iv.offset = cipher_xform->iv.offset;
-	session->cipher_iv.length = cipher_xform->iv.length;
+	qat_session->cipher_iv.offset = cipher_xform->iv.offset;
+	qat_session->cipher_iv.length = cipher_xform->iv.length;
 
 	switch (cipher_xform->algo) {
 	case RTE_CRYPTO_CIPHER_AES_CBC:
 		if (qat_sym_validate_aes_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_AES_CTR:
 		if (qat_sym_validate_aes_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
 		if (qat_dev_gen == QAT_GEN4)
-			session->is_ucs = 1;
+			qat_session->is_ucs = 1;
 		break;
 	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
 		if (qat_sym_validate_snow3g_key(cipher_xform->key.length,
-					&session->qat_cipher_alg) != 0) {
+					&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid SNOW 3G cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_NULL:
-		session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_KASUMI_F8:
 		if (qat_sym_validate_kasumi_key(cipher_xform->key.length,
-					&session->qat_cipher_alg) != 0) {
+					&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid KASUMI cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_F8_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_F8_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_3DES_CBC:
 		if (qat_sym_validate_3des_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid 3DES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_DES_CBC:
 		if (qat_sym_validate_des_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid DES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_3DES_CTR:
 		if (qat_sym_validate_3des_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid 3DES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_DES_DOCSISBPI:
 		ret = bpi_cipher_ctx_init(
@@ -355,18 +355,18 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 					cipher_xform->op,
 					cipher_xform->key.data,
 					cipher_xform->key.length,
-					&session->bpi_ctx);
+					&qat_session->bpi_ctx);
 		if (ret != 0) {
 			QAT_LOG(ERR, "failed to create DES BPI ctx");
 			goto error_out;
 		}
 		if (qat_sym_validate_des_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid DES cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_AES_DOCSISBPI:
 		ret = bpi_cipher_ctx_init(
@@ -374,22 +374,22 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 					cipher_xform->op,
 					cipher_xform->key.data,
 					cipher_xform->key.length,
-					&session->bpi_ctx);
+					&qat_session->bpi_ctx);
 		if (ret != 0) {
 			QAT_LOG(ERR, "failed to create AES BPI ctx");
 			goto error_out;
 		}
 		if (qat_sym_validate_aes_docsisbpi_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES DOCSISBPI key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
 		if (!qat_is_cipher_alg_supported(
-			cipher_xform->algo, internals)) {
+			cipher_xform->algo, qat_crypto)) {
 			QAT_LOG(ERR, "%s not supported on this device",
 				rte_crypto_cipher_algorithm_strings
 					[cipher_xform->algo]);
@@ -397,12 +397,12 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 			goto error_out;
 		}
 		if (qat_sym_validate_zuc_key(cipher_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid ZUC cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_AES_XTS:
 		if ((cipher_xform->key.length/2) == ICP_QAT_HW_AES_192_KEY_SZ) {
@@ -411,12 +411,12 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 			goto error_out;
 		}
 		if (qat_sym_validate_aes_key((cipher_xform->key.length/2),
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES-XTS cipher key size");
 			ret = -EINVAL;
 			goto error_out;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_XTS_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_XTS_MODE;
 		break;
 	case RTE_CRYPTO_CIPHER_3DES_ECB:
 	case RTE_CRYPTO_CIPHER_AES_ECB:
@@ -434,13 +434,13 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 	}
 
 	if (cipher_xform->op == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
-		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
 	else
-		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
 
-	if (qat_sym_cd_cipher_set(session,
-						cipher_xform->key.data,
-						cipher_xform->key.length)) {
+	if (qat_sym_cd_cipher_set(qat_session,
+				cipher_xform->key.data,
+				cipher_xform->key.length)) {
 		ret = -EINVAL;
 		goto error_out;
 	}
@@ -448,9 +448,9 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 	return 0;
 
 error_out:
-	if (session->bpi_ctx) {
-		bpi_cipher_ctx_free(session->bpi_ctx);
-		session->bpi_ctx = NULL;
+	if (qat_session->bpi_ctx) {
+		bpi_cipher_ctx_free(qat_session->bpi_ctx);
+		qat_session->bpi_ctx = NULL;
 	}
 	return ret;
 }
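
Note that every failure path in qat_sym_session_configure_cipher() funnels
through error_out, which frees the software DOCSIS BPI context if one was
already allocated, so a failed configure leaves no dangling state in the
session.
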
@@ -458,30 +458,30 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 int
 qat_sym_session_configure(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
-		struct rte_cryptodev_sym_session *sess,
+		struct rte_cryptodev_sym_session *session,
 		struct rte_mempool *mempool)
 {
-	void *sess_private_data;
+	void *session_private;
 	int ret;
 
-	if (rte_mempool_get(mempool, &sess_private_data)) {
+	if (rte_mempool_get(mempool, &session_private)) {
 		CDEV_LOG_ERR(
 			"Couldn't get object from session mempool");
 		return -ENOMEM;
 	}
 
-	ret = qat_sym_session_set_parameters(dev, xform, sess_private_data);
+	ret = qat_sym_session_set_parameters(dev, xform, session_private);
 	if (ret != 0) {
 		QAT_LOG(ERR,
 		    "Crypto QAT PMD: failed to configure session parameters");
 
 		/* Return session to mempool */
-		rte_mempool_put(mempool, sess_private_data);
+		rte_mempool_put(mempool, session_private);
 		return ret;
 	}
 
-	set_sym_session_private_data(sess, dev->driver_id,
-		sess_private_data);
+	set_sym_session_private_data(session, dev->driver_id,
+		session_private);
 
 	return 0;
 }
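
For reference, qat_sym_session_configure() above is reached through the
standard cryptodev session API of this release; a minimal application-side
sketch (dev_id, xform and the two mempools are illustrative placeholders,
not part of this patch):

	struct rte_cryptodev_sym_session *sess;

	/* Allocate the generic session header from the header mempool */
	sess = rte_cryptodev_sym_session_create(session_mp);
	if (sess == NULL)
		return -ENOMEM;

	/* Fill the driver-private data; for a QAT device this lands in
	 * qat_sym_session_configure() above.
	 */
	if (rte_cryptodev_sym_session_init(dev_id, sess, &xform,
			session_priv_mp) != 0)
		return -EINVAL;
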
@@ -490,73 +490,73 @@ static void
 qat_sym_session_set_ext_hash_flags(struct qat_sym_session *session,
 		uint8_t hash_flag)
 {
-	struct icp_qat_fw_comn_req_hdr *header = &session->fw_req.comn_hdr;
-	struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl =
+	struct icp_qat_fw_comn_req_hdr *qat_fw_hdr = &session->fw_req.comn_hdr;
+	struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *qat_fw_cd_ctrl =
 			(struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *)
 			session->fw_req.cd_ctrl.content_desc_ctrl_lw;
 
 	/* Set the Use Extended Protocol Flags bit in LW 1 */
-	QAT_FIELD_SET(header->comn_req_flags,
+	QAT_FIELD_SET(qat_fw_hdr->comn_req_flags,
 			QAT_COMN_EXT_FLAGS_USED,
 			QAT_COMN_EXT_FLAGS_BITPOS,
 			QAT_COMN_EXT_FLAGS_MASK);
 
 	/* Set Hash Flags in LW 28 */
-	cd_ctrl->hash_flags |= hash_flag;
+	qat_fw_cd_ctrl->hash_flags |= hash_flag;
 
 	/* Set proto flags in LW 1 */
 	switch (session->qat_cipher_alg) {
 	case ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 				ICP_QAT_FW_LA_SNOW_3G_PROTO);
 		ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
-				header->serv_specif_flags, 0);
+				qat_fw_hdr->serv_specif_flags, 0);
 		break;
 	case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 				ICP_QAT_FW_LA_NO_PROTO);
 		ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
-				header->serv_specif_flags,
+				qat_fw_hdr->serv_specif_flags,
 				ICP_QAT_FW_LA_ZUC_3G_PROTO);
 		break;
 	default:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 				ICP_QAT_FW_LA_NO_PROTO);
 		ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
-				header->serv_specif_flags, 0);
+				qat_fw_hdr->serv_specif_flags, 0);
 		break;
 	}
 }
 
 static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
-		struct qat_sym_session *session)
+		struct qat_sym_session *qat_session)
 {
-	const struct qat_cryptodev_private *qat_private =
+	const struct qat_cryptodev_private *qat_crypto =
 			dev->data->dev_private;
-	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
+	enum qat_device_gen qat_min_dev_gen = (qat_crypto->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
-	if (session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 &&
-			session->qat_cipher_alg !=
+	if (qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 &&
+			qat_session->qat_cipher_alg !=
 			ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
-		session->min_qat_dev_gen = min_dev_gen;
-		qat_sym_session_set_ext_hash_flags(session,
+		qat_session->min_qat_dev_gen = qat_min_dev_gen;
+		qat_sym_session_set_ext_hash_flags(qat_session,
 			1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
-	} else if (session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 &&
-			session->qat_cipher_alg !=
+	} else if (qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 &&
+			qat_session->qat_cipher_alg !=
 			ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) {
-		session->min_qat_dev_gen = min_dev_gen;
-		qat_sym_session_set_ext_hash_flags(session,
+		qat_session->min_qat_dev_gen = qat_min_dev_gen;
+		qat_sym_session_set_ext_hash_flags(qat_session,
 			1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS);
-	} else if ((session->aes_cmac ||
-			session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
-			(session->qat_cipher_alg ==
+	} else if ((qat_session->aes_cmac ||
+			qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+			(qat_session->qat_cipher_alg ==
 			ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
-			session->qat_cipher_alg ==
+			qat_session->qat_cipher_alg ==
 			ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) {
-		session->min_qat_dev_gen = min_dev_gen;
-		qat_sym_session_set_ext_hash_flags(session, 0);
+		qat_session->min_qat_dev_gen = qat_min_dev_gen;
+		qat_sym_session_set_ext_hash_flags(qat_session, 0);
 	}
 }
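
In short, the three branches above cover: ZUC EIA3 auth with a non-ZUC
cipher, SNOW 3G UIA2 auth with a non-SNOW 3G cipher, and AES-CMAC or NULL
auth with a SNOW 3G or ZUC cipher. Each combination raises min_qat_dev_gen
(GEN2 when the device advertises QAT_SYM_CAP_MIXED_CRYPTO, GEN3 otherwise)
and sets the matching extended hash flag.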
 
@@ -564,29 +564,29 @@ int
 qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
-	struct qat_sym_session *session = session_private;
-	struct qat_cryptodev_private *internals = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
+	struct qat_sym_session *qat_session = session_private;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat_crypto->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
 	int handle_mixed = 0;
 
 	/* Verify the session physical address is known */
-	rte_iova_t session_paddr = rte_mempool_virt2iova(session);
+	rte_iova_t session_paddr = rte_mempool_virt2iova(qat_session);
 	if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) {
 		QAT_LOG(ERR,
 			"Session physical address unknown. Bad memory pool.");
 		return -EINVAL;
 	}
 
-	memset(session, 0, sizeof(*session));
+	memset(qat_session, 0, sizeof(*qat_session));
 	/* Set context descriptor physical address */
-	session->cd_paddr = session_paddr +
+	qat_session->cd_paddr = session_paddr +
 			offsetof(struct qat_sym_session, cd);
 
-	session->min_qat_dev_gen = QAT_GEN1;
-	session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
-	session->is_ucs = 0;
+	qat_session->min_qat_dev_gen = QAT_GEN1;
+	qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
+	qat_session->is_ucs = 0;
 
 	/* Get requested QAT command id */
 	qat_cmd_id = qat_get_cmd_id(xform);
@@ -594,18 +594,18 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		QAT_LOG(ERR, "Unsupported xform chain requested");
 		return -ENOTSUP;
 	}
-	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
-	switch (session->qat_cmd) {
+	qat_session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+	switch (qat_session->qat_cmd) {
 	case ICP_QAT_FW_LA_CMD_CIPHER:
-		ret = qat_sym_session_configure_cipher(dev, xform, session);
+		ret = qat_sym_session_configure_cipher(dev, xform, qat_session);
 		if (ret < 0)
 			return ret;
 		break;
 	case ICP_QAT_FW_LA_CMD_AUTH:
-		ret = qat_sym_session_configure_auth(dev, xform, session);
+		ret = qat_sym_session_configure_auth(dev, xform, qat_session);
 		if (ret < 0)
 			return ret;
-		session->is_single_pass_gmac =
+		qat_session->is_single_pass_gmac =
 			       qat_dev_gen == QAT_GEN3 &&
 			       xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC &&
 			       xform->auth.iv.length == QAT_AES_GCM_SPC_IV_SIZE;
@@ -613,16 +613,16 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 	case ICP_QAT_FW_LA_CMD_CIPHER_HASH:
 		if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
 			ret = qat_sym_session_configure_aead(dev, xform,
-					session);
+					qat_session);
 			if (ret < 0)
 				return ret;
 		} else {
 			ret = qat_sym_session_configure_cipher(dev,
-					xform, session);
+					xform, qat_session);
 			if (ret < 0)
 				return ret;
 			ret = qat_sym_session_configure_auth(dev,
-					xform, session);
+					xform, qat_session);
 			if (ret < 0)
 				return ret;
 			handle_mixed = 1;
@@ -631,16 +631,16 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 	case ICP_QAT_FW_LA_CMD_HASH_CIPHER:
 		if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
 			ret = qat_sym_session_configure_aead(dev, xform,
-					session);
+					qat_session);
 			if (ret < 0)
 				return ret;
 		} else {
 			ret = qat_sym_session_configure_auth(dev,
-					xform, session);
+					xform, qat_session);
 			if (ret < 0)
 				return ret;
 			ret = qat_sym_session_configure_cipher(dev,
-					xform, session);
+					xform, qat_session);
 			if (ret < 0)
 				return ret;
 			handle_mixed = 1;
@@ -656,47 +656,47 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 	case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP:
 	case ICP_QAT_FW_LA_CMD_DELIMITER:
 	QAT_LOG(ERR, "Unsupported Service %u",
-		session->qat_cmd);
+		qat_session->qat_cmd);
 		return -ENOTSUP;
 	default:
 	QAT_LOG(ERR, "Unsupported Service %u",
-		session->qat_cmd);
+		qat_session->qat_cmd);
 		return -ENOTSUP;
 	}
-	qat_sym_session_finalize(session);
+	qat_sym_session_finalize(qat_session);
 	if (handle_mixed) {
 		/* Special handling of mixed hash+cipher algorithms */
-		qat_sym_session_handle_mixed(dev, session);
+		qat_sym_session_handle_mixed(dev, qat_session);
 	}
 
 	return 0;
 }
 
 static int
-qat_sym_session_handle_single_pass(struct qat_sym_session *session,
+qat_sym_session_handle_single_pass(struct qat_sym_session *qat_session,
 		const struct rte_crypto_aead_xform *aead_xform)
 {
-	session->is_single_pass = 1;
-	session->is_auth = 1;
-	session->min_qat_dev_gen = QAT_GEN3;
-	session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER;
+	qat_session->is_single_pass = 1;
+	qat_session->is_auth = 1;
+	qat_session->min_qat_dev_gen = QAT_GEN3;
+	qat_session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER;
 	/* Chacha20-Poly1305 is a special case: AEAD, but it uses QAT CTR mode */
 	if (aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) {
-		session->qat_mode = ICP_QAT_HW_CIPHER_AEAD_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_AEAD_MODE;
 	} else {
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
 	}
-	session->cipher_iv.offset = aead_xform->iv.offset;
-	session->cipher_iv.length = aead_xform->iv.length;
-	session->aad_len = aead_xform->aad_length;
-	session->digest_length = aead_xform->digest_length;
+	qat_session->cipher_iv.offset = aead_xform->iv.offset;
+	qat_session->cipher_iv.length = aead_xform->iv.length;
+	qat_session->aad_len = aead_xform->aad_length;
+	qat_session->digest_length = aead_xform->digest_length;
 
 	if (aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT) {
-		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
-		session->auth_op = ICP_QAT_HW_AUTH_GENERATE;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		qat_session->auth_op = ICP_QAT_HW_AUTH_GENERATE;
 	} else {
-		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
-		session->auth_op = ICP_QAT_HW_AUTH_VERIFY;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+		qat_session->auth_op = ICP_QAT_HW_AUTH_VERIFY;
 	}
 
 	return 0;
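
Single pass here means the cipher slice also produces or verifies the
digest, so the session is issued as a plain ICP_QAT_FW_LA_CMD_CIPHER
command while is_auth stays set; AES-GCM keeps the dedicated AEAD mode and
Chacha20-Poly1305 runs in CTR mode, as the comment above notes.
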
@@ -705,103 +705,103 @@ qat_sym_session_handle_single_pass(struct qat_sym_session *session,
 int
 qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct rte_crypto_sym_xform *xform,
-				struct qat_sym_session *session)
+				struct qat_sym_session *qat_session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_cryptodev_private *internals = dev->data->dev_private;
-	const uint8_t *key_data = auth_xform->key.data;
-	uint8_t key_length = auth_xform->key.length;
+	struct qat_cryptodev_private *qat_crypto = dev->data->dev_private;
+	const uint8_t *authkey = auth_xform->key.data;
+	uint8_t authkeylen = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
-			internals->qat_dev->qat_dev_gen;
+			qat_crypto->qat_dev->qat_dev_gen;
 
-	session->aes_cmac = 0;
-	session->auth_key_length = auth_xform->key.length;
-	session->auth_iv.offset = auth_xform->iv.offset;
-	session->auth_iv.length = auth_xform->iv.length;
-	session->auth_mode = ICP_QAT_HW_AUTH_MODE1;
-	session->is_auth = 1;
-	session->digest_length = auth_xform->digest_length;
+	qat_session->aes_cmac = 0;
+	qat_session->auth_key_length = auth_xform->key.length;
+	qat_session->auth_iv.offset = auth_xform->iv.offset;
+	qat_session->auth_iv.length = auth_xform->iv.length;
+	qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE1;
+	qat_session->is_auth = 1;
+	qat_session->digest_length = auth_xform->digest_length;
 
 	switch (auth_xform->algo) {
 	case RTE_CRYPTO_AUTH_SHA1:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
-		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
 	case RTE_CRYPTO_AUTH_SHA224:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA224;
-		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA224;
+		qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
 	case RTE_CRYPTO_AUTH_SHA256:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
-		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
 	case RTE_CRYPTO_AUTH_SHA384:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA384;
-		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA384;
+		qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
 	case RTE_CRYPTO_AUTH_SHA512:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
-		session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
 		break;
 	case RTE_CRYPTO_AUTH_SHA1_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1;
 		break;
 	case RTE_CRYPTO_AUTH_SHA224_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA224;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA224;
 		break;
 	case RTE_CRYPTO_AUTH_SHA256_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA256;
 		break;
 	case RTE_CRYPTO_AUTH_SHA384_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA384;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA384;
 		break;
 	case RTE_CRYPTO_AUTH_SHA512_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA512;
 		break;
 	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
 		break;
 	case RTE_CRYPTO_AUTH_AES_CMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
-		session->aes_cmac = 1;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+		qat_session->aes_cmac = 1;
 		break;
 	case RTE_CRYPTO_AUTH_AES_GMAC:
 		if (qat_sym_validate_aes_key(auth_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES key size");
 			return -EINVAL;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
-		if (session->auth_iv.length == 0)
-			session->auth_iv.length = AES_GCM_J0_LEN;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		if (qat_session->auth_iv.length == 0)
+			qat_session->auth_iv.length = AES_GCM_J0_LEN;
 		else
-			session->is_iv12B = 1;
+			qat_session->is_iv12B = 1;
 		if (qat_dev_gen == QAT_GEN4) {
-			session->is_cnt_zero = 1;
-			session->is_ucs = 1;
+			qat_session->is_cnt_zero = 1;
+			qat_session->is_ucs = 1;
 		}
 		break;
 	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2;
 		break;
 	case RTE_CRYPTO_AUTH_MD5_HMAC:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_MD5;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_MD5;
 		break;
 	case RTE_CRYPTO_AUTH_NULL:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
 		break;
 	case RTE_CRYPTO_AUTH_KASUMI_F9:
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_KASUMI_F9;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_KASUMI_F9;
 		break;
 	case RTE_CRYPTO_AUTH_ZUC_EIA3:
-		if (!qat_is_auth_alg_supported(auth_xform->algo, internals)) {
+		if (!qat_is_auth_alg_supported(auth_xform->algo, qat_crypto)) {
 			QAT_LOG(ERR, "%s not supported on this device",
 				rte_crypto_auth_algorithm_strings
 				[auth_xform->algo]);
 			return -ENOTSUP;
 		}
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
 		break;
 	case RTE_CRYPTO_AUTH_MD5:
 	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
@@ -815,51 +815,51 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 	}
 
 	if (auth_xform->algo == RTE_CRYPTO_AUTH_AES_GMAC) {
-		session->is_gmac = 1;
+		qat_session->is_gmac = 1;
 		if (auth_xform->op == RTE_CRYPTO_AUTH_OP_GENERATE) {
-			session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
-			session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+			qat_session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_HASH;
+			qat_session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
 			/*
 			 * The cipher descriptor content needs to be created
 			 * first, then the authentication content
 			 */
-			if (qat_sym_cd_cipher_set(session,
+			if (qat_sym_cd_cipher_set(qat_session,
 						auth_xform->key.data,
 						auth_xform->key.length))
 				return -EINVAL;
 
-			if (qat_sym_cd_auth_set(session,
-						key_data,
-						key_length,
+			if (qat_sym_cd_auth_set(qat_session,
+						authkey,
+						authkeylen,
 						0,
 						auth_xform->digest_length,
 						auth_xform->op))
 				return -EINVAL;
 		} else {
-			session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
-			session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+			qat_session->qat_cmd = ICP_QAT_FW_LA_CMD_HASH_CIPHER;
+			qat_session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
 			/*
 			 * The authentication descriptor content needs to be
 			 * created first, then the cipher content
 			 */
 
-			if (qat_sym_cd_auth_set(session,
-					key_data,
-					key_length,
+			if (qat_sym_cd_auth_set(qat_session,
+					authkey,
+					authkeylen,
 					0,
 					auth_xform->digest_length,
 					auth_xform->op))
 				return -EINVAL;
 
-			if (qat_sym_cd_cipher_set(session,
+			if (qat_sym_cd_cipher_set(qat_session,
 						auth_xform->key.data,
 						auth_xform->key.length))
 				return -EINVAL;
 		}
 	} else {
-		if (qat_sym_cd_auth_set(session,
-				key_data,
-				key_length,
+		if (qat_sym_cd_auth_set(qat_session,
+				authkey,
+				authkeylen,
 				0,
 				auth_xform->digest_length,
 				auth_xform->op))
@@ -872,68 +872,68 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 int
 qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 				struct rte_crypto_sym_xform *xform,
-				struct qat_sym_session *session)
+				struct qat_sym_session *qat_session)
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_cryptodev_private *internals =
+	struct qat_cryptodev_private *qat_crypto =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
-			internals->qat_dev->qat_dev_gen;
+			qat_crypto->qat_dev->qat_dev_gen;
 
 	/*
 	 * Store AEAD IV parameters as cipher IV,
 	 * to avoid unnecessary memory usage
 	 */
-	session->cipher_iv.offset = xform->aead.iv.offset;
-	session->cipher_iv.length = xform->aead.iv.length;
+	qat_session->cipher_iv.offset = xform->aead.iv.offset;
+	qat_session->cipher_iv.length = xform->aead.iv.length;
 
-	session->auth_mode = ICP_QAT_HW_AUTH_MODE1;
-	session->is_auth = 1;
-	session->digest_length = aead_xform->digest_length;
+	qat_session->auth_mode = ICP_QAT_HW_AUTH_MODE1;
+	qat_session->is_auth = 1;
+	qat_session->digest_length = aead_xform->digest_length;
 
-	session->is_single_pass = 0;
+	qat_session->is_single_pass = 0;
 	switch (aead_xform->algo) {
 	case RTE_CRYPTO_AEAD_AES_GCM:
 		if (qat_sym_validate_aes_key(aead_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES key size");
 			return -EINVAL;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
 
 		if (qat_dev_gen == QAT_GEN4)
-			session->is_ucs = 1;
-		if (session->cipher_iv.length == 0) {
-			session->cipher_iv.length = AES_GCM_J0_LEN;
+			qat_session->is_ucs = 1;
+		if (qat_session->cipher_iv.length == 0) {
+			qat_session->cipher_iv.length = AES_GCM_J0_LEN;
 			break;
 		}
-		session->is_iv12B = 1;
+		qat_session->is_iv12B = 1;
 		if (qat_dev_gen < QAT_GEN3)
 			break;
-		qat_sym_session_handle_single_pass(session,
+		qat_sym_session_handle_single_pass(qat_session,
 				aead_xform);
 		break;
 	case RTE_CRYPTO_AEAD_AES_CCM:
 		if (qat_sym_validate_aes_key(aead_xform->key.length,
-				&session->qat_cipher_alg) != 0) {
+				&qat_session->qat_cipher_alg) != 0) {
 			QAT_LOG(ERR, "Invalid AES key size");
 			return -EINVAL;
 		}
-		session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
-		session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC;
+		qat_session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
+		qat_session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC;
 		if (qat_dev_gen == QAT_GEN4)
-			session->is_ucs = 1;
+			qat_session->is_ucs = 1;
 		break;
 	case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
 		if (aead_xform->key.length != ICP_QAT_HW_CHACHAPOLY_KEY_SZ)
 			return -EINVAL;
 		if (qat_dev_gen == QAT_GEN4)
-			session->is_ucs = 1;
-		session->qat_cipher_alg =
+			qat_session->is_ucs = 1;
+		qat_session->qat_cipher_alg =
 				ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305;
-		qat_sym_session_handle_single_pass(session,
+		qat_sym_session_handle_single_pass(qat_session,
 						aead_xform);
 		break;
 	default:
@@ -942,15 +942,15 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 		return -EINVAL;
 	}
 
-	if (session->is_single_pass) {
-		if (qat_sym_cd_cipher_set(session,
+	if (qat_session->is_single_pass) {
+		if (qat_sym_cd_cipher_set(qat_session,
 				aead_xform->key.data, aead_xform->key.length))
 			return -EINVAL;
 	} else if ((aead_xform->op == RTE_CRYPTO_AEAD_OP_ENCRYPT &&
 			aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM) ||
 			(aead_xform->op == RTE_CRYPTO_AEAD_OP_DECRYPT &&
 			aead_xform->algo == RTE_CRYPTO_AEAD_AES_CCM)) {
-		session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
 		/*
 		 * The cipher descriptor content needs to be created
 		 * first, then the authentication content
@@ -958,12 +958,12 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 		crypto_operation = aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM ?
 			RTE_CRYPTO_AUTH_OP_GENERATE : RTE_CRYPTO_AUTH_OP_VERIFY;
 
-		if (qat_sym_cd_cipher_set(session,
+		if (qat_sym_cd_cipher_set(qat_session,
 					aead_xform->key.data,
 					aead_xform->key.length))
 			return -EINVAL;
 
-		if (qat_sym_cd_auth_set(session,
+		if (qat_sym_cd_auth_set(qat_session,
 					aead_xform->key.data,
 					aead_xform->key.length,
 					aead_xform->aad_length,
@@ -971,7 +971,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 					crypto_operation))
 			return -EINVAL;
 	} else {
-		session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
 		/*
 		 * The authentication descriptor content needs to be
 		 * created first, then the cipher content
@@ -980,7 +980,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 		crypto_operation = aead_xform->algo == RTE_CRYPTO_AEAD_AES_GCM ?
 			RTE_CRYPTO_AUTH_OP_VERIFY : RTE_CRYPTO_AUTH_OP_GENERATE;
 
-		if (qat_sym_cd_auth_set(session,
+		if (qat_sym_cd_auth_set(qat_session,
 					aead_xform->key.data,
 					aead_xform->key.length,
 					aead_xform->aad_length,
@@ -988,7 +988,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 					crypto_operation))
 			return -EINVAL;
 
-		if (qat_sym_cd_cipher_set(session,
+		if (qat_sym_cd_cipher_set(qat_session,
 					aead_xform->key.data,
 					aead_xform->key.length))
 			return -EINVAL;
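
The ordering rule in the branches above: AES-GCM encrypt and AES-CCM
decrypt program the cipher descriptor content before the auth content,
all other non-single-pass combinations program auth first, and
crypto_operation is chosen per algorithm accordingly.
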
@@ -1468,309 +1468,309 @@ static int qat_sym_do_precomputes(enum icp_qat_hw_auth_algo hash_alg,
 }
 
 static void
-qat_sym_session_init_common_hdr(struct qat_sym_session *session)
+qat_sym_session_init_common_hdr(struct qat_sym_session *qat_session)
 {
-	struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
-	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
-	enum qat_sym_proto_flag proto_flags = session->qat_proto_flag;
-	uint32_t slice_flags = session->slice_types;
+	struct icp_qat_fw_la_bulk_req *qat_fw_req = &qat_session->fw_req;
+	struct icp_qat_fw_comn_req_hdr *qat_fw_hdr = &qat_fw_req->comn_hdr;
+	enum qat_sym_proto_flag qat_proto = qat_session->qat_proto_flag;
+	uint32_t qat_slice_flags = qat_session->slice_types;
 
-	header->hdr_flags =
+	qat_fw_hdr->hdr_flags =
 		ICP_QAT_FW_COMN_HDR_FLAGS_BUILD(ICP_QAT_FW_COMN_REQ_FLAG_SET);
-	header->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
-	header->service_cmd_id = session->qat_cmd;
-	header->comn_req_flags =
+	qat_fw_hdr->service_type = ICP_QAT_FW_COMN_REQ_CPM_FW_LA;
+	qat_fw_hdr->service_cmd_id = qat_session->qat_cmd;
+	qat_fw_hdr->comn_req_flags =
 		ICP_QAT_FW_COMN_FLAGS_BUILD(QAT_COMN_CD_FLD_TYPE_64BIT_ADR,
 					QAT_COMN_PTR_TYPE_FLAT);
-	ICP_QAT_FW_LA_PARTIAL_SET(header->serv_specif_flags,
+	ICP_QAT_FW_LA_PARTIAL_SET(qat_fw_hdr->serv_specif_flags,
 				  ICP_QAT_FW_LA_PARTIAL_NONE);
-	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(header->serv_specif_flags,
+	ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(qat_fw_hdr->serv_specif_flags,
 					   ICP_QAT_FW_CIPH_IV_16BYTE_DATA);
 
-	switch (proto_flags)		{
+	switch (qat_proto) {
 	case QAT_CRYPTO_PROTO_FLAG_NONE:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_PROTO);
 		break;
 	case QAT_CRYPTO_PROTO_FLAG_CCM:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_CCM_PROTO);
 		break;
 	case QAT_CRYPTO_PROTO_FLAG_GCM:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_GCM_PROTO);
 		break;
 	case QAT_CRYPTO_PROTO_FLAG_SNOW3G:
-		ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_PROTO_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_SNOW_3G_PROTO);
 		break;
 	case QAT_CRYPTO_PROTO_FLAG_ZUC:
-		ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(qat_fw_hdr->serv_specif_flags,
 			ICP_QAT_FW_LA_ZUC_3G_PROTO);
 		break;
 	}
 
 	/* More than one of the following flags can be set at once */
-	if (QAT_SESSION_IS_SLICE_SET(slice_flags, QAT_CRYPTO_SLICE_SPC)) {
+	if (QAT_SESSION_IS_SLICE_SET(qat_slice_flags, QAT_CRYPTO_SLICE_SPC)) {
 		ICP_QAT_FW_LA_SINGLE_PASS_PROTO_FLAG_SET(
-			header->serv_specif_flags,
+			qat_fw_hdr->serv_specif_flags,
 			ICP_QAT_FW_LA_SINGLE_PASS_PROTO);
 	}
-	if (QAT_SESSION_IS_SLICE_SET(slice_flags, QAT_CRYPTO_SLICE_UCS)) {
+	if (QAT_SESSION_IS_SLICE_SET(qat_slice_flags, QAT_CRYPTO_SLICE_UCS)) {
 		ICP_QAT_FW_LA_SLICE_TYPE_SET(
-			header->serv_specif_flags,
+			qat_fw_hdr->serv_specif_flags,
 			ICP_QAT_FW_LA_USE_UCS_SLICE_TYPE);
 	}
 
-	if (session->is_auth) {
-		if (session->auth_op == ICP_QAT_HW_AUTH_VERIFY) {
-			ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+	if (qat_session->is_auth) {
+		if (qat_session->auth_op == ICP_QAT_HW_AUTH_VERIFY) {
+			ICP_QAT_FW_LA_RET_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_RET_AUTH_RES);
-			ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+			ICP_QAT_FW_LA_CMP_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_CMP_AUTH_RES);
-		} else if (session->auth_op == ICP_QAT_HW_AUTH_GENERATE) {
-			ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+		} else if (qat_session->auth_op == ICP_QAT_HW_AUTH_GENERATE) {
+			ICP_QAT_FW_LA_RET_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 						ICP_QAT_FW_LA_RET_AUTH_RES);
-			ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+			ICP_QAT_FW_LA_CMP_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 						ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
 		}
 	} else {
-		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_RET_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_RET_AUTH_RES);
-		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_CMP_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
 	}
 
-	if (session->is_iv12B) {
+	if (qat_session->is_iv12B) {
 		ICP_QAT_FW_LA_GCM_IV_LEN_FLAG_SET(
-			header->serv_specif_flags,
+			qat_fw_hdr->serv_specif_flags,
 			ICP_QAT_FW_LA_GCM_IV_LEN_12_OCTETS);
 	}
 
-	ICP_QAT_FW_LA_UPDATE_STATE_SET(header->serv_specif_flags,
+	ICP_QAT_FW_LA_UPDATE_STATE_SET(qat_fw_hdr->serv_specif_flags,
 					   ICP_QAT_FW_LA_NO_UPDATE_STATE);
-	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(header->serv_specif_flags,
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_DIGEST_IN_BUFFER);
 }
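
All the *_SET macros above do a read-modify-write of one field in a flags
word; schematically (a sketch of the pattern only, the real definitions
live in the icp_qat_fw*.h and qat common headers):

	#define FIELD_SET(flags, val, bitpos, mask) \
		((flags) = (((flags) & ~((mask) << (bitpos))) | \
			(((val) & (mask)) << (bitpos))))

Because the fields do not overlap, serv_specif_flags can be built up one
field at a time, which is why more than one of the slice flags can be set
at once.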
 
-int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
-						const uint8_t *cipherkey,
-						uint32_t cipherkeylen)
+int qat_sym_cd_cipher_set(struct qat_sym_session *qat_session,
+						const uint8_t *enckey,
+						uint32_t enckeylen)
 {
-	struct icp_qat_hw_cipher_algo_blk *cipher;
-	struct icp_qat_hw_cipher_algo_blk20 *cipher20;
-	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
-	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
-	struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
-	void *ptr = &req_tmpl->cd_ctrl;
-	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
-	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
-	enum icp_qat_hw_cipher_convert key_convert;
-	struct icp_qat_fw_la_cipher_20_req_params *req_ucs =
+	struct icp_qat_hw_cipher_algo_blk *qat_fw_cd_cipher;
+	struct icp_qat_hw_cipher_algo_blk20 *qat_fw_cd_cipher20;
+	struct icp_qat_fw_la_bulk_req *qat_fw_req = &qat_session->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *qat_fw_cd_pars = &qat_fw_req->cd_pars;
+	struct icp_qat_fw_comn_req_hdr *qat_fw_hdr = &qat_fw_req->comn_hdr;
+	void *ptr = &qat_fw_req->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *qat_fw_cipher = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *qat_fw_hash = ptr;
+	enum icp_qat_hw_cipher_convert qat_fw_key_convert;
+	struct icp_qat_fw_la_cipher_20_req_params *qat_fw_req_ucs =
 			(struct icp_qat_fw_la_cipher_20_req_params *)
-			&cdesc->fw_req.serv_specif_rqpars;
-	struct icp_qat_fw_la_cipher_req_params *req_cipher =
+			&qat_session->fw_req.serv_specif_rqpars;
+	struct icp_qat_fw_la_cipher_req_params *qat_fw_req_spc =
 			(struct icp_qat_fw_la_cipher_req_params *)
-			&cdesc->fw_req.serv_specif_rqpars;
+			&qat_session->fw_req.serv_specif_rqpars;
 	uint32_t total_key_size;
 	uint16_t cipher_offset, cd_size;
 	uint32_t wordIndex  = 0;
 	uint32_t *temp_key = NULL;
 
-	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
-		cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
-		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+	if (qat_session->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
+		qat_fw_cd_pars->u.s.content_desc_addr = qat_session->cd_paddr;
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_cipher,
 					ICP_QAT_FW_SLICE_CIPHER);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_cipher,
 					ICP_QAT_FW_SLICE_DRAM_WR);
-		ICP_QAT_FW_LA_RET_AUTH_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_RET_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_RET_AUTH_RES);
-		ICP_QAT_FW_LA_CMP_AUTH_SET(header->serv_specif_flags,
+		ICP_QAT_FW_LA_CMP_AUTH_SET(qat_fw_hdr->serv_specif_flags,
 					ICP_QAT_FW_LA_NO_CMP_AUTH_RES);
-		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
-	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
-		cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
-		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+		qat_session->cd_cur_ptr = (uint8_t *)&qat_session->cd;
+	} else if (qat_session->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		qat_fw_cd_pars->u.s.content_desc_addr = qat_session->cd_paddr;
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_cipher,
 					ICP_QAT_FW_SLICE_CIPHER);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_cipher,
 					ICP_QAT_FW_SLICE_AUTH);
-		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_hash,
 					ICP_QAT_FW_SLICE_AUTH);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_hash,
 					ICP_QAT_FW_SLICE_DRAM_WR);
-		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
-	} else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		qat_session->cd_cur_ptr = (uint8_t *)&qat_session->cd;
+	} else if (qat_session->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
 		QAT_LOG(ERR, "Invalid param, must be a cipher command.");
 		return -EFAULT;
 	}
 
-	if (cdesc->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE) {
+	if (qat_session->qat_mode == ICP_QAT_HW_CIPHER_CTR_MODE) {
 		/*
 		 * CTR streaming ciphers are a special case: decrypt = encrypt,
 		 * so the direction set earlier is overridden here.
 		 * Chacha20-Poly1305 is also CTR but single-pass,
 		 * so both directions still need to be handled.
 		 */
-		cdesc->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
-		if (cdesc->qat_cipher_alg ==
+		qat_session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		if (qat_session->qat_cipher_alg ==
 			ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 &&
-			cdesc->auth_op == ICP_QAT_HW_AUTH_VERIFY) {
-				cdesc->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
+			qat_session->auth_op == ICP_QAT_HW_AUTH_VERIFY) {
+				qat_session->qat_dir = ICP_QAT_HW_CIPHER_DECRYPT;
 		}
-		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
-	} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2
-		|| cdesc->qat_cipher_alg ==
+		qat_fw_key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+	} else if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2
+		|| qat_session->qat_cipher_alg ==
 			ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)
-		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
-	else if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT)
-		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
-	else if (cdesc->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE)
-		key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+		qat_fw_key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+	else if (qat_session->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT)
+		qat_fw_key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
+	else if (qat_session->qat_mode == ICP_QAT_HW_CIPHER_AEAD_MODE)
+		qat_fw_key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
 	else
-		key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
+		qat_fw_key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
 
-	if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) {
+	if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2) {
 		total_key_size = ICP_QAT_HW_SNOW_3G_UEA2_KEY_SZ +
 			ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ;
-		cipher_cd_ctrl->cipher_state_sz =
+		qat_fw_cipher->cipher_state_sz =
 			ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
 
-	} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI) {
+	} else if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI) {
 		total_key_size = ICP_QAT_HW_KASUMI_F8_KEY_SZ;
-		cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_KASUMI_BLK_SZ >> 3;
-		cipher_cd_ctrl->cipher_padding_sz =
+		qat_fw_cipher->cipher_state_sz = ICP_QAT_HW_KASUMI_BLK_SZ >> 3;
+		qat_fw_cipher->cipher_padding_sz =
 					(2 * ICP_QAT_HW_KASUMI_BLK_SZ) >> 3;
-	} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES) {
+	} else if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES) {
 		total_key_size = ICP_QAT_HW_3DES_KEY_SZ;
-		cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_3DES_BLK_SZ >> 3;
-	} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_DES) {
+		qat_fw_cipher->cipher_state_sz = ICP_QAT_HW_3DES_BLK_SZ >> 3;
+	} else if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_DES) {
 		total_key_size = ICP_QAT_HW_DES_KEY_SZ;
-		cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_DES_BLK_SZ >> 3;
-	} else if (cdesc->qat_cipher_alg ==
+		qat_fw_cipher->cipher_state_sz = ICP_QAT_HW_DES_BLK_SZ >> 3;
+	} else if (qat_session->qat_cipher_alg ==
 		ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
 		total_key_size = ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ +
 			ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ;
-		cipher_cd_ctrl->cipher_state_sz =
+		qat_fw_cipher->cipher_state_sz =
 			ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
-		cdesc->min_qat_dev_gen = QAT_GEN2;
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+		qat_session->min_qat_dev_gen = QAT_GEN2;
 	} else {
-		total_key_size = cipherkeylen;
-		cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
+		total_key_size = enckeylen;
+		qat_fw_cipher->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
 	}
-	cipher_offset = cdesc->cd_cur_ptr-((uint8_t *)&cdesc->cd);
-	cipher_cd_ctrl->cipher_cfg_offset = cipher_offset >> 3;
-
-	cipher = (struct icp_qat_hw_cipher_algo_blk *)cdesc->cd_cur_ptr;
-	cipher20 = (struct icp_qat_hw_cipher_algo_blk20 *)cdesc->cd_cur_ptr;
-	cipher->cipher_config.val =
-	    ICP_QAT_HW_CIPHER_CONFIG_BUILD(cdesc->qat_mode,
-					cdesc->qat_cipher_alg, key_convert,
-					cdesc->qat_dir);
-
-	if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI) {
-		temp_key = (uint32_t *)(cdesc->cd_cur_ptr +
+	cipher_offset = qat_session->cd_cur_ptr - ((uint8_t *)&qat_session->cd);
+	qat_fw_cipher->cipher_cfg_offset = cipher_offset >> 3;
+
+	qat_fw_cd_cipher = (struct icp_qat_hw_cipher_algo_blk *)qat_session->cd_cur_ptr;
+	qat_fw_cd_cipher20 = (struct icp_qat_hw_cipher_algo_blk20 *)qat_session->cd_cur_ptr;
+	qat_fw_cd_cipher->cipher_config.val =
+	    ICP_QAT_HW_CIPHER_CONFIG_BUILD(qat_session->qat_mode,
+					qat_session->qat_cipher_alg, qat_fw_key_convert,
+					qat_session->qat_dir);
+
+	if (qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI) {
+		temp_key = (uint32_t *)(qat_session->cd_cur_ptr +
 					sizeof(struct icp_qat_hw_cipher_config)
-					+ cipherkeylen);
-		memcpy(cipher->key, cipherkey, cipherkeylen);
-		memcpy(temp_key, cipherkey, cipherkeylen);
+					+ enckeylen);
+		memcpy(qat_fw_cd_cipher->key, enckey, enckeylen);
+		memcpy(temp_key, enckey, enckeylen);
 
 		/* XOR key with the KASUMI F8 key modifier at 4-byte granularity */
-		for (wordIndex = 0; wordIndex < (cipherkeylen >> 2);
+		for (wordIndex = 0; wordIndex < (enckeylen >> 2);
 								wordIndex++)
 			temp_key[wordIndex] ^= KASUMI_F8_KEY_MODIFIER_4_BYTES;
 
-		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) +
-					cipherkeylen + cipherkeylen;
-	} else if (cdesc->is_ucs) {
-		const uint8_t *final_key = cipherkey;
+		qat_session->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) +
+					enckeylen + enckeylen;
+	} else if (qat_session->is_ucs) {
+		const uint8_t *final_key = enckey;
 
-		cdesc->slice_types |= QAT_CRYPTO_SLICE_UCS;
-		total_key_size = RTE_ALIGN_CEIL(cipherkeylen,
+		qat_session->slice_types |= QAT_CRYPTO_SLICE_UCS;
+		total_key_size = RTE_ALIGN_CEIL(enckeylen,
 			ICP_QAT_HW_AES_128_KEY_SZ);
-		cipher20->cipher_config.reserved[0] = 0;
-		cipher20->cipher_config.reserved[1] = 0;
-		cipher20->cipher_config.reserved[2] = 0;
+		qat_fw_cd_cipher20->cipher_config.reserved[0] = 0;
+		qat_fw_cd_cipher20->cipher_config.reserved[1] = 0;
+		qat_fw_cd_cipher20->cipher_config.reserved[2] = 0;
 
-		rte_memcpy(cipher20->key, final_key, cipherkeylen);
-		cdesc->cd_cur_ptr +=
+		rte_memcpy(qat_fw_cd_cipher20->key, final_key, enckeylen);
+		qat_session->cd_cur_ptr +=
 			sizeof(struct icp_qat_hw_ucs_cipher_config) +
-					cipherkeylen;
+					enckeylen;
 	} else {
-		memcpy(cipher->key, cipherkey, cipherkeylen);
-		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) +
-					cipherkeylen;
+		memcpy(qat_fw_cd_cipher->key, enckey, enckeylen);
+		qat_session->cd_cur_ptr += sizeof(struct icp_qat_hw_cipher_config) +
+					enckeylen;
 	}
 
-	if (cdesc->is_single_pass) {
-		QAT_FIELD_SET(cipher->cipher_config.val,
-			cdesc->digest_length,
+	if (qat_session->is_single_pass) {
+		QAT_FIELD_SET(qat_fw_cd_cipher->cipher_config.val,
+			qat_session->digest_length,
 			QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS,
 			QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK);
 		/* UCS and SPC 1.8/2.0 share the layout of the 2nd config word */
-		cdesc->cd.cipher.cipher_config.reserved =
+		qat_session->cd.cipher.cipher_config.reserved =
 				ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(
-					cdesc->aad_len);
-		cdesc->slice_types |= QAT_CRYPTO_SLICE_SPC;
+					qat_session->aad_len);
+		qat_session->slice_types |= QAT_CRYPTO_SLICE_SPC;
 	}
 
-	if (total_key_size > cipherkeylen) {
-		uint32_t padding_size =  total_key_size-cipherkeylen;
-		if ((cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES)
-			&& (cipherkeylen == QAT_3DES_KEY_SZ_OPT2)) {
+	if (total_key_size > enckeylen) {
+		uint32_t padding_size = total_key_size - enckeylen;
+		if ((qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES)
+			&& (enckeylen == QAT_3DES_KEY_SZ_OPT2)) {
 			/* K3 not provided, so set K3 = K1 */
-			memcpy(cdesc->cd_cur_ptr, cipherkey, padding_size);
-		} else if ((cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES)
-			&& (cipherkeylen == QAT_3DES_KEY_SZ_OPT3)) {
+			memcpy(qat_session->cd_cur_ptr, enckey, padding_size);
+		} else if ((qat_session->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_3DES)
+			&& (enckeylen == QAT_3DES_KEY_SZ_OPT3)) {
 			/* K2 and K3 not provided, so set K2 = K3 = K1 */
-			memcpy(cdesc->cd_cur_ptr, cipherkey,
-				cipherkeylen);
-			memcpy(cdesc->cd_cur_ptr+cipherkeylen,
-				cipherkey, cipherkeylen);
+			memcpy(qat_session->cd_cur_ptr, enckey,
+				enckeylen);
+			memcpy(qat_session->cd_cur_ptr + enckeylen,
+				enckey, enckeylen);
 		} else
-			memset(cdesc->cd_cur_ptr, 0, padding_size);
+			memset(qat_session->cd_cur_ptr, 0, padding_size);
 
-		cdesc->cd_cur_ptr += padding_size;
+		qat_session->cd_cur_ptr += padding_size;
 	}
-	if (cdesc->is_ucs) {
+	if (qat_session->is_ucs) {
 		/*
 		 * These fields occupy the same positions as the
 		 * corresponding auth slice request fields
 		 */
-		req_ucs->spc_auth_res_sz = cdesc->digest_length;
-		if (!cdesc->is_gmac) {
-			req_ucs->spc_aad_sz = cdesc->aad_len;
-			req_ucs->spc_aad_offset = 0;
+		qat_fw_req_ucs->spc_auth_res_sz = qat_session->digest_length;
+		if (!qat_session->is_gmac) {
+			qat_fw_req_ucs->spc_aad_sz = qat_session->aad_len;
+			qat_fw_req_ucs->spc_aad_offset = 0;
 		}
-	} else if (cdesc->is_single_pass) {
-		req_cipher->spc_aad_sz = cdesc->aad_len;
-		req_cipher->spc_auth_res_sz = cdesc->digest_length;
+	} else if (qat_session->is_single_pass) {
+		qat_fw_req_spc->spc_aad_sz = qat_session->aad_len;
+		qat_fw_req_spc->spc_auth_res_sz = qat_session->digest_length;
 	}
-	cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd;
-	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
-	cipher_cd_ctrl->cipher_key_sz = total_key_size >> 3;
+	cd_size = qat_session->cd_cur_ptr - (uint8_t *)&qat_session->cd;
+	qat_fw_cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
+	qat_fw_cipher->cipher_key_sz = total_key_size >> 3;
 
 	return 0;
 }
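
A detail that is easy to miss in the key padding block above: for 3DES the
code rebuilds the full 24-byte K1||K2||K3 schedule from the shortened
keying options, so with a QAT_3DES_KEY_SZ_OPT2 key K3 is cloned from K1,
with a QAT_3DES_KEY_SZ_OPT3 key the single key is replicated into K2 and
K3, and any other short key is zero-padded up to total_key_size.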
 
-int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
+int qat_sym_cd_auth_set(struct qat_sym_session *qat_session,
 						const uint8_t *authkey,
 						uint32_t authkeylen,
-						uint32_t aad_length,
+						uint32_t aadlen,
 						uint32_t digestsize,
 						unsigned int operation)
 {
-	struct icp_qat_hw_auth_setup *hash;
-	struct icp_qat_hw_cipher_algo_blk *cipherconfig;
-	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
-	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
-	void *ptr = &req_tmpl->cd_ctrl;
-	struct icp_qat_fw_cipher_cd_ctrl_hdr *cipher_cd_ctrl = ptr;
-	struct icp_qat_fw_auth_cd_ctrl_hdr *hash_cd_ctrl = ptr;
-	struct icp_qat_fw_la_auth_req_params *auth_param =
+	struct icp_qat_hw_auth_setup *qat_fw_cd_auth;
+	struct icp_qat_hw_cipher_algo_blk *qat_fw_cd_cipher;
+	struct icp_qat_fw_la_bulk_req *qat_fw_req = &qat_session->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *qat_fw_cd_pars = &qat_fw_req->cd_pars;
+	void *ptr = &qat_fw_req->cd_ctrl;
+	struct icp_qat_fw_cipher_cd_ctrl_hdr *qat_fw_cipher = ptr;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *qat_fw_hash = ptr;
+	struct icp_qat_fw_la_auth_req_params *qat_fw_req_auth =
 		(struct icp_qat_fw_la_auth_req_params *)
-		((char *)&req_tmpl->serv_specif_rqpars +
+		((char *)&qat_fw_req->serv_specif_rqpars +
 		ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
 	uint16_t state1_size = 0, state2_size = 0, cd_extra_size = 0;
 	uint16_t hash_offset, cd_size;
@@ -1778,151 +1778,151 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 	uint32_t wordIndex  = 0;
 	uint32_t *pTempKey;
 
-	if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
-		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+	if (qat_session->qat_cmd == ICP_QAT_FW_LA_CMD_AUTH) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_hash,
 					ICP_QAT_FW_SLICE_AUTH);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_hash,
 					ICP_QAT_FW_SLICE_DRAM_WR);
-		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
-	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
-		ICP_QAT_FW_COMN_CURR_ID_SET(hash_cd_ctrl,
+		qat_session->cd_cur_ptr = (uint8_t *)&qat_session->cd;
+	} else if (qat_session->qat_cmd == ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_hash,
 				ICP_QAT_FW_SLICE_AUTH);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_hash,
 				ICP_QAT_FW_SLICE_CIPHER);
-		ICP_QAT_FW_COMN_CURR_ID_SET(cipher_cd_ctrl,
+		ICP_QAT_FW_COMN_CURR_ID_SET(qat_fw_cipher,
 				ICP_QAT_FW_SLICE_CIPHER);
-		ICP_QAT_FW_COMN_NEXT_ID_SET(cipher_cd_ctrl,
+		ICP_QAT_FW_COMN_NEXT_ID_SET(qat_fw_cipher,
 				ICP_QAT_FW_SLICE_DRAM_WR);
-		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
-	} else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
+		qat_session->cd_cur_ptr = (uint8_t *)&qat_session->cd;
+	} else if (qat_session->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_HASH) {
 		QAT_LOG(ERR, "Invalid param, must be a hash command.");
 		return -EFAULT;
 	}
 
 	if (operation == RTE_CRYPTO_AUTH_OP_VERIFY)
-		cdesc->auth_op = ICP_QAT_HW_AUTH_VERIFY;
+		qat_session->auth_op = ICP_QAT_HW_AUTH_VERIFY;
 	else
-		cdesc->auth_op = ICP_QAT_HW_AUTH_GENERATE;
+		qat_session->auth_op = ICP_QAT_HW_AUTH_GENERATE;
 
 	/*
 	 * Set up the inner hash config
 	 */
-	hash_offset = cdesc->cd_cur_ptr-((uint8_t *)&cdesc->cd);
-	hash = (struct icp_qat_hw_auth_setup *)cdesc->cd_cur_ptr;
-	hash->auth_config.reserved = 0;
-	hash->auth_config.config =
-			ICP_QAT_HW_AUTH_CONFIG_BUILD(cdesc->auth_mode,
-				cdesc->qat_hash_alg, digestsize);
-
-	if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
-		|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
-		|| cdesc->is_cnt_zero
+	hash_offset = qat_session->cd_cur_ptr - ((uint8_t *)&qat_session->cd);
+	qat_fw_cd_auth = (struct icp_qat_hw_auth_setup *)qat_session->cd_cur_ptr;
+	qat_fw_cd_auth->auth_config.reserved = 0;
+	qat_fw_cd_auth->auth_config.config =
+			ICP_QAT_HW_AUTH_CONFIG_BUILD(qat_session->auth_mode,
+				qat_session->qat_hash_alg, digestsize);
+
+	if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
+		|| qat_session->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
+		|| qat_session->is_cnt_zero
 			)
-		hash->auth_counter.counter = 0;
+		qat_fw_cd_auth->auth_counter.counter = 0;
 	else {
-		int block_size = qat_hash_get_block_size(cdesc->qat_hash_alg);
+		int block_size = qat_hash_get_block_size(qat_session->qat_hash_alg);
 
 		if (block_size < 0)
 			return block_size;
-		hash->auth_counter.counter = rte_bswap32(block_size);
+		qat_fw_cd_auth->auth_counter.counter = rte_bswap32(block_size);
 	}
 
-	cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_auth_setup);
+	qat_session->cd_cur_ptr += sizeof(struct icp_qat_hw_auth_setup);
 
 	/*
 	 * cd_cur_ptr now points at the state1 information.
 	 */
-	switch (cdesc->qat_hash_alg) {
+	switch (qat_session->qat_hash_alg) {
 	case ICP_QAT_HW_AUTH_ALGO_SHA1:
-		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
+		if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
 			/* Plain SHA-1 */
-			rte_memcpy(cdesc->cd_cur_ptr, sha1InitialState,
+			rte_memcpy(qat_session->cd_cur_ptr, sha1InitialState,
 					sizeof(sha1InitialState));
 			state1_size = qat_hash_get_state1_size(
-					cdesc->qat_hash_alg);
+					qat_session->qat_hash_alg);
 			break;
 		}
 		/* SHA-1 HMAC */
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_SHA1, authkey,
-			authkeylen, cdesc->cd_cur_ptr, &state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr, &state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(SHA)precompute failed");
 			return -EFAULT;
 		}
 		state2_size = RTE_ALIGN_CEIL(ICP_QAT_HW_SHA1_STATE2_SZ, 8);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA224:
-		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
+		if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
 			/* Plain SHA-224 */
-			rte_memcpy(cdesc->cd_cur_ptr, sha224InitialState,
+			rte_memcpy(qat_session->cd_cur_ptr, sha224InitialState,
 					sizeof(sha224InitialState));
 			state1_size = qat_hash_get_state1_size(
-					cdesc->qat_hash_alg);
+					qat_session->qat_hash_alg);
 			break;
 		}
 		/* SHA-224 HMAC */
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_SHA224, authkey,
-			authkeylen, cdesc->cd_cur_ptr, &state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr, &state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(SHA)precompute failed");
 			return -EFAULT;
 		}
 		state2_size = ICP_QAT_HW_SHA224_STATE2_SZ;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA256:
-		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
+		if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
 			/* Plain SHA-256 */
-			rte_memcpy(cdesc->cd_cur_ptr, sha256InitialState,
+			rte_memcpy(qat_session->cd_cur_ptr, sha256InitialState,
 					sizeof(sha256InitialState));
 			state1_size = qat_hash_get_state1_size(
-					cdesc->qat_hash_alg);
+					qat_session->qat_hash_alg);
 			break;
 		}
 		/* SHA-256 HMAC */
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_SHA256, authkey,
-			authkeylen, cdesc->cd_cur_ptr,	&state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr,	&state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(SHA)precompute failed");
 			return -EFAULT;
 		}
 		state2_size = ICP_QAT_HW_SHA256_STATE2_SZ;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA384:
-		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
+		if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
 			/* Plain SHA-384 */
-			rte_memcpy(cdesc->cd_cur_ptr, sha384InitialState,
+			rte_memcpy(qat_session->cd_cur_ptr, sha384InitialState,
 					sizeof(sha384InitialState));
 			state1_size = qat_hash_get_state1_size(
-					cdesc->qat_hash_alg);
+					qat_session->qat_hash_alg);
 			break;
 		}
 		/* SHA-384 HMAC */
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_SHA384, authkey,
-			authkeylen, cdesc->cd_cur_ptr, &state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr, &state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(SHA)precompute failed");
 			return -EFAULT;
 		}
 		state2_size = ICP_QAT_HW_SHA384_STATE2_SZ;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SHA512:
-		if (cdesc->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
+		if (qat_session->auth_mode == ICP_QAT_HW_AUTH_MODE0) {
 			/* Plain SHA-512 */
-			rte_memcpy(cdesc->cd_cur_ptr, sha512InitialState,
+			rte_memcpy(qat_session->cd_cur_ptr, sha512InitialState,
 					sizeof(sha512InitialState));
 			state1_size = qat_hash_get_state1_size(
-					cdesc->qat_hash_alg);
+					qat_session->qat_hash_alg);
 			break;
 		}
 		/* SHA-512 HMAC */
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_SHA512, authkey,
-			authkeylen, cdesc->cd_cur_ptr,	&state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr,	&state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(SHA)precompute failed");
 			return -EFAULT;
 		}
@@ -1931,12 +1931,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 	case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
 		state1_size = ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
 
-		if (cdesc->aes_cmac)
-			memset(cdesc->cd_cur_ptr, 0, state1_size);
+		if (qat_session->aes_cmac)
+			memset(qat_session->cd_cur_ptr, 0, state1_size);
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC,
-			authkey, authkeylen, cdesc->cd_cur_ptr + state1_size,
-			&state2_size, cdesc->aes_cmac)) {
-			cdesc->aes_cmac ? QAT_LOG(ERR,
+			authkey, authkeylen, qat_session->cd_cur_ptr + state1_size,
+			&state2_size, qat_session->aes_cmac)) {
+			qat_session->aes_cmac ? QAT_LOG(ERR,
 						  "(CMAC)precompute failed")
 					: QAT_LOG(ERR,
 						  "(XCBC)precompute failed");
@@ -1945,11 +1945,11 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
 	case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM;
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM;
 		state1_size = ICP_QAT_HW_GALOIS_128_STATE1_SZ;
-		if (qat_sym_do_precomputes(cdesc->qat_hash_alg, authkey,
-			authkeylen, cdesc->cd_cur_ptr + state1_size,
-			&state2_size, cdesc->aes_cmac)) {
+		if (qat_sym_do_precomputes(qat_session->qat_hash_alg, authkey,
+			authkeylen, qat_session->cd_cur_ptr + state1_size,
+			&state2_size, qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(GCM)precompute failed");
 			return -EFAULT;
 		}
@@ -1957,58 +1957,58 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		 * Write (the length of AAD) into bytes 16-19 of state2
 		 * in big-endian format. This field is 8 bytes
 		 */
-		auth_param->u2.aad_sz =
-				RTE_ALIGN_CEIL(aad_length, 16);
-		auth_param->hash_state_sz = (auth_param->u2.aad_sz) >> 3;
+		qat_fw_req_auth->u2.aad_sz =
+				RTE_ALIGN_CEIL(aadlen, 16);
+		qat_fw_req_auth->hash_state_sz = (qat_fw_req_auth->u2.aad_sz) >> 3;
 
-		aad_len = (uint32_t *)(cdesc->cd_cur_ptr +
+		aad_len = (uint32_t *)(qat_session->cd_cur_ptr +
 					ICP_QAT_HW_GALOIS_128_STATE1_SZ +
 					ICP_QAT_HW_GALOIS_H_SZ);
-		*aad_len = rte_bswap32(aad_length);
-		cdesc->aad_len = aad_length;
+		*aad_len = rte_bswap32(aadlen);
+		qat_session->aad_len = aadlen;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
 		state1_size = qat_hash_get_state1_size(
 				ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2);
 		state2_size = ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ;
-		memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size);
+		memset(qat_session->cd_cur_ptr, 0, state1_size + state2_size);
 
-		cipherconfig = (struct icp_qat_hw_cipher_algo_blk *)
-				(cdesc->cd_cur_ptr + state1_size + state2_size);
-		cipherconfig->cipher_config.val =
+		qat_fw_cd_cipher = (struct icp_qat_hw_cipher_algo_blk *)
+				(qat_session->cd_cur_ptr + state1_size + state2_size);
+		qat_fw_cd_cipher->cipher_config.val =
 		ICP_QAT_HW_CIPHER_CONFIG_BUILD(ICP_QAT_HW_CIPHER_ECB_MODE,
 			ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2,
 			ICP_QAT_HW_CIPHER_KEY_CONVERT,
 			ICP_QAT_HW_CIPHER_ENCRYPT);
-		memcpy(cipherconfig->key, authkey, authkeylen);
-		memset(cipherconfig->key + authkeylen,
+		memcpy(qat_fw_cd_cipher->key, authkey, authkeylen);
+		memset(qat_fw_cd_cipher->key + authkeylen,
 				0, ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ);
 		cd_extra_size += sizeof(struct icp_qat_hw_cipher_config) +
 				authkeylen + ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ;
-		auth_param->hash_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
+		qat_fw_req_auth->hash_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
-		hash->auth_config.config =
+		qat_fw_cd_auth->auth_config.config =
 			ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
-				cdesc->qat_hash_alg, digestsize);
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+				qat_session->qat_hash_alg, digestsize);
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
 		state1_size = qat_hash_get_state1_size(
 				ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3);
 		state2_size = ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ;
-		memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size
+		memset(qat_session->cd_cur_ptr, 0, state1_size + state2_size
 			+ ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ);
 
-		memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+		memcpy(qat_session->cd_cur_ptr + state1_size, authkey, authkeylen);
 		cd_extra_size += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ;
-		auth_param->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
-		cdesc->min_qat_dev_gen = QAT_GEN2;
+		qat_fw_req_auth->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
+		qat_session->min_qat_dev_gen = QAT_GEN2;
 
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_MD5:
 		if (qat_sym_do_precomputes(ICP_QAT_HW_AUTH_ALGO_MD5, authkey,
-			authkeylen, cdesc->cd_cur_ptr, &state1_size,
-			cdesc->aes_cmac)) {
+			authkeylen, qat_session->cd_cur_ptr, &state1_size,
+			qat_session->aes_cmac)) {
 			QAT_LOG(ERR, "(MD5)precompute failed");
 			return -EFAULT;
 		}
@@ -2020,35 +2020,35 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		state2_size = ICP_QAT_HW_NULL_STATE2_SZ;
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC:
-		cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_CCM;
+		qat_session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_CCM;
 		state1_size = qat_hash_get_state1_size(
 				ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC);
 		state2_size = ICP_QAT_HW_AES_CBC_MAC_KEY_SZ +
 				ICP_QAT_HW_AES_CCM_CBC_E_CTR0_SZ;
 
-		if (aad_length > 0) {
-			aad_length += ICP_QAT_HW_CCM_AAD_B0_LEN +
+		if (aadlen > 0) {
+			aadlen += ICP_QAT_HW_CCM_AAD_B0_LEN +
 			ICP_QAT_HW_CCM_AAD_LEN_INFO;
-			auth_param->u2.aad_sz =
-			RTE_ALIGN_CEIL(aad_length,
+			qat_fw_req_auth->u2.aad_sz =
+			RTE_ALIGN_CEIL(aadlen,
 			ICP_QAT_HW_CCM_AAD_ALIGNMENT);
 		} else {
-			auth_param->u2.aad_sz = ICP_QAT_HW_CCM_AAD_B0_LEN;
+			qat_fw_req_auth->u2.aad_sz = ICP_QAT_HW_CCM_AAD_B0_LEN;
 		}
-		cdesc->aad_len = aad_length;
-		hash->auth_counter.counter = 0;
+		qat_session->aad_len = aadlen;
+		qat_fw_cd_auth->auth_counter.counter = 0;
 
-		hash_cd_ctrl->outer_prefix_sz = digestsize;
-		auth_param->hash_state_sz = digestsize;
+		qat_fw_hash->outer_prefix_sz = digestsize;
+		qat_fw_req_auth->hash_state_sz = digestsize;
 
-		memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+		memcpy(qat_session->cd_cur_ptr + state1_size, authkey, authkeylen);
 		break;
 	case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
 		state1_size = qat_hash_get_state1_size(
 				ICP_QAT_HW_AUTH_ALGO_KASUMI_F9);
 		state2_size = ICP_QAT_HW_KASUMI_F9_STATE2_SZ;
-		memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size);
-		pTempKey = (uint32_t *)(cdesc->cd_cur_ptr + state1_size
+		memset(qat_session->cd_cur_ptr, 0, state1_size + state2_size);
+		pTempKey = (uint32_t *)(qat_session->cd_cur_ptr + state1_size
 							+ authkeylen);
 		/*
 		* The Inner Hash Initial State2 block must contain IK
@@ -2056,7 +2056,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 		* (Key Modifier): IK||(IK^KM).
 		*/
 		/* write the auth key */
-		memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+		memcpy(qat_session->cd_cur_ptr + state1_size, authkey, authkeylen);
 		/* initialise temp key with auth key */
 		memcpy(pTempKey, authkey, authkeylen);
 		/* XOR Key with KASUMI F9 key modifier at 4 bytes level */
@@ -2064,29 +2064,30 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 			pTempKey[wordIndex] ^= KASUMI_F9_KEY_MODIFIER_4_BYTES;
 		break;
 	default:
-		QAT_LOG(ERR, "Invalid HASH alg %u", cdesc->qat_hash_alg);
+		QAT_LOG(ERR, "Invalid HASH alg %u", qat_session->qat_hash_alg);
 		return -EFAULT;
 	}
 
 	/* Auth CD config setup */
-	hash_cd_ctrl->hash_cfg_offset = hash_offset >> 3;
-	hash_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
-	hash_cd_ctrl->inner_res_sz = digestsize;
-	hash_cd_ctrl->final_sz = digestsize;
-	hash_cd_ctrl->inner_state1_sz = state1_size;
-	auth_param->auth_res_sz = digestsize;
-
-	hash_cd_ctrl->inner_state2_sz  = state2_size;
-	hash_cd_ctrl->inner_state2_offset = hash_cd_ctrl->hash_cfg_offset +
+	qat_fw_hash->hash_cfg_offset = hash_offset >> 3;
+	qat_fw_hash->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	qat_fw_hash->inner_res_sz = digestsize;
+	qat_fw_hash->final_sz = digestsize;
+	qat_fw_hash->inner_state1_sz = state1_size;
+	qat_fw_req_auth->auth_res_sz = digestsize;
+
+	qat_fw_hash->inner_state2_sz  = state2_size;
+	qat_fw_hash->inner_state2_offset = qat_fw_hash->hash_cfg_offset +
 			((sizeof(struct icp_qat_hw_auth_setup) +
-			 RTE_ALIGN_CEIL(hash_cd_ctrl->inner_state1_sz, 8))
+			 RTE_ALIGN_CEIL(qat_fw_hash->inner_state1_sz, 8))
 					>> 3);
 
-	cdesc->cd_cur_ptr += state1_size + state2_size + cd_extra_size;
-	cd_size = cdesc->cd_cur_ptr-(uint8_t *)&cdesc->cd;
+	qat_session->cd_cur_ptr += state1_size + state2_size + cd_extra_size;
+	cd_size = qat_session->cd_cur_ptr-(uint8_t *)&qat_session->cd;
 
-	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
-	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
+	qat_fw_cd_pars->u.s.content_desc_addr = qat_session->cd_paddr;
+	qat_fw_cd_pars->u.s.content_desc_params_sz =
+			RTE_ALIGN_CEIL(cd_size, 8) >> 3;
 
 	return 0;
 }
@@ -2238,10 +2239,10 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	int ret;
 	int qat_cmd_id;
 	struct rte_crypto_sym_xform *xform = NULL;
-	struct qat_sym_session *session = session_private;
+	struct qat_sym_session *qat_session = session_private;
 
 	/* Clear the session */
-	memset(session, 0, qat_sym_session_get_private_size(dev));
+	memset(qat_session, 0, qat_sym_session_get_private_size(dev));
 
 	ret = qat_sec_session_check_docsis(conf);
 	if (ret) {
@@ -2252,7 +2253,7 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	xform = conf->crypto_xform;
 
 	/* Verify the session physical address is known */
-	rte_iova_t session_paddr = rte_mempool_virt2iova(session);
+	rte_iova_t session_paddr = rte_mempool_virt2iova(qat_session);
 	if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) {
 		QAT_LOG(ERR,
 			"Session physical address unknown. Bad memory pool.");
@@ -2260,10 +2261,10 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	}
 
 	/* Set context descriptor physical address */
-	session->cd_paddr = session_paddr +
+	qat_session->cd_paddr = session_paddr +
 			offsetof(struct qat_sym_session, cd);
 
-	session->min_qat_dev_gen = QAT_GEN1;
+	qat_session->min_qat_dev_gen = QAT_GEN1;
 
 	/* Get requested QAT command id - should be cipher */
 	qat_cmd_id = qat_get_cmd_id(xform);
@@ -2271,12 +2272,12 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 		QAT_LOG(ERR, "Unsupported xform chain requested");
 		return -ENOTSUP;
 	}
-	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
+	qat_session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
 
-	ret = qat_sym_session_configure_cipher(dev, xform, session);
+	ret = qat_sym_session_configure_cipher(dev, xform, qat_session);
 	if (ret < 0)
 		return ret;
-	qat_sym_session_finalize(session);
+	qat_sym_session_finalize(qat_session);
 
 	return 0;
 }
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [EXT] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function Fan Zhang
@ 2021-10-16 11:46       ` Akhil Goyal
  0 siblings, 0 replies; 96+ messages in thread
From: Akhil Goyal @ 2021-10-16 11:46 UTC (permalink / raw)
  To: Fan Zhang, dev; +Cc: Arek Kusztal, Kai Ji

> +/* Macro to add a capability */
> +#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)				\

Can you add a comment for each of the defines, specifying what these
variables (n, b, d, k, a, i, etc.) depict?

> +	{								\
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
> +		{.sym = {						\
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
> +			{.auth = {					\
> +				.algo = RTE_CRYPTO_AUTH_##n,		\
> +				b, d					\
> +			}, }						\
> +		}, }							\
> +	}
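
For instance, a short comment above each macro could spell out that n is
the algorithm name suffix, while b and d (and k, a, i in the macros
below) are themselves designated initializers for the block size and the
key/digest/AAD/IV size ranges. A hypothetical invocation, with the
CAP_SET/CAP_RNG helpers assumed from later in this series and the values
invented for illustration only:

	QAT_SYM_PLAIN_AUTH_CAP(SHA3_256,
		CAP_SET(block_size, 136),
		CAP_RNG(digest_size, 32, 32, 0)),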
> +
> +#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
> +	{								\
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
> +		{.sym = {						\
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
> +			{.auth = {					\
> +				.algo = RTE_CRYPTO_AUTH_##n,		\
> +				b, k, d, a, i				\
> +			}, }						\
> +		}, }							\
> +	}
> +
> +#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
> +	{								\
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
> +		{.sym = {						\
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
> +			{.aead = {					\
> +				.algo = RTE_CRYPTO_AEAD_##n,		\
> +				b, k, d, a, i				\
> +			}, }						\
> +		}, }							\
> +	}
> +
> +#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
> +	{								\
> +		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
> +		{.sym = {						\
> +			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
> +			{.cipher = {					\
> +				.algo = RTE_CRYPTO_CIPHER_##n,		\
> +				b, k, i					\
> +			}, }						\
> +		}, }							\
> +	}
> +
>  extern uint8_t qat_sym_driver_id;
> 
> +extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
> +
>  int
>  qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
>  		struct qat_dev_cmd_param *qat_dev_cmd_param);
> --
> 2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations
  2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
                       ` (9 preceding siblings ...)
  2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 10/10] common/qat: unify naming conventions in qat functions Fan Zhang
@ 2021-10-22 17:03     ` Fan Zhang
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function Fan Zhang
                         ` (9 more replies)
  10 siblings, 10 replies; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patchset introduces a new QAT driver structure and updates the
existing symmetric crypto QAT PMD.

The purpose of the change is to isolate the QAT generation specific
implementations from one another.

It is expected that changes to one generation's driver code will have
minimal impact on the other generations' implementations. Likewise,
adding support for new features or for new QAT generation hardware
will have zero impact on existing functionality.
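
In outline, each generation provides a table of function pointers and
registers it from an RTE_INIT constructor; common code then dispatches
through the table selected by the device's generation. A condensed
sketch of the pattern, with names taken from the patches below:

	/* in a generation-specific file */
	static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
		.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
		.qat_dev_read_config = qat_dev_read_config_gen1,
		/* ... */
	};

	RTE_INIT(qat_dev_gen_gen3_init)
	{
		qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
	}

	/* in common code */
	struct qat_dev_hw_spec_funcs *ops_hw =
			qat_dev_hw_spec[qat_dev->qat_dev_gen];
	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config, -ENOTSUP);
	ret = ops_hw->qat_dev_read_config(qat_dev);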

v4:
- rebased on top of latest master.
- updated comments.
- removed naming convention patch.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Fan Zhang (9):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  254 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   65 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  299 ++++
 drivers/common/qat/qat_common.c               |   15 +
 drivers/common/qat/qat_common.h               |   19 +-
 drivers/common/qat/qat_device.c               |  205 ++-
 drivers/common/qat/qat_device.h               |   45 +-
 drivers/common/qat/qat_qp.c                   |  677 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  175 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  124 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  276 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   54 +-
 drivers/crypto/qat/qat_crypto.c               |  172 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   76 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 41 files changed, 3772 insertions(+), 2750 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 15:06         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation Fan Zhang
                         ` (8 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the data structures and function prototypes for
different QAT generations.
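
Note that the generation enum is re-based at zero and gains a
QAT_N_GENS sentinel, so the generation value can be used directly as an
array index. A minimal sketch of the intended use, following how the
arrays are declared and consumed in this series:

	struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
	struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];

	/* direct per-generation lookup, no offset correction needed */
	struct qat_dev_hw_spec_funcs *ops_hw =
			qat_dev_hw_spec[qat_dev->qat_dev_gen];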

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };
 
 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };
 
+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };
 
-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
 
+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32
 
+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 15:11         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function Fan Zhang
                         ` (7 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT device configuration
implementation with separate per-generation files, holding either
shared or generation-specific implementations.
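
One detail worth noting: the generation-specific private data is carved
out of the same memzone as the common device structure, so a single
allocation covers both. A condensed sketch of the allocation, with
names as in the patch:

	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
			rte_socket_id(), 0);

	qat_dev = qat_dev_mz->addr;
	memset(qat_dev, 0, qat_dev_size);
	/* dev_private points just past the common struct, into the
	 * generation-specific tail of the same allocation
	 */
	qat_dev->dev_private = qat_dev + 1;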

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  66 +++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 391 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..d9e75fe9e2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,66 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pairs reset not supported on base, continue
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have a device configuration to read,
+	 * but this op is implemented anyway, so that a NULL pointer on
+	 * a higher generation can be treated as a fault
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
+		QAT_NUM_INTERM_BUFS_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..4ad0ffa728
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GENS_H_
+#define _QAT_DEV_GENS_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..437996f2e8 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size, extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,63 +229,63 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: no extra size op for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);
 
-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}
 
-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
 	}
 
+	/* No errors when allocating, attach memzone with
+	 * qat_dev to list of devices
+	 */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;
 
@@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the PF driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function Fan Zhang
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 15:28         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation Fan Zhang
                         ` (6 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

This patch adds the queue pair data structure and function
prototypes for different QAT generations.
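
A minimal usage sketch, assuming the per-generation table has been
populated (as done in the next patch; error handling elided):

	struct qat_qp_hw_spec_funcs *ops =
			qat_qp_hw_spec[qat_dev->qat_dev_gen];
	const struct qat_qp_hw_data *qp_hw_data =
			ops->qat_qp_get_hw_data(qat_dev,
					QAT_SERVICE_SYMMETRIC, qp_id);

	/* a qp only exists where the hw data advertises the service */
	if (qp_hw_data == NULL ||
			qp_hw_data->service_type != QAT_SERVICE_SYMMETRIC)
		return -EINVAL;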

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)
 
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"
 
-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */
 
@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;
 
 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};
 
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (2 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 15:52         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function Fan Zhang
                         ` (5 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT queue pair configuration
implementation with separate per-generation files, holding either
shared or generation-specific implementations.
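
The effect on the hot path is that ring CSR accesses also go through
the per-generation table; a condensed sketch of the dispatch that
replaces the direct WRITE_CSR_RING_TAIL()/WRITE_CSR_RING_HEAD() calls
in common code:

	struct qat_qp_hw_spec_funcs *ops =
			qat_qp_hw_spec[qat_dev_gen];
	ops->qat_qp_csr_write_tail(qp, q);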

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index d9e75fe9e2..cc63b55bd1 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
@@ -10,6 +11,194 @@
 
 #define ADF_ARB_REG_SLOT			0x1000
 
+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+				(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
 
 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 
 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */
 
 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
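+/*
+ * GEN3 provides a single queue pair per service (see the table above),
+ * so qp_id indexes the static table directly.
+ */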
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 
 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"
 
 #include <stdint.h>
 
+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };
 
-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }
 
+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }
 
+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
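+/*
+ * GEN4 queue pair ops; all transport CSR accesses go through the
+ * GEN4VF macro variants defined for this generation.
+ */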
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }
 
-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
 
 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 4ad0ffa728..7c92f1938c 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);
 
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
 
-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C
 
 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..cde421eb77 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128
 
-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];
 
-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);
 
-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
 {
-	struct qat_qp *qp;
+	struct qat_qp *qp = NULL;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;
 
 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}
 
-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT Internal Error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}
 
-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;
 
 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}
 
-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}
 
@@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);
 
+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;
 
 create_err:
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	return -EFAULT;
-}
-
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
+	if (qp) {
+		if (qp->op_cookie_pool)
+			rte_mempool_free(qp->op_cookie_pool);
 
-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
+		if (qp->op_cookies)
+			rte_free(qp->op_cookies);
 
-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
+		rte_free(qp);
 	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);
 
-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
+	return -EFAULT;
 }
 
 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}
 
 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }
 
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }
 
 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;
 
-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }
 
-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }
 
-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }
 
-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }
 
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }
 
-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }
 
 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
 
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * The callback is expected to have been validated at
+	 * initialization time; it is not re-checked on this fast path.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }
 
+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * The callback is expected to have been validated at
+	 * initialization time; it is not re-checked on this fast path.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;
 
-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}
 
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }
 
 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@
 
 #define QAT_QP_MIN_INFL_THRESHOLD	256
 
-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;
 
 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,
 
 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);
 
 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {
 
 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index d4f087733f..5b8ee4bee6 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
 
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (3 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 16:22         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation Fan Zhang
                         ` (4 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch adds the compression data structures and function
prototypes for the different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
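For reference, the intended use of the new builders (a minimal sketch,
not part of this patch, using only definitions introduced below): fill
the relevant config struct and let the helper pack the fields and
byte-swap the resulting 32-bit CSR word (the helpers return
rte_bswap32() of the packed value).

	/* Sketch: lower compression CSR for deflate at search depth 1.
	 * Fields left out are zero-initialized, which matches their
	 * _DEFAULT_VAL definitions in icp_qat_hw_gen4_comp_defs.h.
	 */
	struct icp_qat_hw_comp_20_config_csr_lower lower = {
		.algo = ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE,
		.sd = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1,
	};
	uint32_t lower_csr = ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(lower);
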
 drivers/common/qat/dev/qat_dev_gen1.c         |   2 -
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 299 ++++++++++++++++++
 drivers/common/qat/qat_common.h               |   4 +-
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 9 files changed, 675 insertions(+), 176 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index cc63b55bd1..38757e6e40 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -251,6 +251,4 @@ RTE_INIT(qat_dev_gen_gen1_init)
 	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
-	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
-		QAT_NUM_INTERM_BUFS_GEN1;
 }
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..ad02d06b12
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,299 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..a7632e31f8 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -13,9 +13,9 @@
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)
 
 /* Intel(R) QuickAssist Technology device generation is enumerated
- * from one according to the generation of the device
+ * from one according to the generation of the device.
+ * QAT_GEN* is also used as the index into the generation-specific data
  */
-
 enum qat_device_gen {
 	QAT_GEN1,
 	QAT_GEN2,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };
 
-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };
 
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }
 
-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }
 
-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }
 
-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;
 
 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}
 
-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}
 
 	comp_req = &qat_xform->qat_comp_req_tmpl;
 
@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;
 
-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();
 
 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}
 
-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;
 
 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);
 
 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}
 
 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);
 
 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 
 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;
 
@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))
 
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}
 
 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;
 
 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46
 
 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */
 
 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)
 
 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@
 
 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
 
+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };
 
-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct qat_comp_capabilities_info
+qat_comp_get_capa_info(enum qat_device_gen qat_dev_gen,
+		struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };
 
-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }
 
-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)
 
 }
 
-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
 
-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;
 
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}
 
-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 
 
 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);
 
 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);
 
 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }
 
-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;
 
-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }
 
-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }
 
-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {
 
 }
 
-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }
 
-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }
 
-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);
 
+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;
 
-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;
 
 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;
 
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}
 
+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>
 
 #include "qat_device.h"
+#include "qat_comp.h"
 
 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat
 
+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };
 
+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
-- 
2.25.1



* [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (4 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-26 16:24         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure Fan Zhang
                         ` (3 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT compression support
implementation with separate files providing either shared or
generation-specific implementations for each QAT generation.
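
The core of the change is a per-generation function-pointer table
filled by constructor functions in each generation's file and
consulted by the common code. The following standalone C sketch shows
only the registration-and-dispatch pattern; names and values are
illustrative stand-ins, not the driver's actual symbols:

	#include <stdio.h>

	enum gen { GEN1, GEN2, N_GENS };

	struct gen_ops {
		unsigned int (*get_num_im_bufs)(void);
	};

	/* Filled at init time; common code dispatches through it
	 * instead of switching on the generation enum. */
	static struct gen_ops gen_ops_table[N_GENS];

	static unsigned int gen1_num_im_bufs(void) { return 12; }
	static unsigned int gen2_num_im_bufs(void) { return 20; }

	/* The driver uses one RTE_INIT() constructor per generation
	 * file; a plain function stands in for them here. */
	static void register_gens(void)
	{
		gen_ops_table[GEN1].get_num_im_bufs = gen1_num_im_bufs;
		gen_ops_table[GEN2].get_num_im_bufs = gen2_num_im_bufs;
	}

	int main(void)
	{
		register_gens();
		printf("GEN2 needs %u intermediate buffers\n",
				gen_ops_table[GEN2].get_num_im_bufs());
		return 0;
	}

With this layout, adding a new generation means adding one new file
that registers its own ops, with no edits to the common paths.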

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 175 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 481 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )
 
 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..8a8fa4aec5
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"QAT device cannot be used for Dynamic Deflate.");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* In CPM 1.6 only valid mode ! */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..35b75c56f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GENS_H_
+#define _QAT_COMP_PMD_GENS_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GENS_H_ */
-- 
2.25.1



* [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (5 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-27  8:11         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function Fan Zhang
                         ` (2 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.
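
As a rough illustration of the unification (simplified field names,
not the actual qat_cryptodev_private definition), a single private
struct tagged with the service type lets one set of device ops serve
both sym and asym devices:

	#include <stdint.h>
	#include <stdio.h>

	enum service_type { SERVICE_SYMMETRIC, SERVICE_ASYMMETRIC };

	struct cryptodev_private {
		enum service_type service_type; /* sym vs asym */
		uint8_t dev_id;   /* replaces sym_dev_id/asym_dev_id */
		uint16_t min_enq_burst_threshold;
	};

	static const char *
	service_get_str(enum service_type type)
	{
		return type == SERVICE_SYMMETRIC ? "sym" : "asym";
	}

	int
	main(void)
	{
		struct cryptodev_private sym = { SERVICE_SYMMETRIC, 0, 32 };
		struct cryptodev_private asym = { SERVICE_ASYMMETRIC, 1, 32 };

		/* One stats/info/qp-setup path now serves both services. */
		printf("dev %u is a %s device\n", (unsigned)sym.dev_id,
				service_get_str(sym.service_type));
		printf("dev %u is a %s device\n", (unsigned)asym.dev_id,
				service_get_str(asym.service_type));
		return 0;
	}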

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 172 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 361 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index a7632e31f8..9411a79301 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);
 
+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0944d27a4d..042f39ddcc 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..01d2439b93
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		info->driver_id = qat_sym_driver_id;
+		/* No limit on the number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
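
The unified queue pair setup above is reached through the standard
cryptodev API. A minimal application-side sketch, assuming a probed
QAT device id and a session mempool created elsewhere (the example_
names are hypothetical, not part of the patch):

#include <rte_cryptodev.h>
#include <rte_lcore.h>

/* Hypothetical caller: exercises the unified qat_cryptodev_qp_setup()
 * above through the public API; session_pool is assumed to exist. */
static int
example_setup_qp(uint8_t dev_id, struct rte_mempool *session_pool)
{
	struct rte_cryptodev_qp_conf qp_conf = {
		.nb_descriptors = 4096,
		.mp_session = session_pool,
		.mp_session_private = session_pool,
	};

	/* Dispatches to qat_cryptodev_qp_setup() via crypto_qat_ops. */
	return rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
			rte_socket_id());
}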
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * there can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device crypto capabilities */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif
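
The CAP_RNG/CAP_SET helpers above exist to compact the per-generation
capability tables added later in the series. A hypothetical entry
built with them, equivalent to the hand-written AES CBC entry in the
legacy tables:

/* Hypothetical table using the helpers; mirrors the legacy AES CBC
 * capability (key 16..32 step 8, fixed 16-byte IV). */
static const struct rte_cryptodev_capabilities example_caps[] = {
	{
		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		{.sym = {
			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
			{.cipher = {
				CAP_SET(algo, RTE_CRYPTO_CIPHER_AES_CBC),
				CAP_SET(block_size, 16),
				CAP_RNG(key_size, 16, 32, 8),
				CAP_RNG(iv_size, 16, 16, 0)
			}, }
		}, }
	},
	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};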
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 5b8ee4bee6..dec877cfab 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
@@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
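
Both cookie-init helpers in the patch above (qat_sym_init_op_cookie
and qat_asym_init_op_cookie) rely on the same pattern: the IOVA of
each member the hardware must address is computed once per descriptor
at queue pair setup, from the cookie's own mempool IOVA plus the
member offset, so the data path needs no address translation. A
standalone sketch of the pattern, with an illustrative struct rather
than the driver's:

#include <stddef.h>
#include <stdint.h>
#include <rte_mempool.h>

/* Illustrative cookie: one cached IOVA per member that the device
 * addresses directly. */
struct example_cookie {
	uint64_t sgl_phys_addr;	/* cached IOVA of 'sgl' below */
	uint8_t sgl[128];
};

/* Called once per descriptor at qp setup; valid because the cookie
 * lives in IOVA-contiguous mempool memory. */
static void
example_init_cookie(struct example_cookie *c)
{
	c->sgl_phys_addr = rte_mempool_virt2iova(c) +
			offsetof(struct example_cookie, sgl);
}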

* [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (6 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-27  9:28         ` Power, Ciara
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation Fan Zhang
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for different QAT
generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
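A condensed illustration of the dispatch this patch introduces (the
real code is in the qat_asym_pmd.c diff below; the dev_ops assignment
lands with the gen-specific implementations later in the series):

/* Condensed sketch: the probed generation indexes the ops array, and
 * a NULL cryptodev_ops slot means the service is not implemented for
 * that generation. */
static int
example_bind_gen_ops(struct rte_cryptodev *cryptodev,
		struct qat_pci_device *qat_pci_dev)
{
	const struct qat_crypto_gen_dev_ops *ops =
			&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];

	if (ops->cryptodev_ops == NULL)
		return -EFAULT;	/* e.g. no asym service on this gen */

	cryptodev->dev_ops = ops->cryptodev_ops;
	cryptodev->feature_flags = ops->get_feature_flags(qat_pci_dev);
	return 0;
}
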
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   60 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   57 +-
 9 files changed, 165 insertions(+), 1523 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 042f39ddcc..284b8096fe 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				qat_pci_dev->name);
 		return -EFAULT;
 	}
+
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper function to add an asym capability
+ * <name> <op type> <modlen (min, max, increment)>
+ **/
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
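
With the QAT_ASYM_CAP helper above, the modexp entry from the deleted
qat_asym_capabilities.h (modlen 1..512, increment 1, no op types)
collapses to a single line. A hypothetical gen-specific table:

/* Hypothetical capability table built with QAT_ASYM_CAP. */
static const struct rte_cryptodev_capabilities example_asym_caps[] = {
	QAT_ASYM_CAP(MODEX, 0, 1, 512, 1),
	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};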
 
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
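
Each generation is then expected to claim its slot in these arrays at
startup. A minimal sketch, assuming registration happens from an
RTE_INIT constructor in the per-generation file; the example_ handlers
are placeholders, not this patch's symbols:

#include <rte_common.h>
#include "qat_asym_pmd.h"

/* Placeholder gen1 ops; the real per-generation files populate a full
 * rte_cryptodev_ops table. */
static struct rte_cryptodev_ops example_asym_ops_gen1;

static uint64_t
example_asym_flags_gen1(struct qat_pci_device *qat_dev __rte_unused)
{
	return RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
			RTE_CRYPTODEV_FF_HW_ACCELERATED;
}

RTE_INIT(example_asym_gen1_init)
{
	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
			&example_asym_ops_gen1;
	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
			example_asym_flags_gen1;
}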
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index dec877cfab..b835245f17 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support enabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzone for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..0dc0c6f0d9 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,64 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/**
+ * Helper macros to add a symmetric crypto capability entry.
+ * Argument key:
+ * <n: name> <b: block size> <k: key size> <d: digest size>
+ * <a: aad size> <i: iv size>
+ */
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
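
One note on invoking these macros: the preprocessor splits macro
arguments on any comma that is not protected by parentheses, so a
braced range such as {.min = 16, .max = 32, .increment = 8} cannot be
passed directly as the b/k/d/a/i arguments. The intended style is to
wrap each field in a small parenthesized helper macro, as the later
revisions in this thread do with CAP_SET()/CAP_RNG(). A usage sketch
follows; the two helper definitions are reproduced here only for
illustration and are defined for real elsewhere in the series:

	#define CAP_SET(n, v)	.n = v
	#define CAP_RNG(n, l, u, i)	.n = {.min = l, .max = u, .increment = i}

	/* Expands to one rte_cryptodev_capabilities entry for AES-CBC. */
	static const struct rte_cryptodev_capabilities caps[] = {
		QAT_SYM_CIPHER_CAP(AES_CBC,
			CAP_SET(block_size, 16),
			CAP_RNG(key_size, 16, 32, 8),
			CAP_RNG(iv_size, 16, 16, 0)),
		RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
	};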
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (7 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function Fan Zhang
@ 2021-10-22 17:03       ` Fan Zhang
  2021-10-27 10:16         ` Power, Ciara
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 1 reply; 96+ messages in thread
From: Fan Zhang @ 2021-10-22 17:03 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

This patch replaces the mixed QAT symmetric and asymmetric
support implementation with per-generation files, each containing
either shared or generation-specific implementations.
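
The resulting dispatch pattern is worth spelling out: each generation
file fills its slot of a per-generation ops table from an RTE_INIT
constructor, and the common create path indexes that table by the
device's generation instead of switching on it. A condensed sketch,
using the GEN3 registration from the diff below; example_sym_dev_create()
is a hypothetical, simplified stand-in for the real create function:

	/* Registration side: runs at load time, once per gen file. */
	RTE_INIT(qat_sym_crypto_gen3_init)
	{
		/* GEN3 reuses the GEN1 ops table; only the caps differ. */
		qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops =
				&qat_sym_crypto_ops_gen1;
		qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
				qat_sym_crypto_cap_get_gen3;
	}

	/* Lookup side: a simplified sketch of what dev-create does. */
	static int
	example_sym_dev_create(struct qat_pci_device *qat_pci_dev)
	{
		const struct qat_crypto_gen_dev_ops *ops =
			&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];

		if (ops->cryptodev_ops == NULL)
			return -EFAULT; /* generation has no sym support */
		/* ...allocate the cryptodev, then wire in the gen ops: */
		/* cryptodev->dev_ops = ops->cryptodev_ops;             */
		return 0;
	}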

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h              |   3 -
 8 files changed, 913 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+	    'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..9ed1f21d9d
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <rte_malloc.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"
 
-- 
2.25.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function Fan Zhang
@ 2021-10-26 15:06         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 15:06 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data
>and function
>
>This patch adds the data structure and function prototypes for different QAT
>generations.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation Fan Zhang
@ 2021-10-26 15:11         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 15:11 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

Hi Fan,

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device
>implementation
>
>This patch replaces the mixed QAT device configuration implementation by
>separate files with shared or individual implementation for specific QAT
>generation.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>

>+RTE_INIT(qat_dev_gen_gen1_init)
>+{
>+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
>+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
>+	qat_gen_config[QAT_GEN1].comp_num_im_bufs_required =
>+		QAT_NUM_INTERM_BUFS_GEN1;
>+}

This line setting the comp_num_im_bufs_required field seems to be removed in patch 5. Is it needed at all?
If it is needed, maybe it is also required for GEN2/3/4, which aren't being set below.

<snip>
>+RTE_INIT(qat_dev_gen_gen2_init)
>+{
>+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
>+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2; }
>diff --git a/drivers/common/qat/dev/qat_dev_gen3.c

<snip>

>+RTE_INIT(qat_dev_gen_gen3_init)
>+{
>+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
>+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3; }

<snip>

>+RTE_INIT(qat_dev_gen_4_init)
>+{
>+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
>+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
>+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4; }

Thanks,
Ciara 

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function Fan Zhang
@ 2021-10-26 15:28         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 15:28 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue
>pair function
>
>This patch adds the queue pair data structure and function prototypes for
>different QAT generations.
>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>---
> drivers/common/qat/qat_qp.c |   3 ++
> drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++----------
>--
<snip>

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation Fan Zhang
@ 2021-10-26 15:52         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 15:52 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue
>implementation
>
>This patch replaces the mixed QAT queue pair configuration implementation
>by separate files with shared or individual implementation for specific QAT
>generation.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function Fan Zhang
@ 2021-10-26 16:22         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 16:22 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev
  Cc: gakhil, Zhang, Roy Fan, Adam Dybkowski, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Adam
>Dybkowski <adamx.dybkowski@intel.com>; Kusztal, ArkadiuszX
><arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data
>and function
>
>This patch adds the compression data structure and function prototypes for
>different QAT generations.
>
>Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation Fan Zhang
@ 2021-10-26 16:24         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-26 16:24 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev
  Cc: gakhil, Zhang, Roy Fan, Adam Dybkowski, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Adam
>Dybkowski <adamx.dybkowski@intel.com>; Kusztal, ArkadiuszX
><arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific
>implementation
>
>This patch replaces the mixed QAT compression support implementation by
>separate files with shared or individual implementation for specific QAT
>generation.
>
>Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>

>--- /dev/null
>+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
>@@ -0,0 +1,30 @@
>+/* SPDX-License-Identifier: BSD-3-Clause
>+ * Copyright(c) 2021 Intel Corporation
>+ */
>+
>+#ifndef _QAT_COMP_PMD_GEN1_H_
>+#define _QAT_COMP_PMD_GEN1_H_
>+

Maybe this should match the file name.

>+#include <rte_compressdev.h>
>+#include <rte_compressdev_pmd.h>
>+#include <stdint.h>
>+
>+#include "qat_comp_pmd.h"
>+
>+extern const struct rte_compressdev_capabilities
>+qat_gen1_comp_capabilities[];
>+
>+struct qat_comp_capabilities_info
>+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
>+
>+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
>+
>+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform
>*qat_xform,
>+		const struct rte_comp_xform *xform,
>+		enum rte_comp_op_type op_type,
>+		uint32_t *comp_slice_cfg_word);
>+
>+uint64_t qat_comp_get_features_gen1(void);
>+
>+extern struct rte_compressdev_ops qat_comp_ops_gen1;
>+
>+#endif /* _QAT_COMP_PMD_GEN1_H_ */
>--
>2.25.1

Aside from that small comment,
Acked-by: Ciara Power <ciara.power@intel.com>


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations
  2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
                         ` (8 preceding siblings ...)
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation Fan Zhang
@ 2021-10-26 16:44       ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 1/9] common/qat: add gen specific data and function Kai Ji
                           ` (8 more replies)
  9 siblings, 9 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji

This patchset introduces a new QAT driver structure and updates the
existing symmetric crypto QAT PMD.

The purpose of the change is to isolate the QAT generation specific
implementations from one another.

It is expected that changes to one generation's driver code will have
minimal impact on other generations' implementations. Likewise, adding
support for new features or for new QAT generation hardware should
have zero impact on existing functionality.

v5:
- review comments addressed

v4:
- rebased on top of latest master.
- updated comments.
- removed naming convention patch.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Fan Zhang (9):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  254 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   65 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  299 ++++
 drivers/common/qat/qat_common.c               |   15 +
 drivers/common/qat/qat_common.h               |   19 +-
 drivers/common/qat/qat_device.c               |  205 ++-
 drivers/common/qat/qat_device.h               |   45 +-
 drivers/common/qat/qat_qp.c                   |  677 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  176 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  124 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  276 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   54 +-
 drivers/crypto/qat/qat_crypto.c               |  172 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   76 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 41 files changed, 3773 insertions(+), 2750 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 1/9] common/qat: add gen specific data and function
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 2/9] common/qat: add gen specific device implementation Kai Ji
                           ` (7 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the data structure and function prototypes for
different QAT generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };

 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };

+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };

-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"

+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32

+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
--
2.17.1
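
To see how these prototypes are meant to be consumed: common code
indexes qat_dev_hw_spec[] by generation and calls through the struct,
returning an error when an op is not implemented (the driver uses
RTE_FUNC_PTR_OR_ERR_RET for this, as patch 2/9 shows). A standalone
sketch of that guarded dispatch, with simplified, illustrative types:

#include <stdio.h>
#include <errno.h>

enum dev_gen { GEN1, GEN4, N_GENS };

typedef int (*get_extra_size_t)(void);

struct hw_spec_funcs {
	get_extra_size_t get_extra_size;
};

static int get_extra_size_gen1(void)
{
	return 0;	/* GEN1 needs no private area */
}

static struct hw_spec_funcs spec_gen1 = {
	.get_extra_size = get_extra_size_gen1,
};

static struct hw_spec_funcs *hw_spec[N_GENS] = {
	[GEN1] = &spec_gen1,
	/* GEN4 deliberately left unregistered for the demo */
};

static int pci_get_extra_size(enum dev_gen gen)
{
	struct hw_spec_funcs *ops = hw_spec[gen];

	/* same guard idea as RTE_FUNC_PTR_OR_ERR_RET */
	if (ops == NULL || ops->get_extra_size == NULL)
		return -ENOTSUP;
	return ops->get_extra_size();
}

int main(void)
{
	printf("gen1 extra size: %d\n", pci_get_extra_size(GEN1));
	printf("gen4 extra size: %d\n", pci_get_extra_size(GEN4));
	return 0;
}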


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 2/9] common/qat: add gen specific device implementation
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 1/9] common/qat: add gen specific data and function Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 3/9] common/qat: add gen specific queue pair function Kai Ji
                           ` (6 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT device configuration
implementation by separate files with shared or
individual implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  64 ++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 389 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..9972280e06
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pairs reset not supported on base, continue
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have a configuration to read,
+	 * but implement this op anyway so that a NULL pointer on
+	 * higher generations can be treated as a fault
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..4ad0ffa728
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GENS_H_
+#define _QAT_DEV_GENS_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..437996f2e8 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size, extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,63 +229,63 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: no pci pointer for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);
 
-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}
 
-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
 	}
 
+	/* No errors during allocation, so attach the memzone
+	 * holding qat_dev to the list of devices
+	 */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;
 
@@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
-- 
2.17.1
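
A worked example of the GEN4 service-map decoding done in
qat_dev_read_config_gen4() above: the PF returns a bitmap with 3 bits
per bundle, and (svc >> (3 * i)) & 0x7 extracts the service of bundle
i. The standalone sketch below uses a made-up map value; the SVC_*
constants are assumed to mirror the driver's enum qat_svc_list
(sym = 3, compression = 2, asym = 4):

#include <stdio.h>
#include <stdint.h>

#define BUNDLE_NUM	4

/* assumed to match enum qat_svc_list in qat_common.h */
#define SVC_COMP	2
#define SVC_SYM		3
#define SVC_ASYM	4

static const char *svc_name(uint8_t hw_service)
{
	switch (hw_service) {
	case SVC_SYM:	return "symmetric crypto";
	case SVC_COMP:	return "compression";
	case SVC_ASYM:	return "asymmetric crypto";
	default:	return "invalid";
	}
}

int main(void)
{
	/* hypothetical map: bundle 0 = sym, bundle 1 = asym,
	 * bundle 2 = comp, bundle 3 = sym
	 */
	uint16_t svc = (SVC_SYM << 9) | (SVC_COMP << 6) |
			(SVC_ASYM << 3) | SVC_SYM;
	int i;

	for (i = 0; i < BUNDLE_NUM; i++) {
		uint8_t hw_service = (svc >> (3 * i)) & 0x7;

		printf("bundle %d: %s\n", i, svc_name(hw_service));
	}
	return 0;
}

An unknown 3-bit value maps to "invalid", which is exactly the case
where the real function bails out with -ENOTSUP.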


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 3/9] common/qat: add gen specific queue pair function
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 1/9] common/qat: add gen specific data and function Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 2/9] common/qat: add gen specific device implementation Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 4/9] common/qat: add gen specific queue implementation Kai Ji
                           ` (5 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the queue pair data structure and function
prototypes for different QAT generations.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)
 
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"
 
-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */
 
@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;
 
 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};
 
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
-- 
2.17.1
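
To illustrate what the qat_qp_get_hw_data and qat_qp_rings_per_service
ops boil down to on the fixed-layout generations: GEN1-like devices
keep a static table of queue pair descriptions and count the entries
matching a service. A standalone sketch with a deliberately small,
illustrative table (the real qat_gen1_qps covers more rings per
service):

#include <stdio.h>
#include <stdint.h>

#define MAX_QPS_PER_SERVICE	2

enum service_type { SVC_ASYM, SVC_SYM, SVC_COMP, MAX_SERVICES };

struct qp_hw_data {
	enum service_type service_type;
	uint8_t tx_ring_num;
	uint8_t rx_ring_num;
};

/* a real table fills every slot; zero-initialized entries here
 * default to SVC_ASYM (0), so only SVC_SYM is populated for the demo
 */
static const struct qp_hw_data gen1_qps[MAX_SERVICES][MAX_QPS_PER_SERVICE] = {
	[SVC_SYM] = {
		{ .service_type = SVC_SYM, .tx_ring_num = 2, .rx_ring_num = 10 },
		{ .service_type = SVC_SYM, .tx_ring_num = 3, .rx_ring_num = 11 },
	},
};

/* analogous to qat_qp_rings_per_service_gen1() in patch 4/9 */
static int rings_per_service(enum service_type service)
{
	int i, count = 0;

	for (i = 0; i < MAX_QPS_PER_SERVICE; i++)
		if (gen1_qps[service][i].service_type == service)
			count++;
	return count;
}

int main(void)
{
	printf("sym qps:  %d\n", rings_per_service(SVC_SYM));
	printf("comp qps: %d\n", rings_per_service(SVC_COMP));
	return 0;
}

GEN4 replaces the static table with the per-device qp_gen4_data read
from the PF at probe time, which is why the get_hw_data op takes the
device pointer rather than indexing a global array.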


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 4/9] common/qat: add gen specific queue implementation
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (2 preceding siblings ...)
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 3/9] common/qat: add gen specific queue pair function Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 5/9] compress/qat: add gen specific data and function Kai Ji
                           ` (4 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT queue pair configuration
implementation by separate files with shared or individual
implementation for specific QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index 9972280e06..38757e6e40 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

@@ -10,6 +11,194 @@

 #define ADF_ARB_REG_SLOT			0x1000

+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+				(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {

 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {

 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service  = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {

 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"

 #include <stdint.h>

+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };

-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }

+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }

+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }

-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {

 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
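
qat_select_valid_queue_gen4() above maps a logical qp_id onto the n-th bundle whose ring pair was configured for the requested service. A self-contained sketch of that scan follows; the bundle-to-service assignment is made up for illustration.

#include <stdio.h>

#define BUNDLE_NUM 4

enum svc { SVC_ASYM, SVC_SYM, SVC_COMP };

/* Per-bundle service assignment, e.g. as discovered from the PF */
static const enum svc bundle_svc[BUNDLE_NUM] = {
	SVC_SYM, SVC_COMP, SVC_SYM, SVC_ASYM
};

/* Return the bundle backing the qp_id-th queue pair of a service, or -1 */
static int select_valid_queue(int qp_id, enum svc service)
{
	int i, valid_qps = 0;

	for (i = 0; i < BUNDLE_NUM; i++) {
		if (bundle_svc[i] != service)
			continue;
		if (valid_qps == qp_id)
			return i;
		++valid_qps;
	}
	return -1;
}

int main(void)
{
	/* Second symmetric qp (qp_id 1) lives on bundle 2 here */
	printf("bundle: %d\n", select_valid_queue(1, SVC_SYM));
	return 0;
}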
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 4ad0ffa728..7c92f1938c 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);

+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);

-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
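
Exporting the gen1 helpers from this header lets later generations assemble their ops tables mostly out of gen1 building blocks, overriding only the hooks that differ (gen3 above swaps in qat_qp_get_hw_data_gen3 and keeps everything else). A minimal sketch of that reuse pattern, with made-up hook names:

#include <stdio.h>

struct ops {
	int (*setup)(void);
	int (*query)(void);
};

static int setup_gen1(void) { return 1; }
static int query_gen1(void) { return 10; }
static int query_gen3(void) { return 30; }	/* gen3-specific override */

/* gen3 reuses gen1 hooks wholesale and overrides only what differs */
static const struct ops ops_gen1 = {
	.setup = setup_gen1,
	.query = query_gen1,
};
static const struct ops ops_gen3 = {
	.setup = setup_gen1,	/* shared with gen1 */
	.query = query_gen3,
};

int main(void)
{
	printf("gen1 query: %d, gen3 query: %d\n",
		ops_gen1.query(), ops_gen3.query());
	return 0;
}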
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C

 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
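
ADF_ARB_RINGSRVARBEN_OFFSET locates the ring-arbitration enable register; each queue in a bundle owns one bit in it, so enabling or disabling a queue is a read-modify-write that has to be serialized (the driver uses a per-device rte_spinlock_t, arb_csr_lock). Below is a sketch of the same locked bit manipulation against an in-memory stand-in for the CSRs, with a pthread mutex in place of the spinlock:

#include <stdint.h>
#include <inttypes.h>
#include <pthread.h>
#include <stdio.h>

/* Toy register file standing in for the memory-mapped arbiter CSRs */
static uint32_t arb_csr[4];
static pthread_mutex_t arb_lock = PTHREAD_MUTEX_INITIALIZER;

/* Locked read-modify-write: flip one queue's arbitration-enable bit */
static void arb_set(unsigned int bundle, unsigned int queue, int enable)
{
	pthread_mutex_lock(&arb_lock);
	if (enable)
		arb_csr[bundle] |= 1u << queue;
	else
		arb_csr[bundle] &= ~(1u << queue);
	pthread_mutex_unlock(&arb_lock);
}

int main(void)
{
	arb_set(0, 3, 1);
	arb_set(0, 1, 1);
	arb_set(0, 3, 0);
	printf("bundle 0 CSR: 0x%" PRIx32 "\n", arb_csr[0]);	/* 0x2 */
	return 0;
}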
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
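
With qp_gen4_data removed from the common struct, generation-specific state now hangs off dev_private, sized by the generation's qat_dev_get_extra_size hook. The sketch below shows the idea with hypothetical names (struct gen4_extra, get_extra_size); only the shape matches the driver.

#include <stdlib.h>
#include <stdio.h>

struct device {
	int gen;
	void *dev_private;	/* generation-specific extra data */
};

/* Hypothetical GEN4-only extra block */
struct gen4_extra {
	int qp_service[4];
};

static size_t get_extra_size(int gen)
{
	return gen == 4 ? sizeof(struct gen4_extra) : 0;
}

int main(void)
{
	struct device dev = { .gen = 4 };
	struct gen4_extra *extra;

	dev.dev_private = calloc(1, get_extra_size(dev.gen));
	if (dev.dev_private == NULL)
		return 1;

	extra = dev.dev_private;
	extra->qp_service[0] = 1;
	printf("svc on bundle 0: %d\n", extra->qp_service[0]);
	free(dev.dev_private);
	return 0;
}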
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..cde421eb77 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128

-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];

-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);

-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
 {
-	struct qat_qp *qp;
+	struct qat_qp *qp = NULL;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;

 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}

-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT Internal Error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}

-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;

 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}

-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}

@@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);

+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;

 create_err:
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	return -EFAULT;
-}
-
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
+	if (qp) {
+		if (qp->op_cookie_pool)
+			rte_mempool_free(qp->op_cookie_pool);

-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
+		if (qp->op_cookies)
+			rte_free(qp->op_cookies);

-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
+		rte_free(qp);
 	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);

-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
+	return -EFAULT;
 }

 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}

 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }

-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }

 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;

-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }

-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL) {
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }

-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }

-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }

-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }

-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }

 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/* Pointer check should be done during initialization */
+	ops->qat_qp_csr_write_tail(qp, q);
 }

+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/* Pointer check should be done during initialization */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;

-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}

+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }

 uint16_t
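
Every generic entry point rewritten above follows the same template: fetch the generation's ops table, fail with -ENOTSUP if the hook is absent (RTE_FUNC_PTR_OR_ERR_RET), otherwise forward the call. A self-contained rendering of that template, with FUNC_PTR_OR_ERR_RET as a local stand-in for the DPDK macro:

#include <errno.h>
#include <stdio.h>
#include <stddef.h>

/* Local stand-in for DPDK's RTE_FUNC_PTR_OR_ERR_RET() */
#define FUNC_PTR_OR_ERR_RET(func, retval) do { \
	if ((func) == NULL) \
		return retval; \
} while (0)

struct qp_ops {
	int (*rings_per_service)(int service);
};

static int rings_gen1(int service) { (void)service; return 2; }

static const struct qp_ops gen1_ops = {
	.rings_per_service = rings_gen1,
};

static const struct qp_ops *ops_tbl[2] = { &gen1_ops, NULL };

static int qps_per_service(int gen, int service)
{
	const struct qp_ops *ops = ops_tbl[gen];

	if (ops == NULL)
		return -ENOTSUP;
	FUNC_PTR_OR_ERR_RET(ops->rings_per_service, -ENOTSUP);
	return ops->rings_per_service(service);
}

int main(void)
{
	printf("gen0: %d, gen1: %d\n",
		qps_per_service(0, 0), qps_per_service(1, 0));
	return 0;
}

The hot-path helpers (txq_write_tail, qat_qp_csr_write_head) deliberately skip the NULL check and rely on setup-time validation, trading one branch per call for the guarantee established earlier.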
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@

 #define QAT_QP_MIN_INFL_THRESHOLD	256

-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;

 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,

 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);

 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {

 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index d4f087733f..5b8ee4bee6 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;

-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
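
After this change the symmetric PMD no longer branches on the device generation at qp setup; it asks the common layer for the qp's hardware data and treats NULL as an out-of-range qp_id. The sketch below mirrors that flow with a stubbed lookup (get_hw_data and the ring numbers are invented):

#include <stdio.h>
#include <stddef.h>

struct qp_hw_data { int tx_ring, rx_ring; };

/* Stub for qat_qp_get_hw_data(): NULL means qp_id not backed by hw */
static const struct qp_hw_data *get_hw_data(int service, int qp_id)
{
	static const struct qp_hw_data sym_qps[2] = {
		{ .tx_ring = 2, .rx_ring = 10 },
		{ .tx_ring = 3, .rx_ring = 11 },
	};

	if (service != 0 || qp_id >= 2)
		return NULL;
	return &sym_qps[qp_id];
}

static int qp_setup(int qp_id)
{
	const struct qp_hw_data *hw = get_hw_data(0, qp_id);

	if (hw == NULL) {
		fprintf(stderr, "qp_id %d invalid for this device\n", qp_id);
		return -1;
	}
	printf("qp %d: tx ring %d, rx ring %d\n",
		qp_id, hw->tx_ring, hw->rx_ring);
	return 0;
}

int main(void)
{
	qp_setup(1);
	qp_setup(5);	/* rejected by the NULL check */
	return 0;
}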
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 5/9] compress/qat: add gen specific data and function
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (3 preceding siblings ...)
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 4/9] common/qat: add gen specific queue implementation Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 6/9] compress/qat: add gen specific implementation Kai Ji
                           ` (3 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the compression data structures and function
prototypes for different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 299 ++++++++++++++++++
 drivers/common/qat/qat_common.h               |   4 +-
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 675 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
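
Each builder above folds a struct of small enums into a single 32-bit CSR word, one QAT_FIELD_SET() per field, before byte-swapping the result, presumably to match the byte order the firmware expects. The packing itself reduces to mask-and-shift; here is a stand-alone sketch with a local FIELD_SET macro and invented bit positions:

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Local stand-in for QAT_FIELD_SET(): place 'val' at 'bitpos', masked */
#define FIELD_SET(flags, val, bitpos, mask) \
	((flags) = ((flags) & ~((mask) << (bitpos))) | \
		(((val) & (mask)) << (bitpos)))

#define FMT_BITPOS	5	/* compression format field */
#define FMT_MASK	0x7
#define DEPTH_BITPOS	8	/* search depth field */
#define DEPTH_MASK	0xf

int main(void)
{
	uint32_t csr = 0;

	FIELD_SET(csr, 0x1, FMT_BITPOS, FMT_MASK);	/* deflate */
	FIELD_SET(csr, 0x3, DEPTH_BITPOS, DEPTH_MASK);	/* level 6 */
	printf("config csr: 0x%08" PRIx32 "\n", csr);	/* 0x00000320 */
	return 0;
}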
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..ad02d06b12
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,299 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
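
Every field in this header follows one convention: a BITPOS/MASK pair, a typedef'd enum of legal values, and a *_DEFAULT_VAL alias naming the reset value. Callers can start from the defaults and override only what a compression level demands, as in this sketch (the level thresholds are illustrative, loosely following the SEARCH_DEPTH enum above):

#include <stdio.h>

/* One field, following the header's BITPOS/MASK/enum/DEFAULT convention */
#define CSR_SEARCH_DEPTH_BITPOS	8
#define CSR_SEARCH_DEPTH_MASK	0xf

typedef enum {
	SEARCH_DEPTH_LEVEL_1 = 0x1,
	SEARCH_DEPTH_LEVEL_6 = 0x3,
	SEARCH_DEPTH_LEVEL_9 = 0x4,
} search_depth_t;

#define CSR_SEARCH_DEPTH_DEFAULT_VAL SEARCH_DEPTH_LEVEL_1

/* Pick a depth from a compress level, falling back to the default */
static search_depth_t depth_for_level(int level)
{
	if (level >= 9)
		return SEARCH_DEPTH_LEVEL_9;
	if (level >= 6)
		return SEARCH_DEPTH_LEVEL_6;
	return CSR_SEARCH_DEPTH_DEFAULT_VAL;
}

int main(void)
{
	printf("depth for level 6: 0x%x\n", depth_for_level(6));
	return 0;
}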
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..a7632e31f8 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -13,9 +13,9 @@
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)

 /* Intel(R) QuickAssist Technology device generation is enumerated
- * from one according to the generation of the device
+ * from one according to the generation of the device.
+ * QAT_GEN* is used as the index to find all devices.
  */
-
 enum qat_device_gen {
 	QAT_GEN1,
 	QAT_GEN2,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };

-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */

 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }

-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }

-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;

 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}

-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}

 	comp_req = &qat_xform->qat_comp_req_tmpl;

@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;

-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();

 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}

-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;

 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);

 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}

 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);

 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,

 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;

@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))

 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}

 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;

 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46

 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */

 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)

 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@

 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16

+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };

-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct qat_comp_capabilities_info
+qat_comp_get_capa_info(enum qat_device_gen qat_dev_gen,
+		struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };

-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }

-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)

 }

-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }

-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;

 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,


 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);

 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);

 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }

-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;

-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }

-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }

-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {

 }

-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }

-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }

-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);

+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;

-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;

 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;

-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}

+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>

 #include "qat_device.h"
+#include "qat_comp.h"

 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat

+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };

+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);

+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
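To illustrate the dispatch introduced above (sketch only, not part of the
patch): a caller resolves generation-specific behaviour through the
qat_comp_gen_dev_ops table rather than switching on qat_dev_gen; the wrapper
name below is hypothetical:

/* Sketch: look up a per-generation callback; a NULL slot means that
 * generation did not register compression support.
 */
static uint16_t
example_ram_bank_flags(enum qat_device_gen gen)
{
	const struct qat_comp_gen_dev_ops *ops = &qat_comp_gen_dev_ops[gen];

	if (ops->qat_comp_get_ram_bank_flags == NULL)
		return 0;
	return ops->qat_comp_get_ram_bank_flags();
}

The inline qat_comp_get_num_im_bufs_required() helper above follows the same
pattern, just without the NULL guard on the hot path.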
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 6/9] compress/qat: add gen specific implementation
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (4 preceding siblings ...)
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 5/9] compress/qat: add gen specific data and function Kai Ji
@ 2021-10-26 16:44         ` Kai Ji
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 7/9] crypto/qat: unified device private data structure Kai Ji
                           ` (2 subsequent siblings)
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:44 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT compression support
implementation with separate files that hold either shared or
generation-specific implementations for each QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
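Note for reviewers: with this split, enabling a hypothetical future
generation reduces to one new file that fills its slot of the dispatch table
at constructor time; RTE_INIT constructors run before main(), so the table is
populated before any device probe consults it. A sketch, assuming a QAT_GEN5
enum value and reusing the GEN1 callbacks where possible (none of the gen5
names below exist in this series):

#define QAT_NUM_INTERM_BUFS_GEN5 0

/* Hypothetical generation reusing GEN1 behaviour except for the
 * intermediate buffer count.
 */
static unsigned int
qat_comp_get_num_im_bufs_required_gen5(void)
{
	return QAT_NUM_INTERM_BUFS_GEN5;
}

RTE_INIT(qat_comp_pmd_gen5_init)
{
	qat_comp_gen_dev_ops[QAT_GEN5].compressdev_ops =
			&qat_comp_ops_gen1;
	qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_capabilities =
			qat_comp_cap_get_gen1;
	qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_num_im_bufs_required =
			qat_comp_get_num_im_bufs_required_gen5;
	qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_ram_bank_flags =
			qat_comp_get_ram_bank_flags_gen1;
	qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_set_slice_cfg_word =
			qat_comp_set_slice_cfg_word_gen1;
	qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_feature_flags =
			qat_comp_get_features_gen1;
}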
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 176 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 482 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )

 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..e3e75c8289
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
+			"QAT device can't be used for Dynamic Deflate.");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* In CPM 1.6 only valid mode ! */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..35b75c56f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GEN1_H_
+#define _QAT_COMP_PMD_GEN1_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GEN1_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 7/9] crypto/qat: unified device private data structure
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (5 preceding siblings ...)
  2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 6/9] compress/qat: add gen specific implementation Kai Ji
@ 2021-10-26 16:45         ` Kai Ji
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 8/9] crypto/qat: add gen specific data and function Kai Ji
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 9/9] crypto/qat: add gen specific implementation Kai Ji
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:45 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
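Note (illustrative only, not part of the patch): with one
qat_cryptodev_private shared by both services, per-service decisions key off
the stored service_type instead of the structure type. A sketch; the helper
name is hypothetical, the structures are from this series:

/* Sketch: pick the op cookie size from the unified private data. */
static size_t
example_cookie_size(struct rte_cryptodev *dev)
{
	struct qat_cryptodev_private *qat_private = dev->data->dev_private;

	if (qat_private->service_type == QAT_SERVICE_SYMMETRIC)
		return sizeof(struct qat_sym_op_cookie);
	return sizeof(struct qat_asym_op_cookie);
}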
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 172 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 361 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index a7632e31f8..9411a79301 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);
 
+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0944d27a4d..042f39ddcc 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..01d2439b93
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		info->driver_id = qat_sym_driver_id;
+		/* No limit on the number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
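+	/* Pick the op cookie size matching this qp's service type. */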
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * There can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device crypto capabilities for the hosted service */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 5b8ee4bee6..dec877cfab 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
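+	/* Precompute each DMA-able member's IOVA: base IOVA + offsetof. */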
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
@@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 8/9] crypto/qat: add gen specific data and function
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (6 preceding siblings ...)
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 7/9] crypto/qat: unified device private data structure Kai Ji
@ 2021-10-26 16:45         ` Kai Ji
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 9/9] crypto/qat: add gen specific implementation Kai Ji
  8 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:45 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for the different QAT
generations.
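
As an illustration (not part of this series), each generation-specific
source file is expected to fill in the qat_asym_gen_dev_ops[] table
declared by this patch (the symmetric path follows the same pattern).
Below is a minimal sketch of the registration side, assuming an
RTE_INIT constructor and the existing QAT_GEN1 enum value; the ops
table and helper shown are hypothetical placeholders, not code from
this series:

#include <rte_common.h>
#include <rte_cryptodev.h>

#include "qat_asym_pmd.h"

/* Hypothetical placeholder for the GEN1 asym cryptodev ops table. */
static struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;

static uint64_t
qat_asym_crypto_feature_flags_get_gen1(
		struct qat_pci_device *qat_dev __rte_unused)
{
	/* Same flags as the block this patch removes from qat_asym_pmd.c */
	return RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
			RTE_CRYPTODEV_FF_HW_ACCELERATED |
			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
}

/* Constructor runs at load time and plugs GEN1 into the dispatch table. */
RTE_INIT(qat_asym_crypto_gen1_init)
{
	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
			&qat_asym_crypto_ops_gen1;
	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
			qat_asym_crypto_feature_flags_get_gen1;
}

qat_asym_dev_create() then indexes this table with
qat_pci_dev->qat_dev_gen and rejects the device when the slot's
cryptodev_ops pointer is still NULL.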

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   60 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   57 +-
 9 files changed, 165 insertions(+), 1523 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our source files to the list
-build = false
-reason = '' # sentinel value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our source files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 042f39ddcc..284b8096fe 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				qat_pci_dev->name);
 		return -EFAULT;
 	}
+
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper function to add an asym capability
+ * <name> <op type> <modlen (min, max, increment)>
+ **/
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index dec877cfab..b835245f17 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support enabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzone for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
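
For reference, the per-generation ops table consulted above can be
sketched as follows. This is a minimal illustration reconstructed from
how the fields are used in this patch, not necessarily the exact
definition (presumably kept in qat_crypto.h):

	struct qat_crypto_gen_dev_ops {
		struct rte_cryptodev_ops *cryptodev_ops;
		struct qat_capabilities_info (*get_capabilities)(
				struct qat_pci_device *qat_dev);
		uint64_t (*get_feature_flags)(struct qat_pci_device *qat_dev);
	#ifdef RTE_LIB_SECURITY
		void *(*create_security_ctx)(void *cryptodev);
	#endif
	};

With this table, qat_sym_dev_create() indexes
qat_sym_gen_dev_ops[qat_dev_gen] instead of switching on the generation,
and a NULL callback cleanly signals that a generation does not support
the service.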
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..0dc0c6f0d9 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,64 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/**
+ * Helper macros to define symmetric crypto capabilities.
+ * Argument key:
+ * <n: name> <b: block size> <k: key size> <d: digest size>
+ * <a: aad_size> <i: iv_size>
+ */
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
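
To illustrate the macros above: assuming CAP_SET() and CAP_RNG() are
thin wrappers emitting designated initializers (as they are used later
in this series), an entry such as

	QAT_SYM_CIPHER_CAP(AES_CBC,
		CAP_SET(block_size, 16),
		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0))

expands to roughly

	{
		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		{.sym = {
			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
			{.cipher = {
				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
				.block_size = 16,
				.key_size = {
					.min = 16, .max = 32, .increment = 8
				},
				.iv_size = {
					.min = 16, .max = 16, .increment = 0
				}
			}, }
		}, }
	}

i.e. the same initializer that the removed
QAT_BASE_GEN1_SYM_CAPABILITIES list spelled out by hand.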
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v5 9/9] crypto/qat: add gen specific implementation
  2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                           ` (7 preceding siblings ...)
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 8/9] crypto/qat: add gen specific data and function Kai Ji
@ 2021-10-26 16:45         ` Kai Ji
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  8 siblings, 1 reply; 96+ messages in thread
From: Kai Ji @ 2021-10-26 16:45 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT symmetric and asymmetric
support implementation with separate files containing shared
or generation-specific implementations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h              |   3 -
 8 files changed, 913 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..9ed1f21d9d
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
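
QAT_ASYM_CAP itself is defined elsewhere in this series; judging from
the arguments above and the rte_cryptodev asymmetric capability layout,
QAT_ASYM_CAP(MODEX, 0, 1, 512, 1) plausibly expands to something like

	{
		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
		{.asym = {
			.xform_capa = {
				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
				.op_types = 0,
				{.modlen = {
					.min = 1, .max = 512, .increment = 1
				} }
			}
		} }
	}

with the last three arguments giving the supported modulus length range
in bytes.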
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
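
The value returned by qat_cq_get_fw_version() packs the firmware
version one byte per component, which is what the shifts above decode.
A hypothetical helper (not part of this patch) makes the layout
explicit:

	static inline void
	qat_fw_version_unpack(uint32_t ver, uint8_t *major, uint8_t *minor,
			uint8_t *patch)
	{
		*major = (ver >> 24) & 0xff; /* 0x04090000 -> 4 */
		*minor = (ver >> 16) & 0xff; /* 0x04090000 -> 9 */
		*patch = (ver >> 8) & 0xff;  /* 0x04090000 -> 0 */
	}

so MIXED_CRYPTO_MIN_FW_VER (0x04090000) reads as firmware 4.9.0, the
minimum version for which the mixed crypto capability bit is set.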
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* ----------------- GENx control path APIs ----------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_session.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
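
This constructor-based registration is what keeps the generations
independent: supporting a further generation that reuses the GEN1 data
path would need only a new constructor and capability table, with no
change to the common code. As a sketch, where QAT_GEN5 and
qat_sym_crypto_cap_get_gen5 are hypothetical names:

	RTE_INIT(qat_sym_crypto_gen5_init)
	{
		/* hypothetical: reuse GEN1 ops, supply own capabilities */
		qat_sym_gen_dev_ops[QAT_GEN5].cryptodev_ops =
				&qat_sym_crypto_ops_gen1;
		qat_sym_gen_dev_ops[QAT_GEN5].get_capabilities =
				qat_sym_crypto_cap_get_gen5;
		qat_sym_gen_dev_ops[QAT_GEN5].get_feature_flags =
				qat_sym_crypto_feature_flags_get_gen1;
	}

This mirrors what the GEN3 and GEN4 constructors above already do.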
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations
  2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 9/9] crypto/qat: add gen specific implementation Kai Ji
@ 2021-10-26 17:16           ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 1/9] common/qat: add gen specific data and function Kai Ji
                               ` (9 more replies)
  0 siblings, 10 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji

This patchset introduces a new qat driver structure and updates
the existing symmetric crypto qat PMD.

The purpose of the change is to isolate QAT generation-specific
implementations from one another.

It is expected that changes to one generation's driver code
will have minimal impact on other generations' implementations.
Likewise, adding support for new features or new qat generation
hardware will have zero impact on existing functionality.

v6:
- updates on commit messages

v5:
- review comments addressed

v4:
- rebased on top of latest master.
- updated comments.
- removed naming convention patch.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Fan Zhang (9):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  254 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   65 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  299 ++++
 drivers/common/qat/qat_common.c               |   15 +
 drivers/common/qat/qat_common.h               |   19 +-
 drivers/common/qat/qat_device.c               |  205 ++-
 drivers/common/qat/qat_device.h               |   45 +-
 drivers/common/qat/qat_qp.c                   |  677 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  176 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  124 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  276 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   54 +-
 drivers/crypto/qat/qat_crypto.c               |  172 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   76 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 41 files changed, 3773 insertions(+), 2750 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 1/9] common/qat: add gen specific data and function
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 2/9] common/qat: add gen specific device implementation Kai Ji
                               ` (8 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the data structures and function prototypes used to
implement device operations for the different QAT generations.
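
For illustration, common code is then expected to dispatch through the
new table roughly as below. The wrapper name is hypothetical and the
sketch is not part of the patch itself (it assumes the usual driver
includes for errno values):

static int
qat_dev_read_config_dispatch(struct qat_pci_device *qat_dev)
{
	struct qat_dev_hw_spec_funcs *ops =
		qat_dev_hw_spec[qat_dev->qat_dev_gen];

	/* Fail gracefully when a generation never registered the op */
	if (ops == NULL || ops->qat_dev_read_config == NULL)
		return -ENOTSUP;
	return ops->qat_dev_read_config(qat_dev);
}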

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };

 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };

+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };

-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"

+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32

+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
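
A side note on the QAT_N_GENS sentinel introduced above: it lets every
per-generation table be sized and iterated uniformly. A minimal sketch,
not part of the patch (qat_dev_hw_spec_count is an illustrative name):

/* Sketch: count the generations that registered a hw-spec table. */
static int
qat_dev_hw_spec_count(void)
{
	int i, n = 0;

	for (i = 0; i < QAT_N_GENS; i++)
		if (qat_dev_hw_spec[i] != NULL)
			n++;
	return n;
}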
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 2/9] common/qat: add gen specific device implementation
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 1/9] common/qat: add gen specific data and function Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 3/9] common/qat: add gen specific queue pair function Kai Ji
                               ` (7 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT device configuration implementation
with separate files holding the shared and the generation-specific
implementations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  64 ++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 389 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..9972280e06
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pair reset is not supported on base generations, nothing to do
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations have no configuration to read, but the
+	 * function pointer is set anyway so that a NULL entry on a
+	 * higher generation can be treated as a fault
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..4ad0ffa728
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GENS_H_
+#define _QAT_DEV_GENS_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..437996f2e8 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size, extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,63 +229,63 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: cannot get extra size for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);
 
-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}
 
-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
+	}
 
+	/* No allocation errors occurred; attach the memzone holding
+	 * qat_dev to the list of devices
+	 */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;
 
@@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does pf driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
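
The gen4 read-config path above decodes one 3-bit service code per
bundle from the word returned by the PF. A standalone sketch of that
decode; the input value is an assumed example, not real PF output:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t svc = 0x0a52;	/* assumed example value */
	int i;

	/* 4 bundles (QAT_GEN4_BUNDLE_NUM), 3 bits of service code each */
	for (i = 0; i < 4; i++)
		printf("bundle %d: service code %d\n",
				i, (svc >> (3 * i)) & 0x7);
	return 0;
}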
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 3/9] common/qat: add gen specific queue pair function
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 1/9] common/qat: add gen specific data and function Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 2/9] common/qat: add gen specific device implementation Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 4/9] common/qat: add gen specific queue implementation Kai Ji
                               ` (6 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the queue pair data structures and function
prototypes for the different QAT generations.
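
As with the device ops in the previous patch, queue pair code will
dispatch through this table once the per-generation implementations
are registered (next patch). An illustrative wrapper, not part of the
patch:

static void
qat_qp_write_tail_dispatch(struct qat_pci_device *qat_dev,
		struct qat_qp *qp, struct qat_queue *q)
{
	struct qat_qp_hw_spec_funcs *ops =
		qat_qp_hw_spec[qat_dev->qat_dev_gen];

	if (ops != NULL && ops->qat_qp_csr_write_tail != NULL)
		ops->qat_qp_csr_write_tail(qp, q);
}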

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)
 
+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"
 
-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */
 
@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
 
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;
 
 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;
 
-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};
 
 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);
 
+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 4/9] common/qat: add gen specific queue implementation
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (2 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 3/9] common/qat: add gen specific queue pair function Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 5/9] compress/qat: add gen specific data and function Kai Ji
                               ` (5 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT queue pair configuration
implementation with separate files holding the shared and the
generation-specific implementations.
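
The per-generation arbiter enable/disable helpers added below share a
simple read-modify-write on one bit per hardware queue. A minimal
standalone sketch of that pattern (arb_set_queue_bit is an
illustrative name; the real helpers go through the ADF CSR macros
under a spinlock):

#include <stdint.h>

static uint32_t
arb_set_queue_bit(uint32_t reg, uint8_t hw_queue_number, int enable)
{
	if (enable)
		reg |= (UINT32_C(1) << hw_queue_number);
	else
		reg &= ~(UINT32_C(1) << hw_queue_number);
	return reg;
}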

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index 9972280e06..38757e6e40 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

@@ -10,6 +11,194 @@

 #define ADF_ARB_REG_SLOT			0x1000

+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+				(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {

 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {

 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service  = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {

 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"

 #include <stdint.h>

+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };

-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }

+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }

+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }

-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {

 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 4ad0ffa728..7c92f1938c 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);

+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);

-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C

 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..cde421eb77 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128

-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];

-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);

-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
 {
-	struct qat_qp *qp;
+	struct qat_qp *qp = NULL;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;

 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}

-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT Internal Error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}

-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;

 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}

-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}

@@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);

+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;

 create_err:
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	return -EFAULT;
-}
-
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
+	if (qp) {
+		if (qp->op_cookie_pool)
+			rte_mempool_free(qp->op_cookie_pool);

-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
+		if (qp->op_cookies)
+			rte_free(qp->op_cookies);

-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
+		rte_free(qp);
 	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);

-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
+	return -EFAULT;
 }

 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}

 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }

-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }

 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COMPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;

-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }

-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }

-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }

-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }

-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }

-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }

 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * Pointer check is expected to have been done during
+	 * initialization, so it is skipped on this fast path.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }

+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * Pointer check is expected to have been done during
+	 * initialization, so it is skipped on this fast path.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;

-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}

+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }

 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@

 #define QAT_QP_MIN_INFL_THRESHOLD	256

-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;

 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,

 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);

 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {

 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index d4f087733f..5b8ee4bee6 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;

-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, not enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
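
The qat_qp.c changes above route every generation-specific action
through an ops table (qat_qp_hw_spec[] / qat_dev_hw_spec[]) indexed by
qat_dev_gen, with RTE_FUNC_PTR_OR_ERR_RET guarding unset entries. As a
reading aid, here is a minimal, self-contained C sketch of that
dispatch shape; every type and name below is an illustrative stand-in,
not a real DPDK/QAT definition:

#include <errno.h>
#include <stdio.h>

enum dev_gen { GEN1, GEN2, GEN3, GEN4, N_GENS };

struct qp_ops {
	int (*rings_per_service)(int service);
};

static int
gen1_rings_per_service(int service)
{
	(void)service;
	return 2; /* pretend this generation exposes two rings per service */
}

static struct qp_ops gen1_ops = {
	.rings_per_service = gen1_rings_per_service,
};

/* The GEN4 slot is deliberately left NULL to show the error path. */
static struct qp_ops *qp_ops_table[N_GENS] = {
	[GEN1] = &gen1_ops,
};

static int
qps_per_service(enum dev_gen gen, int service)
{
	struct qp_ops *ops = qp_ops_table[gen];

	/* Mirrors RTE_FUNC_PTR_OR_ERR_RET(ops->..., -ENOTSUP). */
	if (ops == NULL || ops->rings_per_service == NULL)
		return -ENOTSUP;
	return ops->rings_per_service(service);
}

int
main(void)
{
	printf("gen1: %d\n", qps_per_service(GEN1, 0)); /* 2 */
	printf("gen4: %d\n", qps_per_service(GEN4, 0)); /* -ENOTSUP */
	return 0;
}

In this shape, supporting a new generation means filling in one new
table slot; the wrappers and their callers stay untouched.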

* [dpdk-dev] [dpdk-dev v6 5/9] compress/qat: add gen specific data and function
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (3 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 4/9] common/qat: add gen specific queue implementation Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 6/9] compress/qat: add gen specific implementation Kai Ji
                               ` (4 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the compression data structures and function
prototypes for the different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 299 ++++++++++++++++++
 drivers/common/qat/qat_common.h               |   4 +-
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 675 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
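
As a usage illustration (not part of the patch), the sketch below
builds the lower config CSR word from the DEFAULT_VAL settings defined
in the companion defs header, selecting DEFLATE at search depth level
1. It assumes both headers introduced by this patch are on the include
path and that icp_qat_fw.h supplies QAT_FIELD_SET and the rte_bswap32
dependency; the function name is hypothetical:

#include <stdint.h>
#include "icp_qat_hw_gen4_comp.h"

static uint32_t
build_default_deflate_lower_csr(void)
{
	struct icp_qat_hw_comp_20_config_csr_lower csr = {
		.edmm = ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL,
		.algo = ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE,
		.sd = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1,
		.hbs = ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL,
		.abd = ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL,
		.lllbd = ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL,
		.mmctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL,
		.hash_col = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL,
		.hash_update = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL,
		.skip_ctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL,
	};

	/* Each field is masked into place, then the word is byte-swapped. */
	return ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(csr);
}

This is the kind of word the new per-generation
qat_comp_set_slice_cfg_word() callback deposits in
comp_req->cd_pars.sl.comp_slice_cfg_word, as seen in the qat_comp.c
hunks further down.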
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..ad02d06b12
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,299 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT = 0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT = 0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
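
QAT_FIELD_SET itself lives in icp_qat_fw.h, which this patch does not
modify. Purely as a reading aid, a plausible stand-in with assumed
clear-then-set semantics is sketched below, exercised with the
HW_COMP_FORMAT (bit position 5, mask 0x7) and SEARCH_DEPTH (bit
position 8, mask 0xf) fields defined above:

#include <stdint.h>
#include <stdio.h>

/* Assumed semantics: clear the field, then OR in the masked value. */
#define FIELD_SET(val32, field, bitpos, mask) \
	((val32) = (((val32) & ~((uint32_t)(mask) << (bitpos))) | \
		(((uint32_t)(field) & (mask)) << (bitpos))))

int
main(void)
{
	uint32_t val = 0;

	FIELD_SET(val, 0x1, 5, 0x7);	/* HW_COMP_FORMAT = DEFLATE */
	FIELD_SET(val, 0x1, 8, 0xf);	/* SEARCH_DEPTH = LEVEL_1 */
	printf("0x%08x\n", val);	/* prints 0x00000120 */
	return 0;
}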
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..a7632e31f8 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -13,9 +13,9 @@
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)

 /* Intel(R) QuickAssist Technology device generation is enumerated
- * from one according to the generation of the device
+ * from one according to the generation of the device.
+ * QAT_GEN* is used as the index into the per-generation tables.
  */
-
 enum qat_device_gen {
 	QAT_GEN1,
 	QAT_GEN2,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };

-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };
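
The qat_comp_num_im_buffers enum removed above is superseded by a
per-generation query, qat_comp_get_num_im_bufs_required(), used
throughout the compression changes below. A self-contained stand-in
consistent with the removed constants could look like this; the GEN4
count of zero is an assumption inferred from the new "im_bufs == 0"
handling in qat_comp.c:

enum dev_gen { GEN1, GEN2, GEN3, GEN4 };

static unsigned int
num_im_bufs_required(enum dev_gen gen)
{
	switch (gen) {
	case GEN1:
		return 12;
	case GEN2:
		return 20;
	case GEN3:
		return 64;
	default:
		/* Assumed: GEN4 needs no intermediate buffers. */
		return 0;
	}
}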

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */

 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }

-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }

-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;

 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}

-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}

 	comp_req = &qat_xform->qat_comp_req_tmpl;

@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;

-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();

 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}

-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;

 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);

 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}

 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);

 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,

 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;

@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))

 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}

 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;

 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46

 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */

 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)

 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@

 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16

+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };

-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct qat_comp_capabilities_info
+qat_comp_get_capa_info(enum qat_device_gen qat_dev_gen,
+		struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };

-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }

-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)

 }

-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }

-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;

 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,


 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);

 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);

 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }

-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;

-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }

-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }

-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {

 }

-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }

-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }

-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);

+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;

-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;

 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;

-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}

+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
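
For reference, RTE_FUNC_PTR_OR_ERR_RET() used in qat_comp_get_capa_info()
above guards an optional function pointer. Conceptually it expands to the
sketch below (simplified, not the verbatim rte_dev.h definition), so an
unregistered generation simply yields an empty capabilities_info and the
device-create path falls back to GEN1:

    /* conceptual equivalent of RTE_FUNC_PTR_OR_ERR_RET(func, retval) */
    #define FUNC_PTR_OR_ERR_RET_SKETCH(func, retval) do { \
            if ((func) == NULL)                           \
                    return retval;                        \
    } while (0)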
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>

 #include "qat_device.h"
+#include "qat_comp.h"

 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat

+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };

+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);

+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
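
The inline wrapper above is the per-generation dispatch in its simplest
form. A minimal usage sketch (the function name and log text are
illustrative only):

    static void example_dump_im_bufs(struct qat_pci_device *qat_dev)
    {
            /* dispatch through the per-generation ops table */
            unsigned int n = qat_comp_get_num_im_bufs_required(
                            qat_dev->qat_dev_gen);

            QAT_LOG(DEBUG, "device needs %u intermediate SGL buffers", n);
    }

Each generation fills its qat_comp_gen_dev_ops[] slot at load time, as
the next patch in the series shows.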
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 6/9] compress/qat: add gen specific implementation
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (4 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 5/9] compress/qat: add gen specific data and function Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 7/9] crypto/qat: unified device private data structure Kai Ji
                               ` (3 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT compression support
implementation with separate files that provide shared or
generation-specific implementations for each QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 176 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 482 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )

 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..e3e75c8289
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so "
+			"QAT device can't be used for Dynamic Deflate.");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* The only valid mode in CPM 1.6 */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
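
RTE_INIT() above runs at load time, before main(), so each generation
file registers its hooks without a central registration list. A
simplified sketch of the mechanism (the real macro in rte_common.h also
handles constructor priorities):

    /* conceptual equivalent only */
    #define INIT_SKETCH(fn) \
            static void __attribute__((constructor)) fn(void)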
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
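
Unlike the GEN1 path, which fills only comp_slice_cfg_word[0], the GEN4
helper above emits a lower and an upper CSR word. A hedged sketch of how
the hook is exercised through the ops table (the xform values are
illustrative; qat_xform is assumed to have been prepared by session
setup):

    uint32_t cfg[2] = {0, 0};
    struct rte_comp_xform xform = {
            .type = RTE_COMP_COMPRESS,
            .compress = {
                    .algo = RTE_COMP_ALGO_DEFLATE,
                    .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            },
    };

    if (qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word(
                    qat_xform, &xform, RTE_COMP_OP_STATELESS, cfg) == 0) {
            /* cfg[0] = lower config CSR, cfg[1] = upper config CSR */
    }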
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..35b75c56f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GENS_H_
+#define _QAT_COMP_PMD_GENS_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GENS_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 7/9] crypto/qat: unified device private data structure
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (5 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 6/9] compress/qat: add gen specific implementation Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 8/9] crypto/qat: add gen specific data and function Kai Ji
                               ` (2 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 172 ++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 361 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
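
A trivial usage sketch of the new helper:

    QAT_LOG(DEBUG, "setting up %s qp",
                    qat_service_get_str(QAT_SERVICE_SYMMETRIC));
    /* logs: "setting up sym qp" */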
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index a7632e31f8..9411a79301 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);
 
+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0944d27a4d..042f39ddcc 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
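
qat_asym_init_op_cookie() above relies on a common DPDK pattern: the IOVA
of a field inside a mempool element is the element's IOVA plus the
field's byte offset. A self-contained sketch (the struct is illustrative,
not from the patch):

    #include <stddef.h>
    #include <rte_mempool.h>

    struct example_cookie {
            rte_iova_t payload_iova;
            uint8_t payload[64];
    };

    static void example_init(struct example_cookie *c)
    {
            /* IOVA of c->payload, no per-field translation needed */
            c->payload_iova = rte_mempool_virt2iova(c) +
                            offsetof(struct example_cookie, payload);
    }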
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..01d2439b93
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		info->driver_id = service_type == QAT_SERVICE_ASYMMETRIC ?
+				qat_asym_driver_id : qat_sym_driver_id;
+		/* No limit on the number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
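
From an application's point of view nothing changes: queue pairs still
arrive via the public API and land in qat_cryptodev_qp_setup() through
dev_ops, which now picks the cookie size and service string from the
stored service_type. A hedged usage sketch (descriptor count and mempool
names are illustrative):

    struct rte_cryptodev_qp_conf qp_conf = {
            .nb_descriptors = 4096,
            .mp_session = sess_mp,              /* assumed mempool */
            .mp_session_private = sess_priv_mp, /* assumed mempool */
    };

    if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
                    rte_socket_id()) < 0)
            rte_exit(EXIT_FAILURE, "qp setup failed");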
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * there can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device symmetric crypto capabilities */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
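
The CAP_RNG helpers above shorten capability tables; the designated-
initialiser form and the macro form are equivalent:

    /* without the helper */
    .key_size = {.min = 16, .max = 32, .increment = 8},
    /* with the helper */
    CAP_RNG(key_size, 16, 32, 8),
    /* range pinned to zero */
    CAP_RNG_ZERO(aad_size),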
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 5b8ee4bee6..dec877cfab 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
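+/*
+ * Pre-compute the physical (IOVA) addresses of the SGL tables and
+ * the SPC GMAC cipher config embedded in each op cookie, so they do
+ * not have to be recomputed per operation on the data path.
+ */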
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
@@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 8/9] crypto/qat: add gen specific data and function
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (6 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 7/9] crypto/qat: unified device private data structure Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 9/9] crypto/qat: add gen specific implementation Kai Ji
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for the different QAT
generations.
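
For illustration, once these hooks are in place a device-create
function can resolve everything generation-specific from a single
dispatch table (a sketch only; it mirrors the changes in the diff
below):

	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];

	if (gen_dev_ops->cryptodev_ops == NULL)
		return -EFAULT;	/* generation lacks this service */

	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
	cryptodev->feature_flags =
			gen_dev_ops->get_feature_flags(qat_pci_dev);
	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);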

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   60 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   57 +-
 9 files changed, 165 insertions(+), 1523 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 042f39ddcc..284b8096fe 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				qat_pci_dev->name);
 		return -EFAULT;
 	}
+
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper macro to add an asym capability
+ * <name> <op type> <modlen (min, max, increment)>
+ **/
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
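+
+/*
+ * Example (illustration only): QAT_ASYM_CAP(MODEX, 0, 1, 512, 1)
+ * expands to the modexp entry formerly spelled out in
+ * qat_asym_capabilities.h (op_types 0, modlen 1..512, step 1).
+ */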
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
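+/*
+ * Per-generation dispatch table, populated by the gen-specific
+ * source files; a NULL cryptodev_ops marks the service as
+ * unsupported on that generation.
+ */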
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index dec877cfab..b835245f17 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support enabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzone for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..0dc0c6f0d9 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,64 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/**
+ * Helper macros to declare a symmetric crypto capability entry.
+ * Parameter key:
+ * <n: name> <b: block size> <k: key size> <d: digest size>
+ * <a: aad_size> <i: iv_size>
+ **/
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
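
A note on the capability macros above: the per-generation capability tables
in this series are built entirely from them. A hand-expanded sketch of one
entry, assuming the CAP_SET/CAP_RNG helpers (defined elsewhere in the
series) produce designated initializers for the named fields:

/* QAT_SYM_CIPHER_CAP(AES_CBC, CAP_SET(block_size, 16),
 *	CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0))
 * expands roughly to:
 */
{
	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
	{.sym = {
		.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		{.cipher = {
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.block_size = 16,
			.key_size = {.min = 16, .max = 32, .increment = 8},
			.iv_size = {.min = 16, .max = 16, .increment = 0}
		}, }
	}, }
},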
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v6 9/9] crypto/qat: add gen specific implementation
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (7 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 8/9] crypto/qat: add gen specific data and function Kai Ji
@ 2021-10-26 17:16             ` Kai Ji
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-26 17:16 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT symmetric and asymmetric
support implementation by separate files with shared or
individual implementation for specific QAT generation.
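
The split relies on each generation registering its ops into per-generation
dispatch tables via RTE_INIT constructors, which run at shared-object load
time, before any device probe; common code then only needs a NULL check to
detect an unsupported generation/service combination. A minimal consumer
sketch (the wrapper name is illustrative, not part of this patch):

static const struct qat_crypto_gen_dev_ops *
qat_sym_get_gen_ops(enum qat_device_gen gen)
{
	const struct qat_crypto_gen_dev_ops *ops =
			&qat_sym_gen_dev_ops[gen];

	/* The table was filled by RTE_INIT constructors; a NULL
	 * cryptodev_ops entry means no sym support on this generation.
	 */
	return (ops->cryptodev_ops == NULL) ? NULL : ops;
}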

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h              |   3 -
 8 files changed, 913 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..9ed1f21d9d
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_session.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_
 
 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif
 
 #include "qat_device.h"
 
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure Fan Zhang
@ 2021-10-27  8:11         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-27  8:11 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

Hi Fan,

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data
>structure
>
>This patch unifies the QAT symmetric and asymmetric device private data
>structures and functions.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>
>+
>+void
>+qat_cryptodev_info_get(struct rte_cryptodev *dev,
>+		struct rte_cryptodev_info *info)
>+{
>+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
>+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
>+	enum qat_service_type service_type = qat_private->service_type;
>+
>+	if (info != NULL) {
>+		info->max_nb_queue_pairs =
>+			qat_qps_per_service(qat_dev, service_type);
>+		info->feature_flags = dev->feature_flags;
>+		info->capabilities = qat_private->qat_dev_capabilities;
>+		info->driver_id = qat_sym_driver_id;

As this is a shared function between sym and asym, the driver id is incorrectly set to qat_sym_driver_id in all cases.
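
One possible fix (a sketch only; it assumes a qat_asym_driver_id symbol
exists alongside qat_sym_driver_id):

	info->driver_id = (service_type == QAT_SERVICE_ASYMMETRIC) ?
			qat_asym_driver_id : qat_sym_driver_id;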

>+		/* No limit of number of sessions */
>+		info->sym.max_nb_sessions = 0;
>+	}
>+}
>+

Thanks,
Ciara

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function Fan Zhang
@ 2021-10-27  9:28         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-27  9:28 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

Hi Fan,

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and
>function
>
>This patch adds the symmetric and asymmetric crypto data
>structure and function prototypes for different QAT
>generations.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---
<snip>
>@@ -101,19 +95,22 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
> 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
> 		.private_data_size = sizeof(struct qat_cryptodev_private)
> 	};
>+	struct qat_capabilities_info capa_info;
>+	const struct rte_cryptodev_capabilities *capabilities;
>+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
>+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
> 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
> 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
> 	struct rte_cryptodev *cryptodev;
> 	struct qat_cryptodev_private *internals;
>+	uint64_t capa_size;
>
>-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
>-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
>-		return -EFAULT;
>-	}
>-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
>-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
>+	if (gen_dev_ops->cryptodev_ops == NULL) {
>+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
>+				name);
> 		return -EFAULT;
> 	}

I believe the name buffer is empty when it is included in the LOG above - it seems to be set below using snprintf.
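
Moving the snprintf (quoted just below) above this check would fix it, e.g.:

	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
			qat_pci_dev->name, "asym");
	if (gen_dev_ops->cryptodev_ops == NULL) {
		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
				name);
		return -EFAULT;
	}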

>+
> 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
> 			qat_pci_dev->name, "asym");
> 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
<snip>

Thanks,
Ciara

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation
  2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation Fan Zhang
@ 2021-10-27 10:16         ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-27 10:16 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev; +Cc: gakhil, Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Fan Zhang
>Sent: Friday 22 October 2021 18:04
>To: dev@dpdk.org
>Cc: gakhil@marvell.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal,
>ArkadiuszX <arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific
>implementation
>
>This patch replaces the mixed QAT symmetric and asymmetric support
>implementation by separate files with shared or individual implementation for
>specific QAT generation.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations
  2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                               ` (8 preceding siblings ...)
  2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 9/9] crypto/qat: add gen specific implementation Kai Ji
@ 2021-10-27 15:50             ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 1/9] common/qat: add gen specific data and function Kai Ji
                                 ` (9 more replies)
  9 siblings, 10 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Kai Ji

This patchset introduces new qat driver structure and updates
existing symmetric crypto qat PMD.

The purpose of the change is to isolate QAT generation specific
implementations from one another.

It is expected that changes to one generation's driver code have
minimal impact on other generations' implementations. Also, adding
support for new features or new QAT generation hardware will have
zero impact on existing functionality.
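
For instance, enabling a hypothetical future generation reduces to adding
one enum value and one new file that registers its own ops, with no change
to common code. A sketch (QAT_GEN5 and the gen5 symbols are invented for
illustration; QAT_GEN5 would be added to enum qat_device_gen before
QAT_N_GENS):

RTE_INIT(qat_dev_gen_gen5_init)
{
	qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5;
	qat_gen_config[QAT_GEN5].dev_gen = QAT_GEN5;
}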

v7:
- rebased on the top of latest master
- review comments addressed

v6:
- updates on commit messages

v5:
- review comments addressed

v4:
- rebased on top of latest master.
- updated comments.
- removed naming convention patch.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Fan Zhang (9):
  common/qat: add gen specific data and function
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: add gen specific data and function
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: add gen specific data and function
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  254 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   65 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  299 ++++
 drivers/common/qat/qat_common.c               |   15 +
 drivers/common/qat/qat_common.h               |   19 +-
 drivers/common/qat/qat_device.c               |  205 ++-
 drivers/common/qat/qat_device.h               |   45 +-
 drivers/common/qat/qat_qp.c                   |  677 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  176 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  124 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  280 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   54 +-
 drivers/crypto/qat/qat_crypto.c               |  176 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   76 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 41 files changed, 3779 insertions(+), 2752 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 1/9] common/qat: add gen specific data and function
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation Kai Ji
                                 ` (8 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the data structure and function prototypes for
different QAT generations.
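
Common code is expected to dispatch through the qat_dev_hw_spec table added
here; a minimal sketch of such a call site (the wrapper below is
illustrative, not part of this patch):

static int
qat_dev_read_config(struct qat_pci_device *qat_dev)
{
	struct qat_dev_hw_spec_funcs *ops =
			qat_dev_hw_spec[qat_dev->qat_dev_gen];

	/* Generations that did not register, or registered without
	 * this op, are treated as unsupported.
	 */
	if (ops == NULL || ops->qat_dev_read_config == NULL)
		return -ENOTSUP;
	return ops->qat_dev_read_config(qat_dev);
}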

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };

 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };

+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };

-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"

+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32

+/**
+ * Function prototypes for GENx specific device operations.
+ **/
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource* (*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 1/9] common/qat: add gen specific data and function Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-28  9:32                 ` Power, Ciara
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 3/9] common/qat: add gen specific queue pair function Kai Ji
                                 ` (7 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT device configuration
implementation by separate files with shared or
individual implementation for specific QAT generation.
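
As an illustration of how callers are expected to use these ops, a sketch of
probing for the optional misc BAR (the wrapper name is ours; GEN1 to GEN3
share the stub below that returns -1, while GEN4 is expected to provide a
real BAR):

static struct rte_mem_resource *
qat_probe_misc_bar(struct qat_dev_hw_spec_funcs *ops,
		struct rte_pci_device *pci_dev)
{
	struct rte_mem_resource *mem = NULL;

	/* Returns < 0 when the generation exposes no misc BAR */
	if (ops->qat_dev_get_misc_bar(&mem, pci_dev) < 0)
		return NULL;
	return mem;
}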

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  64 ++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 389 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..9972280e06
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pairs reset not supported on base, continue
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations have no configuration to read,
+	 * but set this pointer anyway so that it can be
+	 * distinguished from one faultily left as NULL.
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..4ad0ffa728
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GENS_H_
+#define _QAT_DEV_GENS_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct
+rte_mem_resource *qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..437996f2e8 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
 
-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };
 
+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }
 
-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size, extra_size;
 
 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");
 
+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);
 
@@ -267,63 +229,63 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}
 
-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: no pci pointer for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);
 
-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}
 
-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;
 
 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);
 
-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
 	}
 
+	/* Allocation succeeded, attach the memzone holding
+	 * qat_dev to the list of devices
+	 */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;
 
@@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
+			);
+		return -ENODEV;
 	}
 
 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }
 
-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;
 
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };
 
 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);
 
-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"
 
 #define QAT_CQ_MAX_DEQ_RETRIES 10
 
@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;
 
-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
-- 
2.17.1
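
A note on the ring-to-service decoding above: qat_query_svc_gen4()
returns a 16-bit word in which each bundle owns a 3-bit service code,
so qat_dev_read_config_gen4() extracts bundle i's code with
(svc >> (3 * i)) & 0x7 and maps it through gen4_pick_service(). An
illustrative decode for one bundle (the word value is made up):

	uint16_t svc = 0x0209;	/* as returned by qat_query_svc_gen4() */
	int i = 1;		/* bundle index */
	uint8_t hw_service = (svc >> (3 * i)) & 0x7;	/* == 0x1 here */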


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 3/9] common/qat: add gen specific queue pair function
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 1/9] common/qat: add gen specific data and function Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 4/9] common/qat: add gen specific queue implementation Kai Ji
                                 ` (6 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the queue pair data structure and function
prototypes for different QAT generations.

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)
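
The intent is that the common code in qat_qp.c resolves these ops from
the qat_qp_hw_spec[] table instead of branching on the generation. A
minimal sketch of such a dispatch (the wrapper name is hypothetical;
patch 4/9 adds the real callers):

static int
qat_qp_csr_setup_dispatch(struct qat_pci_device *qat_dev,
		void *io_addr, struct qat_qp *qp)
{
	struct qat_qp_hw_spec_funcs *ops =
		qat_qp_hw_spec[qat_dev->qat_dev_gen];

	if (ops == NULL || ops->qat_qp_csr_setup == NULL)
		return -ENOTSUP;
	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
	return 0;
}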

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)

+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"

-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */

@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1

-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;

 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};

 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 4/9] common/qat: add gen specific queue implementation
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (2 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 3/9] common/qat: add gen specific queue pair function Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 5/9] compress/qat: add gen specific data and function Kai Ji
                                 ` (5 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT queue pair configuration
implementation with separate files containing shared or individual
implementations for each QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)
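
The gen1 and gen4 arbiter helpers below share the same
read-modify-write sequence on the ring-arbiter enable CSR; only the
per-generation offset arithmetic differs. The core pattern, shown
with the gen1 names (base_addr and arb_csr_offset are computed as in
the diff):

	rte_spinlock_lock(lock);
	value = ADF_CSR_RD(base_addr, arb_csr_offset);
	value |= (0x01 << txq->hw_queue_number);	/* enable sets the bit */
	/* disable instead clears it:
	 * value &= ~(0x01 << txq->hw_queue_number);
	 */
	ADF_CSR_WR(base_addr, arb_csr_offset, value);
	rte_spinlock_unlock(lock);

The spinlock keeps the read-modify-write atomic with respect to other
queue pairs sharing the same arbiter register.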

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index 9972280e06..38757e6e40 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

@@ -10,6 +11,194 @@

 #define ADF_ARB_REG_SLOT			0x1000

+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+				(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {

 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {

 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service  = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {

 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"

 #include <stdint.h>

+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };

-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }

+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }

+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }

-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {

 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 4ad0ffa728..7c92f1938c 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);

+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);

-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C

 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..cde421eb77 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128

-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];

-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);

-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
 {
-	struct qat_qp *qp;
+	struct qat_qp *qp = NULL;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;

 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}

-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT Internal Error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}

-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;

 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}

-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}

@@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);

+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;

 create_err:
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	return -EFAULT;
-}
-
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
+	if (qp) {
+		if (qp->op_cookie_pool)
+			rte_mempool_free(qp->op_cookie_pool);

-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
+		if (qp->op_cookies)
+			rte_free(qp->op_cookies);

-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
+		rte_free(qp);
 	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);

-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
+	return -EFAULT;
 }

 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}

 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }

-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }

 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;

-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }

-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }

-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }

-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }

-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }

-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }

 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * Pointer check should be done during
+	 * initialization
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }

+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * Pointer check should be done during
+	 * initialization
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;

-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}

+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }

 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@

 #define QAT_QP_MIN_INFL_THRESHOLD	256

-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;

 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,

 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);

 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {

 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index d4f087733f..5b8ee4bee6 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;

-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
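
Every wrapper above follows the same idiom: index the qat_qp_hw_spec[]
table by qat_dev_gen, check the function pointer with
RTE_FUNC_PTR_OR_ERR_RET, and call through it. The table itself is filled
by the generation-specific files introduced later in this series; a
minimal sketch of that registration is shown below (the *_gen1 names are
illustrative placeholders, not the series' actual symbols):

/* Illustrative sketch only: how a gen-specific source file could
 * publish its queue-pair ops into the dispatch table consumed by
 * qat_qp.c. qat_qp_rings_per_service_gen1() is a placeholder name.
 */
static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
	/* remaining ops are assigned the same way */
};

RTE_INIT(qat_qp_gen1_init)
{
	/* Entries left NULL make the RTE_FUNC_PTR_OR_ERR_RET guards
	 * return -ENOTSUP for that generation instead of crashing.
	 */
	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
}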

* [dpdk-dev] [dpdk-dev v7 5/9] compress/qat: add gen specific data and function
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (3 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 4/9] common/qat: add gen specific queue implementation Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 6/9] compress/qat: add gen specific implementation Kai Ji
                                 ` (4 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the compression data structures and function
prototypes for the different QAT generations.
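
As a usage illustration (not part of this patch), the new lower-CSR
struct can be filled with the per-field defaults defined in
icp_qat_hw_gen4_comp_defs.h and folded into the 32-bit slice
configuration word:

/* Usage sketch; assumes icp_qat_hw_gen4_comp.h is included. The
 * result is byte-swapped and ready to be written into
 * cd_pars.sl.comp_slice_cfg_word[0].
 */
static uint32_t build_default_comp_csr_lower(void)
{
	struct icp_qat_hw_comp_20_config_csr_lower lower = {
		.edmm = ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL,
		.algo = ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL,
		.sd = ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL,
		.hbs = ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL,
		.abd = ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL,
		.lllbd = ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL,
		.mmctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL,
		.hash_col = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL,
		.hash_update = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL,
		.skip_ctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL,
	};

	return ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(lower);
}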

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 299 ++++++++++++++++++
 drivers/common/qat/qat_common.h               |   4 +-
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 675 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..ad02d06b12
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,299 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..a7632e31f8 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -13,9 +13,9 @@
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)

 /* Intel(R) QuickAssist Technology device generation is enumerated
- * from one according to the generation of the device
+ * from one according to the generation of the device.
+ * QAT_GEN* is used as an index into the per-generation device tables.
  */
-
 enum qat_device_gen {
 	QAT_GEN1,
 	QAT_GEN2,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };

-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */

 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }

-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }

-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;

 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}

-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}

 	comp_req = &qat_xform->qat_comp_req_tmpl;

@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;

-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();

 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}

-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;

 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);

 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}

 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);

 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,

 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;

@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))

 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}

 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;

 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46

 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */

 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)

 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@

 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16

+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };

-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct
+qat_comp_capabilities_info qat_comp_get_capa_info(
+		enum qat_device_gen qat_dev_gen, struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };

-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }

-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)

 }

-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }

-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;

 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,


 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);

 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);

 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }

-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;

-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }

-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }

-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {

 }

-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }

-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }

-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);

+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;

-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;

 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;

-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}

+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>

 #include "qat_device.h"
+#include "qat_comp.h"

 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat

+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };

+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);

+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
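
The core of the refactor above is a per-generation ops table that each
gen-specific file fills in from a constructor, plus a NULL check in
qat_comp_dev_create() that falls back to the GEN1 capabilities when a
generation has registered nothing. A minimal standalone sketch of that
pattern follows (plain C, not driver code): names such as ops_table and
get_capa_info are invented for illustration, and
__attribute__((constructor)) stands in for DPDK's RTE_INIT macro.

#include <stdio.h>
#include <stddef.h>

enum gen { GEN1, GEN2, GEN3, GEN4, GEN_MAX };

struct capa_info {
	const char *data;	/* stands in for the capability array */
	size_t size;
};

struct gen_ops {
	struct capa_info (*get_capa)(void);
	unsigned int (*get_num_im_bufs)(void);
};

static struct gen_ops ops_table[GEN_MAX];	/* zero-initialised */

static struct capa_info gen1_capa(void)
{
	return (struct capa_info){ "deflate/gen1", 12 };
}

static unsigned int gen1_im_bufs(void) { return 12; }

/* RTE_INIT() expands to a constructor; plain GCC attribute here. */
__attribute__((constructor))
static void gen1_init(void)
{
	ops_table[GEN1].get_capa = gen1_capa;
	ops_table[GEN1].get_num_im_bufs = gen1_im_bufs;
}

static struct capa_info get_capa_info(enum gen g)
{
	if (ops_table[g].get_capa == NULL)
		return (struct capa_info){ NULL, 0 };
	return ops_table[g].get_capa();
}

int main(void)
{
	/* GEN3 registered nothing in this sketch, so fall back to
	 * GEN1, as qat_comp_dev_create() does when .data is NULL. */
	struct capa_info ci = get_capa_info(GEN3);

	if (ci.data == NULL)
		ci = get_capa_info(GEN1);
	printf("capa=%s size=%zu\n", ci.data, ci.size);
	return 0;
}

The point of constructor-based registration is that adding a new
generation touches only its own file; the common code never needs a
switch on the generation enum.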

* [dpdk-dev] [dpdk-dev v7 6/9] compress/qat: add gen specific implementation
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (4 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 5/9] compress/qat: add gen specific data and function Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure Kai Ji
                                 ` (3 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT compression support
implementation with separate files providing shared or
generation-specific implementations for each QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 176 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 482 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )

 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..e3e75c8289
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so "
+			"QAT device can't be used for Dynamic Deflate.");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* The only valid mode in CPM 1.6 */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..67293092ea
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GENS_H_
+#define _QAT_COMP_PMD_GENS_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GENS_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
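
For reference, the level-to-depth translation in
qat_comp_set_slice_cfg_word_gen1() above reduces to a small pure
mapping: level 1 -> depth 1, level 2 -> depth 4, level 3 (and the PMD
default) -> depth 8, levels 4-9 -> depth 16; anything else is rejected
with -EINVAL. GEN4 uses a coarser scheme (levels 1-5 -> search depth
level 1, 6-8 and the default -> level 6, 9-12 -> level 9). A minimal
sketch of the GEN1 mapping, with stand-in constants in place of the
ICP_QAT_HW_COMPRESSION_DEPTH_* values:

#include <stdio.h>

/* Stand-ins for the ICP_QAT_HW_COMPRESSION_DEPTH_* constants. */
#define DEPTH_1			1
#define DEPTH_4			4
#define DEPTH_8			8
#define DEPTH_16		16
#define LEVEL_PMD_DEFAULT	(-1)	/* RTE_COMP_LEVEL_PMD_DEFAULT stand-in */

/* Returns the hardware search depth, or -1 for unsupported levels
 * (the -EINVAL path in the patch). */
static int level_to_depth_gen1(int level)
{
	if (level == LEVEL_PMD_DEFAULT || level == 3)
		return DEPTH_8;
	if (level == 1)
		return DEPTH_1;
	if (level == 2)
		return DEPTH_4;
	if (level >= 4 && level <= 9)
		return DEPTH_16;
	return -1;
}

int main(void)
{
	int lvl;

	for (lvl = 1; lvl <= 10; lvl++)
		printf("level %2d -> depth %d\n", lvl,
				level_to_depth_gen1(lvl));
	return 0;
}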

* [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (5 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 6/9] compress/qat: add gen specific implementation Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-28  9:31                 ` Power, Ciara
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function Kai Ji
                                 ` (2 subsequent siblings)
  9 siblings, 1 reply; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 176 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 365 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif
 
 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"
 
+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index a7632e31f8..9411a79301 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);
 
+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {
 
 extern struct qat_device_info qat_pci_devs[];
 
-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;
 
 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */
 
 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */
 
 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */
 
 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0944d27a4d..042f39ddcc 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@
 
 #include "qat_logs.h"
 
+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };
 
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;
 
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);
 
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);
 
-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }
 
-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {
 
 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
 
-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
 
 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 
 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }
 
@@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 
 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@
 
 extern uint8_t qat_asym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);
 
 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);
 
-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..84c26a8062
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		if (service_type == QAT_SERVICE_ASYMMETRIC)
+			info->driver_id = qat_asym_driver_id;
+
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			info->driver_id = qat_sym_driver_id;
+		/* No limit of number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** helper macro to set cryptodev capability range **/
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** helper macro to set cryptodev capability value **/
+#define CAP_SET(n, v) .n = v
+
+/** Private data structure for a QAT device.
+ * There can be one of these per crypto service on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device crypto capabilities (sym or asym) */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 5b8ee4bee6..dec877cfab 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif
 
 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif
 
-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {
 
 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,
 
-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,
 
 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif
 
+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;
 
@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;
 
-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 
 	rte_cryptodev_pmd_probing_finish(cryptodev);
 
@@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);
 
 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif
 
 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@
 
 extern uint8_t qat_sym_driver_id;
 
-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);
 
+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,
 
 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
 
 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;
 
@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread
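
After this patch, both crypto services share one queue-pair setup path;
the only service-specific decisions left are the op cookie size and
which cookie-init helper runs per descriptor. A standalone sketch of
that shape follows, with invented names and sizes (the real code uses
qat_sym_op_cookie/qat_asym_op_cookie and the mempool-backed
qp->op_cookies array):

#include <stdio.h>
#include <stddef.h>

enum service_type { SYMMETRIC, ASYMMETRIC };

struct sym_cookie  { char sgl_src[64]; char sgl_dst[64]; };
struct asym_cookie { char in_params[128]; char out_params[128]; };

static const char *service_get_str(enum service_type t)
{
	return t == SYMMETRIC ? "sym" : "asym";
}

/* Stand-ins for qat_sym_init_op_cookie()/qat_asym_init_op_cookie(). */
static void sym_init_cookie(void *c)  { (void)c; /* fill IOVA fields */ }
static void asym_init_cookie(void *c) { (void)c; /* fill ptr arrays */ }

static int qp_setup(enum service_type t, unsigned int nb_desc)
{
	size_t cookie_size = (t == SYMMETRIC) ?
			sizeof(struct sym_cookie) :
			sizeof(struct asym_cookie);
	void (*init_cookie)(void *) = (t == SYMMETRIC) ?
			sym_init_cookie : asym_init_cookie;
	unsigned int i;

	printf("setup %s qp: %u cookies of %zu bytes each\n",
			service_get_str(t), nb_desc, cookie_size);

	/* One cookie per ring descriptor, as in the patch. */
	for (i = 0; i < nb_desc; i++) {
		char cookie[256];	/* stand-in for a mempool object */
		init_cookie(cookie);
	}
	return 0;
}

int main(void)
{
	qp_setup(SYMMETRIC, 4);
	qp_setup(ASYMMETRIC, 4);
	return 0;
}

Keeping the branch confined to these two points is what lets
qat_sym_pmd.c and qat_asym_pmd.c drop their duplicated qp
setup/release/stats/info code in this patch.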

* [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (6 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-10-28  8:33                 ` Power, Ciara
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 9/9] crypto/qat: add gen specific implementation Kai Ji
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 1 reply; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for the different QAT
generations.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
---
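In outline, the dispatch scheme this patch introduces works as sketched
below. This is a condensed sketch, not the full implementation: it is
stitched together from the hunks in this patch plus the gen1
registration pattern shown in the next patch, with error paths and the
optional security-context hook omitted.

/* One ops slot per hardware generation, per crypto service. */
struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];

/* Each generation-specific file fills in its slot at load time,
 * mirroring the asym gen1 registration in the following patch: */
RTE_INIT(qat_sym_crypto_gen1_init)
{
	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
			&qat_sym_crypto_ops_gen1;
	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
			qat_sym_crypto_cap_get_gen1;
	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
			qat_sym_crypto_feature_flags_get_gen1;
}

/* Device creation dispatches on the probed generation and treats
 * an empty slot as "service not supported on this device": */
const struct qat_crypto_gen_dev_ops *gen_dev_ops =
		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
if (gen_dev_ops->cryptodev_ops == NULL)
	return -EFAULT;
cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);

This also removes the need for the hard-coded per-generation error
checks in qat_asym_dev_create(): a generation that does not provide
asym ops simply leaves its cryptodev_ops pointer NULL.
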
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   26 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   64 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   57 +-
 9 files changed, 167 insertions(+), 1525 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index b3b2d17258..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-	)
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 042f39ddcc..addee384e3 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"
 
 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,23 +95,26 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;
 
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		qat_pci_dev->qat_asym_driver_id =
 				qat_asym_driver_id;
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;
 
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}
 
-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;
 
 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_
 
 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"
 
 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym
 
 
+/**
+ * Helper macro to add an asym capability:
+ * <n: name> <o: op types> <l, r, i: modlen min, max, increment>
+ **/
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);
 
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };
 
+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index dec877cfab..b835245f17 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@
 
 uint8_t qat_sym_driver_id;
 
-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];
 
 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;
 
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);
 
+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 
 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;
 
 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;
 
-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support enabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);
 
-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);
 
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
 
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzon for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}
 
 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..0dc0c6f0d9 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif
 
-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"
 
@@ -24,8 +23,64 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)
 
+/**
+ * Helper macros to add sym capabilities:
+ * <n: name> <b: block size> <k: key size> <d: digest size>
+ * <a: aad size> <i: iv size>
+ * Each argument after the name supplies a field initializer.
+ **/
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;
 
+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
-- 
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v7 9/9] crypto/qat: add gen specific implementation
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (7 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function Kai Ji
@ 2021-10-27 15:50               ` Kai Ji
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-10-27 15:50 UTC (permalink / raw)
  To: dev; +Cc: Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT symmetric and asymmetric
crypto implementation with separate files, containing shared
or generation-specific implementations for each QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
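The per-generation capability tables below are built from a few small
helper macros defined in the new qat_crypto_pmd_gens.h (listed in the
diffstat). A minimal sketch consistent with how they are used below --
inferred from that usage, so check the header for the authoritative
definitions:

/* Inferred helpers -- each emits one designated initializer for a
 * field of the rte_cryptodev capability structure. */
#define CAP_SET(n, v)		.n = v
#define CAP_RNG(n, l, r, i)	.n = {.min = l, .max = r, .increment = i}
#define CAP_RNG_ZERO(n)		.n = {.min = 0, .max = 0, .increment = 0}

With these, an entry such as

	QAT_SYM_AUTH_CAP(SHA256,
		CAP_SET(block_size, 64),
		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size))

expands to the same rte_cryptodev_capabilities element that the deleted
qat_sym_capabilities.h spelled out over ~25 lines, which is how the
series shrinks roughly 1250 lines of capability definitions down to the
short tables in the generation-specific files.
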
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h              |   3 -
 8 files changed, 913 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif

 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..9ed1f21d9d
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,282 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_

 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif

 #include "qat_device.h"

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function Kai Ji
@ 2021-10-28  8:33                 ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-28  8:33 UTC (permalink / raw)
  To: Ji, Kai, dev; +Cc: Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Kai Ji
>Sent: Wednesday 27 October 2021 16:51
>To: dev@dpdk.org
>Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal, ArkadiuszX
><arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and
>function
>
>From: Fan Zhang <roy.fan.zhang@intel.com>
>
>This patch adds the symmetric and asymmetric crypto data
>structure and function prototypes for different QAT
>generations.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure Kai Ji
@ 2021-10-28  9:31                 ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-28  9:31 UTC (permalink / raw)
  To: Ji, Kai, dev; +Cc: Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Kai Ji
>Sent: Wednesday 27 October 2021 16:51
>To: dev@dpdk.org
>Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal, ArkadiuszX
><arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data
>structure
>
>From: Fan Zhang <roy.fan.zhang@intel.com>
>
>This patch unifies the QAT symmetric and asymmetric device private data
>structures and functions.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>
>---

Acked-by: Ciara Power <ciara.power@intel.com>

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation Kai Ji
@ 2021-10-28  9:32                 ` Power, Ciara
  0 siblings, 0 replies; 96+ messages in thread
From: Power, Ciara @ 2021-10-28  9:32 UTC (permalink / raw)
  To: Ji, Kai, dev; +Cc: Zhang, Roy Fan, Kusztal, ArkadiuszX, Ji, Kai

>-----Original Message-----
>From: dev <dev-bounces@dpdk.org> On Behalf Of Kai Ji
>Sent: Wednesday 27 October 2021 16:51
>To: dev@dpdk.org
>Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Kusztal, ArkadiuszX
><arkadiuszx.kusztal@intel.com>; Ji, Kai <kai.ji@intel.com>
>Subject: [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device
>implementation
>
>From: Fan Zhang <roy.fan.zhang@intel.com>
>
>This patch replaces the mixed QAT device configuration implementation by
>separate files with shared or individual implementation for specific QAT
>generation.
>
>Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
>Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
>Signed-off-by: Kai Ji <kai.ji@intel.com>

Acked-by: Ciara Power <ciara.power@intel.com>


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations
  2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                 ` (8 preceding siblings ...)
  2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 9/9] crypto/qat: add gen specific implementation Kai Ji
@ 2021-11-04 10:34               ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 1/9] common/qat: define gen specific structs and functions Kai Ji
                                   ` (9 more replies)
  9 siblings, 10 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Kai Ji

This patchset introduces a new qat driver structure and updates the
existing symmetric crypto qat PMD.

The purpose of the change is to isolate QAT generation-specific
implementations from one another.

It is expected that changes to one generation's driver code will have
minimal impact on other generations' implementations. Likewise, adding
support for new features or new qat generation hardware will have zero
impact on existing functionality.
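
For illustration, the dispatch pattern this series converges on can be
modelled in a short standalone C sketch. This is an editor's sketch
only: gen_ops and gen_ops_tbl are invented names, whereas the real
driver uses struct qat_dev_hw_spec_funcs, the qat_dev_hw_spec[] array
and RTE_INIT() constructors (see patches 1/9 and 2/9 below).

#include <stdio.h>

enum qat_gen { GEN1, GEN2, GEN3, GEN4, N_GENS };

struct gen_ops {
	int (*read_config)(void);
	int (*get_extra_size)(void);
};

/* One slot per hardware generation, filled by that generation's file */
static const struct gen_ops *gen_ops_tbl[N_GENS];

/* "gen1 file": base implementation, reused by gen2 and gen3 */
static int read_config_gen1(void) { return 0; }
static int get_extra_size_gen1(void) { return 0; }
static const struct gen_ops ops_gen1 = {
	read_config_gen1, get_extra_size_gen1
};

/* "gen4 file": its own extra size; no other file changes */
static int get_extra_size_gen4(void) { return 64; }
static const struct gen_ops ops_gen4 = {
	read_config_gen1, get_extra_size_gen4
};

int main(void)
{
	/* In the driver these assignments live in RTE_INIT() ctors */
	gen_ops_tbl[GEN1] = &ops_gen1;
	gen_ops_tbl[GEN2] = &ops_gen1;	/* gen2 shares gen1 ops */
	gen_ops_tbl[GEN3] = &ops_gen1;
	gen_ops_tbl[GEN4] = &ops_gen4;

	/* Common code dispatches with no per-generation branching */
	const struct gen_ops *ops = gen_ops_tbl[GEN4];
	if (ops != NULL && ops->get_extra_size != NULL)
		printf("gen4 extra size: %d\n", ops->get_extra_size());
	return 0;
}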

v8:
- git commit message update

v7:
- rebased on top of the latest master
- review comments addressed

v6:
- updates on commit messages

v5:
- review comments addressed

v4:
- rebased on top of latest master.
- updated comments.
- removed naming convention patch.

v3:
- removed release note update.
- updated with more unified naming conventions.

v2:
- unified asym and sym data structures for qat.
- more refined per gen code split.

Fan Zhang (9):
  common/qat: define gen specific structs and functions
  common/qat: add gen specific device implementation
  common/qat: add gen specific queue pair function
  common/qat: add gen specific queue implementation
  compress/qat: define gen specific structs and functions
  compress/qat: add gen specific implementation
  crypto/qat: unified device private data structure
  crypto/qat: define gen specific structs and functions
  crypto/qat: add gen specific implementation

 drivers/common/qat/dev/qat_dev_gen1.c         |  254 ++++
 drivers/common/qat/dev/qat_dev_gen2.c         |   37 +
 drivers/common/qat/dev/qat_dev_gen3.c         |   83 ++
 drivers/common/qat/dev/qat_dev_gen4.c         |  305 ++++
 drivers/common/qat/dev/qat_dev_gens.h         |   65 +
 drivers/common/qat/meson.build                |   15 +-
 .../qat/qat_adf/adf_transport_access_macros.h |    2 +
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h |  195 +++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   |  299 ++++
 drivers/common/qat/qat_common.c               |   15 +
 drivers/common/qat/qat_common.h               |   19 +-
 drivers/common/qat/qat_device.c               |  205 ++-
 drivers/common/qat/qat_device.h               |   45 +-
 drivers/common/qat/qat_qp.c                   |  677 ++++-----
 drivers/common/qat/qat_qp.h                   |  121 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c  |  176 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c  |   30 +
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c  |  213 +++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h  |   30 +
 drivers/compress/qat/qat_comp.c               |  101 +-
 drivers/compress/qat/qat_comp.h               |    8 +-
 drivers/compress/qat/qat_comp_pmd.c           |  159 +--
 drivers/compress/qat/qat_comp_pmd.h           |   76 +
 drivers/crypto/qat/README                     |    7 -
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c    |   76 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c  |  224 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c  |  164 +++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c  |  124 ++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h  |   36 +
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c     |  283 ++++
 drivers/crypto/qat/meson.build                |   32 -
 drivers/crypto/qat/qat_asym_capabilities.h    |   63 -
 drivers/crypto/qat/qat_asym_pmd.c             |  280 +---
 drivers/crypto/qat/qat_asym_pmd.h             |   54 +-
 drivers/crypto/qat/qat_crypto.c               |  176 +++
 drivers/crypto/qat/qat_crypto.h               |   91 ++
 drivers/crypto/qat/qat_sym_capabilities.h     | 1248 -----------------
 drivers/crypto/qat/qat_sym_pmd.c              |  428 +-----
 drivers/crypto/qat/qat_sym_pmd.h              |   76 +-
 drivers/crypto/qat/qat_sym_session.c          |   15 +-
 41 files changed, 3779 insertions(+), 2758 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
 delete mode 100644 drivers/crypto/qat/README
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 1/9] common/qat: define gen specific structs and functions
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 2/9] common/qat: add gen specific device implementation Kai Ji
                                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the data structure and function prototypes for
different QAT generations.
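
As a hint of how these prototypes are meant to be consumed, common
code dispatches through the per-generation table roughly as below.
This fragment assumes the DPDK environment; a helper of exactly this
shape is added to qat_device.c in patch 2/9 of this series.

static int
qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
{
	struct qat_dev_hw_spec_funcs *ops_hw =
		qat_dev_hw_spec[qat_dev_gen];

	/* Return -ENOTSUP when a generation did not register the op */
	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
		-ENOTSUP);
	return ops_hw->qat_dev_get_extra_size();
}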

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_common.h | 14 ++++++++------
 drivers/common/qat/qat_device.c |  4 ++++
 drivers/common/qat/qat_device.h | 23 +++++++++++++++++++++++
 3 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 23715085f4..1889ec4e88 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -15,20 +15,24 @@
 /* Intel(R) QuickAssist Technology device generation is enumerated
  * from one according to the generation of the device
  */
+
 enum qat_device_gen {
-	QAT_GEN1 = 1,
+	QAT_GEN1,
 	QAT_GEN2,
 	QAT_GEN3,
-	QAT_GEN4
+	QAT_GEN4,
+	QAT_N_GENS
 };

 enum qat_service_type {
-	QAT_SERVICE_ASYMMETRIC = 0,
+	QAT_SERVICE_ASYMMETRIC,
 	QAT_SERVICE_SYMMETRIC,
 	QAT_SERVICE_COMPRESSION,
-	QAT_SERVICE_INVALID
+	QAT_MAX_SERVICES
 };

+#define QAT_SERVICE_INVALID	(QAT_MAX_SERVICES)
+
 enum qat_svc_list {
 	QAT_SVC_UNUSED = 0,
 	QAT_SVC_CRYPTO = 1,
@@ -37,8 +41,6 @@ enum qat_svc_list {
 	QAT_SVC_ASYM = 4,
 };

-#define QAT_MAX_SERVICES		(QAT_SERVICE_INVALID)
-
 /**< Common struct for scatter-gather list operations */
 struct qat_flat_buf {
 	uint32_t len;
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 1b967cbcf7..e6b43c541f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -13,6 +13,10 @@
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"

+/* Hardware device information per generation */
+struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
+struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];
+
 /* pv2vf data Gen 4*/
 struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 228c057d1e..b8b5c387a3 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,6 +21,29 @@
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
 #define MAX_QP_THRESHOLD_SIZE	32

+/**
+ * Function prototypes for GENx specific device operations.
+ */
+typedef int (*qat_dev_reset_ring_pairs_t)
+		(struct qat_pci_device *);
+typedef const struct rte_mem_resource *(*qat_dev_get_transport_bar_t)
+		(struct rte_pci_device *);
+typedef int (*qat_dev_get_misc_bar_t)
+		(struct rte_mem_resource **, struct rte_pci_device *);
+typedef int (*qat_dev_read_config_t)
+		(struct qat_pci_device *);
+typedef int (*qat_dev_get_extra_size_t)(void);
+
+struct qat_dev_hw_spec_funcs {
+	qat_dev_reset_ring_pairs_t	qat_dev_reset_ring_pairs;
+	qat_dev_get_transport_bar_t	qat_dev_get_transport_bar;
+	qat_dev_get_misc_bar_t		qat_dev_get_misc_bar;
+	qat_dev_read_config_t		qat_dev_read_config;
+	qat_dev_get_extra_size_t	qat_dev_get_extra_size;
+};
+
+extern struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[];
+
 struct qat_dev_cmd_param {
 	const char *name;
 	uint16_t val;
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 2/9] common/qat: add gen specific device implementation
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 1/9] common/qat: define gen specific structs and functions Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 3/9] common/qat: add gen specific queue pair function Kai Ji
                                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT device configuration
implementation with separate files containing either shared or
generation-specific implementations for each QAT generation.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c |  64 ++++++++
 drivers/common/qat/dev/qat_dev_gen2.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen3.c |  23 +++
 drivers/common/qat/dev/qat_dev_gen4.c | 152 +++++++++++++++++++
 drivers/common/qat/dev/qat_dev_gens.h |  34 +++++
 drivers/common/qat/meson.build        |   4 +
 drivers/common/qat/qat_device.c       | 205 +++++++++++---------------
 drivers/common/qat/qat_device.h       |   5 +-
 drivers/common/qat/qat_qp.c           |   3 +-
 9 files changed, 389 insertions(+), 124 deletions(-)
 create mode 100644 drivers/common/qat/dev/qat_dev_gen1.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen2.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen3.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gen4.c
 create mode 100644 drivers/common/qat/dev/qat_dev_gens.h

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
new file mode 100644
index 0000000000..9972280e06
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+#define ADF_ARB_REG_SLOT			0x1000
+
+int
+qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
+{
+	/*
+	 * Ring pair reset is not supported on base generations; nothing to do.
+	 */
+	return 0;
+}
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource __rte_unused,
+		struct rte_pci_device *pci_dev __rte_unused)
+{
+	return -1;
+}
+
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	/*
+	 * Base generations do not have a configuration to read, but
+	 * this op is implemented anyway so that a NULL pointer on
+	 * higher generations can be treated as a fault.
+	 */
+	return 0;
+}
+
+int
+qat_dev_get_extra_size_gen1(void)
+{
+	return 0;
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen1_init)
+{
+	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
+	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
new file mode 100644
index 0000000000..d3470ed6b8
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen2_init)
+{
+	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
+	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
new file mode 100644
index 0000000000..e4a66869d2
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "adf_transport_access_macros.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen1,
+	.qat_dev_read_config = qat_dev_read_config_gen1,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen1,
+};
+
+RTE_INIT(qat_dev_gen_gen3_init)
+{
+	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
+	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
new file mode 100644
index 0000000000..5e5423ebfa
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -0,0 +1,152 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_transport_access_macros_gen4vf.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+struct qat_dev_gen4_extra {
+	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
+		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
+};
+
+static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
+	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
+{
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
+	pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
+	pf2vf_msg.msg_data = 2;
+	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
+}
+
+static enum qat_service_type
+gen4_pick_service(uint8_t hw_service)
+{
+	switch (hw_service) {
+	case QAT_SVC_SYM:
+		return QAT_SERVICE_SYMMETRIC;
+	case QAT_SVC_COMPRESSION:
+		return QAT_SERVICE_COMPRESSION;
+	case QAT_SVC_ASYM:
+		return QAT_SERVICE_ASYMMETRIC;
+	default:
+		return QAT_SERVICE_INVALID;
+	}
+}
+
+static int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
+{
+	int i = 0;
+	uint16_t svc = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	struct qat_qp_hw_data *hw_data;
+	enum qat_service_type service_type;
+	uint8_t hw_service;
+
+	if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
+		return -EFAULT;
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		hw_service = (svc >> (3 * i)) & 0x7;
+		service_type = gen4_pick_service(hw_service);
+		if (service_type == QAT_SERVICE_INVALID) {
+			QAT_LOG(ERR,
+				"Unrecognized service on bundle %d",
+				i);
+			return -ENOTSUP;
+		}
+		hw_data = &dev_extra->qp_gen4_data[i][0];
+		memset(hw_data, 0, sizeof(*hw_data));
+		hw_data->service_type = service_type;
+		if (service_type == QAT_SERVICE_ASYMMETRIC) {
+			hw_data->tx_msg_size = 64;
+			hw_data->rx_msg_size = 32;
+		} else if (service_type == QAT_SERVICE_SYMMETRIC ||
+				service_type ==
+					QAT_SERVICE_COMPRESSION) {
+			hw_data->tx_msg_size = 128;
+			hw_data->rx_msg_size = 32;
+		}
+		hw_data->tx_ring_num = 0;
+		hw_data->rx_ring_num = 1;
+		hw_data->hw_bundle_num = i;
+	}
+	return 0;
+}
+
+static int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
+{
+	int ret = 0, i;
+	uint8_t data[4];
+	struct qat_pf2vf_msg pf2vf_msg;
+
+	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
+	pf2vf_msg.block_hdr = -1;
+	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		pf2vf_msg.msg_data = i;
+		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
+		if (ret) {
+			QAT_LOG(ERR, "QAT error when reset bundle no %d",
+				i);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+{
+	return &pci_dev->mem_resource[0];
+}
+
+static int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev)
+{
+	*mem_resource = &pci_dev->mem_resource[2];
+	return 0;
+}
+
+static int
+qat_dev_get_extra_size_gen4(void)
+{
+	return sizeof(struct qat_dev_gen4_extra);
+}
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {
+	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+	.qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+	.qat_dev_read_config = qat_dev_read_config_gen4,
+	.qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+};
+
+RTE_INIT(qat_dev_gen_4_init)
+{
+	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
+	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
+	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
new file mode 100644
index 0000000000..4ad0ffa728
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_DEV_GENS_H_
+#define _QAT_DEV_GENS_H_
+
+#include "qat_device.h"
+#include "qat_qp.h"
+
+#include <stdint.h>
+
+extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE];
+
+int
+qat_dev_get_extra_size_gen1(void);
+
+int
+qat_reset_ring_pairs_gen1(
+		struct qat_pci_device *qat_pci_dev);
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen1(
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
+		struct rte_pci_device *pci_dev);
+int
+qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+
+int
+qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
+
+#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 053c219fed..532e0fabb3 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -50,6 +50,10 @@ sources += files(
         'qat_device.c',
         'qat_logs.c',
         'qat_pf2vf.c',
+        'dev/qat_dev_gen1.c',
+        'dev/qat_dev_gen2.c',
+        'dev/qat_dev_gen3.c',
+        'dev/qat_dev_gen4.c'
 )
 includes += include_directories(
         'qat_adf',
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index e6b43c541f..437996f2e8 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -17,43 +17,6 @@
 struct qat_gen_hw_data qat_gen_config[QAT_N_GENS];
 struct qat_dev_hw_spec_funcs *qat_dev_hw_spec[QAT_N_GENS];

-/* pv2vf data Gen 4*/
-struct qat_pf2vf_dev qat_pf2vf_gen4 = {
-	.pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
-	.vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
-	.pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
-	.pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
-	.pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
-	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
-};
-
-/* Hardware device information per generation */
-__extension__
-struct qat_gen_hw_data qat_gen_config[] =  {
-	[QAT_GEN1] = {
-		.dev_gen = QAT_GEN1,
-		.qp_hw_data = qat_gen1_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN1
-	},
-	[QAT_GEN2] = {
-		.dev_gen = QAT_GEN2,
-		.qp_hw_data = qat_gen1_qps,
-		/* gen2 has same ring layout as gen1 */
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN2
-	},
-	[QAT_GEN3] = {
-		.dev_gen = QAT_GEN3,
-		.qp_hw_data = qat_gen3_qps,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3
-	},
-	[QAT_GEN4] = {
-		.dev_gen = QAT_GEN4,
-		.qp_hw_data = NULL,
-		.comp_num_im_bufs_required = QAT_NUM_INTERM_BUFS_GEN3,
-		.pf2vf_dev = &qat_pf2vf_gen4
-	},
-};
-
 /* per-process array of device data */
 struct qat_device_info qat_pci_devs[RTE_PMD_QAT_MAX_PCI_DEVICES];
 static int qat_nb_pci_devices;
@@ -87,6 +50,16 @@ static const struct rte_pci_id pci_id_qat_map[] = {
 		{.device_id = 0},
 };

+static int
+qat_pci_get_extra_size(enum qat_device_gen qat_dev_gen)
+{
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_get_extra_size,
+		-ENOTSUP);
+	return ops_hw->qat_dev_get_extra_size();
+}
+
 static struct qat_pci_device *
 qat_pci_get_named_dev(const char *name)
 {
@@ -130,45 +103,8 @@ qat_get_qat_dev_from_pci_dev(struct rte_pci_device *pci_dev)
 	return qat_pci_get_named_dev(name);
 }

-static int
-qat_gen4_reset_ring_pair(struct qat_pci_device *qat_pci_dev)
-{
-	int ret = 0, i;
-	uint8_t data[4];
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_RP_RESET;
-	pf2vf_msg.block_hdr = -1;
-	for (i = 0; i < QAT_GEN4_BUNDLE_NUM; i++) {
-		pf2vf_msg.msg_data = i;
-		ret = qat_pf2vf_exch_msg(qat_pci_dev, pf2vf_msg, 1, data);
-		if (ret) {
-			QAT_LOG(ERR, "QAT error when reset bundle no %d",
-				i);
-			return ret;
-		}
-	}
-
-	return 0;
-}
-
-int qat_query_svc(struct qat_pci_device *qat_dev, uint8_t *val)
-{
-	int ret = -(EINVAL);
-	struct qat_pf2vf_msg pf2vf_msg;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		pf2vf_msg.msg_type = ADF_VF2PF_MSGTYPE_GET_SMALL_BLOCK_REQ;
-		pf2vf_msg.block_hdr = ADF_VF2PF_BLOCK_MSG_GET_RING_TO_SVC_REQ;
-		pf2vf_msg.msg_data = 2;
-		ret = qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
-	}
-
-	return ret;
-}
-
-
-static void qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
+static void
+qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		*qat_dev_cmd_param)
 {
 	int i = 0;
@@ -230,13 +166,39 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
 {
 	struct qat_pci_device *qat_dev;
+	enum qat_device_gen qat_dev_gen;
 	uint8_t qat_dev_id = 0;
 	char name[QAT_DEV_NAME_MAX_LEN];
 	struct rte_devargs *devargs = pci_dev->device.devargs;
+	struct qat_dev_hw_spec_funcs *ops_hw;
+	struct rte_mem_resource *mem_resource;
+	const struct rte_memzone *qat_dev_mz;
+	int qat_dev_size, extra_size;

 	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
 	snprintf(name+strlen(name), QAT_DEV_NAME_MAX_LEN-strlen(name), "_qat");

+	switch (pci_dev->id.device_id) {
+	case 0x0443:
+		qat_dev_gen = QAT_GEN1;
+		break;
+	case 0x37c9:
+	case 0x19e3:
+	case 0x6f55:
+	case 0x18ef:
+		qat_dev_gen = QAT_GEN2;
+		break;
+	case 0x18a1:
+		qat_dev_gen = QAT_GEN3;
+		break;
+	case 0x4941:
+		qat_dev_gen = QAT_GEN4;
+		break;
+	default:
+		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
+		return NULL;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		const struct rte_memzone *mz = rte_memzone_lookup(name);

@@ -267,63 +229,63 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
 		return NULL;
 	}

-	qat_pci_devs[qat_dev_id].mz = rte_memzone_reserve(name,
-		sizeof(struct qat_pci_device),
+	extra_size = qat_pci_get_extra_size(qat_dev_gen);
+	if (extra_size < 0) {
+		QAT_LOG(ERR, "QAT internal error: no extra size for gen %d",
+			qat_dev_gen);
+		return NULL;
+	}
+
+	qat_dev_size = sizeof(struct qat_pci_device) + extra_size;
+	qat_dev_mz = rte_memzone_reserve(name, qat_dev_size,
 		rte_socket_id(), 0);

-	if (qat_pci_devs[qat_dev_id].mz == NULL) {
+	if (qat_dev_mz == NULL) {
 		QAT_LOG(ERR, "Error when allocating memzone for QAT_%d",
 			qat_dev_id);
 		return NULL;
 	}

-	qat_dev = qat_pci_devs[qat_dev_id].mz->addr;
-	memset(qat_dev, 0, sizeof(*qat_dev));
+	qat_dev = qat_dev_mz->addr;
+	memset(qat_dev, 0, qat_dev_size);
+	qat_dev->dev_private = qat_dev + 1;
 	strlcpy(qat_dev->name, name, QAT_DEV_NAME_MAX_LEN);
 	qat_dev->qat_dev_id = qat_dev_id;
 	qat_pci_devs[qat_dev_id].pci_dev = pci_dev;
-	switch (pci_dev->id.device_id) {
-	case 0x0443:
-		qat_dev->qat_dev_gen = QAT_GEN1;
-		break;
-	case 0x37c9:
-	case 0x19e3:
-	case 0x6f55:
-	case 0x18ef:
-		qat_dev->qat_dev_gen = QAT_GEN2;
-		break;
-	case 0x18a1:
-		qat_dev->qat_dev_gen = QAT_GEN3;
-		break;
-	case 0x4941:
-		qat_dev->qat_dev_gen = QAT_GEN4;
-		break;
-	default:
-		QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
-		rte_memzone_free(qat_pci_devs[qat_dev->qat_dev_id].mz);
+	qat_dev->qat_dev_gen = qat_dev_gen;
+
+	ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	if (ops_hw->qat_dev_get_misc_bar == NULL) {
+		QAT_LOG(ERR, "qat_dev_get_misc_bar function pointer not set");
+		rte_memzone_free(qat_dev_mz);
 		return NULL;
 	}
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		qat_dev->misc_bar_io_addr = pci_dev->mem_resource[2].addr;
-		if (qat_dev->misc_bar_io_addr == NULL) {
+	if (ops_hw->qat_dev_get_misc_bar(&mem_resource, pci_dev) == 0) {
+		if (mem_resource->addr == NULL) {
 			QAT_LOG(ERR, "QAT cannot get access to VF misc bar");
+			rte_memzone_free(qat_dev_mz);
 			return NULL;
 		}
-	}
+		qat_dev->misc_bar_io_addr = mem_resource->addr;
+	} else
+		qat_dev->misc_bar_io_addr = NULL;

 	if (devargs && devargs->drv_str)
 		qat_dev_parse_cmd(devargs->drv_str, qat_dev_cmd_param);

-	if (qat_dev->qat_dev_gen >= QAT_GEN4) {
-		if (qat_read_qp_config(qat_dev)) {
-			QAT_LOG(ERR,
-				"Cannot acquire ring configuration for QAT_%d",
-				qat_dev_id);
-			return NULL;
-		}
+	if (qat_read_qp_config(qat_dev)) {
+		QAT_LOG(ERR,
+			"Cannot acquire ring configuration for QAT_%d",
+			qat_dev_id);
+		rte_memzone_free(qat_dev_mz);
+		return NULL;
 	}

+	/* No errors during allocation; attach the memzone holding
+	 * qat_dev to the list of devices.
+	 */
+	qat_pci_devs[qat_dev_id].mz = qat_dev_mz;
+
 	rte_spinlock_init(&qat_dev->arb_csr_lock);
 	qat_nb_pci_devices++;

@@ -396,6 +358,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	int sym_ret = 0, asym_ret = 0, comp_ret = 0;
 	int num_pmds_created = 0;
 	struct qat_pci_device *qat_pci_dev;
+	struct qat_dev_hw_spec_funcs *ops_hw;
 	struct qat_dev_cmd_param qat_dev_cmd_param[] = {
 			{ SYM_ENQ_THRESHOLD_NAME, 0 },
 			{ ASYM_ENQ_THRESHOLD_NAME, 0 },
@@ -412,13 +375,14 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	if (qat_pci_dev == NULL)
 		return -ENODEV;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		if (qat_gen4_reset_ring_pair(qat_pci_dev)) {
-			QAT_LOG(ERR,
-				"Cannot reset ring pairs, does pf driver supports pf2vf comms?"
-				);
-			return -ENODEV;
-		}
+	ops_hw = qat_dev_hw_spec[qat_pci_dev->qat_dev_gen];
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_reset_ring_pairs,
+		-ENOTSUP);
+	if (ops_hw->qat_dev_reset_ring_pairs(qat_pci_dev)) {
+		QAT_LOG(ERR,
+			"Cannot reset ring pairs, does the PF driver support pf2vf comms?"
+			);
+		return -ENODEV;
 	}

 	sym_ret = qat_sym_dev_create(qat_pci_dev, qat_dev_cmd_param);
@@ -453,7 +417,8 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 	return 0;
 }

-static int qat_pci_remove(struct rte_pci_device *pci_dev)
+static int
+qat_pci_remove(struct rte_pci_device *pci_dev)
 {
 	struct qat_pci_device *qat_pci_dev;

diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index b8b5c387a3..8b69206df5 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -133,6 +133,8 @@ struct qat_pci_device {
 	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
+	void *dev_private;
+	/**< Per generation specific information */
 };

 struct qat_gen_hw_data {
@@ -182,7 +184,4 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev __rte_unused,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev __rte_unused);

-int
-qat_query_svc(struct qat_pci_device *qat_pci_dev, uint8_t *ret);
-
 #endif /* _QAT_DEVICE_H_ */
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 026ea5ee01..b8c6000e86 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -20,6 +20,7 @@
 #include "qat_comp.h"
 #include "adf_transport_access_macros.h"
 #include "adf_transport_access_macros_gen4vf.h"
+#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

@@ -512,7 +513,7 @@ qat_read_qp_config(struct qat_pci_device *qat_dev)
 	if (qat_dev_gen == QAT_GEN4) {
 		uint16_t svc = 0;

-		if (qat_query_svc(qat_dev, (uint8_t *)&svc))
+		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
 			return -(EFAULT);
 		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
 			struct qat_qp_hw_data *hw_data =
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 3/9] common/qat: add gen specific queue pair function
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 1/9] common/qat: define gen specific structs and functions Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 2/9] common/qat: add gen specific device implementation Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 4/9] common/qat: add gen specific queue implementation Kai Ji
                                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the queue pair data structure and function
prototypes for different QAT generations.
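
As a rough illustration of the pattern (a minimal, standalone C sketch;
every name below is hypothetical, not one of the driver's actual
symbols), each generation fills its slot in a global ops table from a
constructor, which is the role RTE_INIT() plays in the driver, and the
common code dispatches through that table instead of branching on the
generation:

#include <stdio.h>
#include <stddef.h>

enum dev_gen { GEN1, GEN2, N_GENS };

struct qp_ops {
	void (*csr_write_tail)(unsigned int tail);
};

/* Per-generation dispatch table, filled in before main() runs. */
static struct qp_ops *qp_ops_tbl[N_GENS];

static void csr_write_tail_gen1(unsigned int tail)
{
	printf("gen1: tail CSR <- %u\n", tail);
}

static struct qp_ops qp_ops_gen1 = {
	.csr_write_tail = csr_write_tail_gen1,
};

/* Stands in for RTE_INIT(): runs as a constructor at load time. */
__attribute__((constructor))
static void qp_gen1_init(void)
{
	qp_ops_tbl[GEN1] = &qp_ops_gen1;
}

/*
 * Generation-agnostic common code; the NULL test mirrors the
 * RTE_FUNC_PTR_OR_ERR_RET() checks used by the driver.
 */
static int qp_write_tail(enum dev_gen gen, unsigned int tail)
{
	if (qp_ops_tbl[gen] == NULL ||
			qp_ops_tbl[gen]->csr_write_tail == NULL)
		return -1;
	qp_ops_tbl[gen]->csr_write_tail(tail);
	return 0;
}

int main(void)
{
	return qp_write_tail(GEN1, 32);
}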

Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/qat_qp.c |   3 ++
 drivers/common/qat/qat_qp.h | 103 ++++++++++++++++++++++++------------
 2 files changed, 71 insertions(+), 35 deletions(-)

diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index b8c6000e86..27994036b8 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -34,6 +34,9 @@
 	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
 	(ADF_ARB_REG_SLOT * index), value)

+struct qat_qp_hw_spec_funcs*
+	qat_qp_hw_spec[QAT_N_GENS];
+
 __extension__
 const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index e1627197fa..726cd2ef61 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -7,8 +7,6 @@
 #include "qat_common.h"
 #include "adf_transport_access_macros.h"

-struct qat_pci_device;
-
 #define QAT_CSR_HEAD_WRITE_THRESH 32U
 /* number of requests to accumulate before writing head CSR */

@@ -24,37 +22,7 @@ struct qat_pci_device;
 #define QAT_GEN4_BUNDLE_NUM             4
 #define QAT_GEN4_QPS_PER_BUNDLE_NUM     1

-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_hw_data {
-	enum qat_service_type service_type;
-	uint8_t hw_bundle_num;
-	uint8_t tx_ring_num;
-	uint8_t rx_ring_num;
-	uint16_t tx_msg_size;
-	uint16_t rx_msg_size;
-};
-
-/**
- * Structure with data needed for creation of queue pair on gen4.
- */
-struct qat_qp_gen4_data {
-	struct qat_qp_hw_data qat_qp_hw_data;
-	uint8_t reserved;
-	uint8_t valid;
-};
-
-/**
- * Structure with data needed for creation of queue pair.
- */
-struct qat_qp_config {
-	const struct qat_qp_hw_data *hw;
-	uint32_t nb_descriptors;
-	uint32_t cookie_size;
-	int socket_id;
-	const char *service_str;
-};
+struct qat_pci_device;

 /**
  * Structure associated with each queue.
@@ -96,8 +64,28 @@ struct qat_qp {
 	uint16_t min_enq_burst_threshold;
 } __rte_cache_aligned;

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_hw_data {
+	enum qat_service_type service_type;
+	uint8_t hw_bundle_num;
+	uint8_t tx_ring_num;
+	uint8_t rx_ring_num;
+	uint16_t tx_msg_size;
+	uint16_t rx_msg_size;
+};
+
+/**
+ * Structure with data needed for creation of queue pair.
+ */
+struct qat_qp_config {
+	const struct qat_qp_hw_data *hw;
+	uint32_t nb_descriptors;
+	uint32_t cookie_size;
+	int socket_id;
+	const char *service_str;
+};

 uint16_t
 qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops);
@@ -136,4 +124,49 @@ qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

+/**
+ * Function prototypes for GENx specific queue pair operations.
+ **/
+typedef int (*qat_qp_rings_per_service_t)
+		(struct qat_pci_device *, enum qat_service_type);
+
+typedef void (*qat_qp_build_ring_base_t)(void *, struct qat_queue *);
+
+typedef void (*qat_qp_adf_arb_enable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_arb_disable_t)(const struct qat_queue *, void *,
+		rte_spinlock_t *);
+
+typedef void (*qat_qp_adf_configure_queues_t)(struct qat_qp *);
+
+typedef void (*qat_qp_csr_write_tail_t)(struct qat_qp *qp, struct qat_queue *q);
+
+typedef void (*qat_qp_csr_write_head_t)(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+typedef void (*qat_qp_csr_setup_t)(struct qat_pci_device*, void *,
+		struct qat_qp *);
+
+typedef const struct qat_qp_hw_data * (*qat_qp_get_hw_data_t)(
+		struct qat_pci_device *dev, enum qat_service_type service_type,
+		uint16_t qp_id);
+
+struct qat_qp_hw_spec_funcs {
+	qat_qp_rings_per_service_t	qat_qp_rings_per_service;
+	qat_qp_build_ring_base_t	qat_qp_build_ring_base;
+	qat_qp_adf_arb_enable_t		qat_qp_adf_arb_enable;
+	qat_qp_adf_arb_disable_t	qat_qp_adf_arb_disable;
+	qat_qp_adf_configure_queues_t	qat_qp_adf_configure_queues;
+	qat_qp_csr_write_tail_t		qat_qp_csr_write_tail;
+	qat_qp_csr_write_head_t		qat_qp_csr_write_head;
+	qat_qp_csr_setup_t		qat_qp_csr_setup;
+	qat_qp_get_hw_data_t		qat_qp_get_hw_data;
+};
+
+extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];
+
+extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
+
 #endif /* _QAT_QP_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 4/9] common/qat: add gen specific queue implementation
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (2 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 3/9] common/qat: add gen specific queue pair function Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 5/9] compress/qat: define gen specific structs and functions Kai Ji
                                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT queue pair configuration
implementation with separate files that provide either shared or
generation-specific implementations for each QAT generation.
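
To make the reuse concrete, here is a minimal, standalone C sketch
(hypothetical names, not the driver's code): a generation that shares
the gen1 ring layout simply points its table entries at the shared
gen1 handlers, while a generation with a different CSR layout, as gen4
has, supplies its own:

#include <stdio.h>

struct ring_ops {
	void (*build_ring_base)(void);
	void (*csr_write_head)(unsigned int head);
};

/* Shared handlers; the real driver exports these from the gen1 file. */
static void build_ring_base_gen1(void)
{
	puts("gen1 ring-base CSR layout");
}

static void csr_write_head_gen1(unsigned int head)
{
	printf("gen1 head CSR <- %u\n", head);
}

/* A newer CSR layout means overriding every entry. */
static void build_ring_base_gen4(void)
{
	puts("gen4 VF ring-base CSR layout");
}

static void csr_write_head_gen4(unsigned int head)
{
	printf("gen4 head CSR <- %u\n", head);
}

/* Gen2 shares the gen1 ring layout: reuse the handlers verbatim. */
static struct ring_ops ring_ops_gen2 = {
	.build_ring_base = build_ring_base_gen1,
	.csr_write_head = csr_write_head_gen1,
};

static struct ring_ops ring_ops_gen4 = {
	.build_ring_base = build_ring_base_gen4,
	.csr_write_head = csr_write_head_gen4,
};

int main(void)
{
	ring_ops_gen2.build_ring_base();
	ring_ops_gen4.csr_write_head(16);
	return 0;
}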

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/dev/qat_dev_gen1.c         | 190 +++++
 drivers/common/qat/dev/qat_dev_gen2.c         |  14 +
 drivers/common/qat/dev/qat_dev_gen3.c         |  60 ++
 drivers/common/qat/dev/qat_dev_gen4.c         | 161 ++++-
 drivers/common/qat/dev/qat_dev_gens.h         |  37 +-
 .../qat/qat_adf/adf_transport_access_macros.h |   2 +
 drivers/common/qat/qat_device.h               |   3 -
 drivers/common/qat/qat_qp.c                   | 677 +++++++-----------
 drivers/common/qat/qat_qp.h                   |  24 +-
 drivers/crypto/qat/qat_sym_pmd.c              |  32 +-
 10 files changed, 723 insertions(+), 477 deletions(-)

diff --git a/drivers/common/qat/dev/qat_dev_gen1.c b/drivers/common/qat/dev/qat_dev_gen1.c
index 9972280e06..38757e6e40 100644
--- a/drivers/common/qat/dev/qat_dev_gen1.c
+++ b/drivers/common/qat/dev/qat_dev_gen1.c
@@ -3,6 +3,7 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

@@ -10,6 +11,194 @@

 #define ADF_ARB_REG_SLOT			0x1000

+#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
+	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
+	(ADF_ARB_REG_SLOT * index), value)
+
+__extension__
+const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 8,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+
+		}, {
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 9,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 2,
+			.rx_ring_num = 10,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		},
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 11,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 6,
+			.rx_ring_num = 14,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}, {
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 7,
+			.rx_ring_num = 15,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen1_qps[service_type] + qp_id;
+}
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0;
+
+	for (i = 0; i < ADF_MAX_QPS_ON_ANY_SERVICE; i++) {
+		const struct qat_qp_hw_data *hw_qps =
+				qat_qp_get_hw_data(qat_dev, service, i);
+		if (hw_qps->service_type == service)
+			count++;
+	}
+
+	return count;
+}
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_ARB_REG_SLOT *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset =  ADF_ARB_RINGSRVARBEN_OFFSET +
+				(ADF_ARB_REG_SLOT * txq->hw_bundle_number);
+	uint32_t value;
+
+	rte_spinlock_lock(lock);
+	value = ADF_CSR_RD(base_addr, arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
+		q->hw_queue_number, q->tail);
+}
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
+			q->hw_queue_number, new_head);
+}
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->tx_q);
+	qat_qp_csr_build_ring_base_gen1(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen1(qp);
+	qat_qp_adf_arb_enable_gen1(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen1 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 int
 qat_reset_ring_pairs_gen1(struct qat_pci_device *qat_pci_dev __rte_unused)
 {
@@ -59,6 +248,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen1 = {

 RTE_INIT(qat_dev_gen_gen1_init)
 {
+	qat_qp_hw_spec[QAT_GEN1] = &qat_qp_hw_spec_gen1;
 	qat_dev_hw_spec[QAT_GEN1] = &qat_dev_hw_spec_gen1;
 	qat_gen_config[QAT_GEN1].dev_gen = QAT_GEN1;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen2.c b/drivers/common/qat/dev/qat_dev_gen2.c
index d3470ed6b8..f077fe9eef 100644
--- a/drivers/common/qat/dev/qat_dev_gen2.c
+++ b/drivers/common/qat/dev/qat_dev_gen2.c
@@ -3,11 +3,24 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen2 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen1,
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +31,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen2 = {

 RTE_INIT(qat_dev_gen_gen2_init)
 {
+	qat_qp_hw_spec[QAT_GEN2] = &qat_qp_hw_spec_gen2;
 	qat_dev_hw_spec[QAT_GEN2] = &qat_dev_hw_spec_gen2;
 	qat_gen_config[QAT_GEN2].dev_gen = QAT_GEN2;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen3.c b/drivers/common/qat/dev/qat_dev_gen3.c
index e4a66869d2..de3fa17fa9 100644
--- a/drivers/common/qat/dev/qat_dev_gen3.c
+++ b/drivers/common/qat/dev/qat_dev_gen3.c
@@ -3,11 +3,70 @@
  */

 #include "qat_device.h"
+#include "qat_qp.h"
 #include "adf_transport_access_macros.h"
 #include "qat_dev_gens.h"

 #include <stdint.h>

+__extension__
+const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
+					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
+	/* queue pairs which provide an asymmetric crypto service */
+	[QAT_SERVICE_ASYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_ASYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 0,
+			.rx_ring_num = 4,
+			.tx_msg_size = 64,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a symmetric crypto service */
+	[QAT_SERVICE_SYMMETRIC] = {
+		{
+			.service_type = QAT_SERVICE_SYMMETRIC,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 1,
+			.rx_ring_num = 5,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	},
+	/* queue pairs which provide a compression service */
+	[QAT_SERVICE_COMPRESSION] = {
+		{
+			.service_type = QAT_SERVICE_COMPRESSION,
+			.hw_bundle_num = 0,
+			.tx_ring_num = 3,
+			.rx_ring_num = 7,
+			.tx_msg_size = 128,
+			.rx_msg_size = 32,
+		}
+	}
+};
+
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen3(struct qat_pci_device *dev __rte_unused,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	return qat_gen3_qps[service_type] + qp_id;
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen3 = {
+	.qat_qp_rings_per_service  = qat_qp_rings_per_service_gen1,
+	.qat_qp_build_ring_base = qat_qp_csr_build_ring_base_gen1,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen1,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen1,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen1,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen1,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen1,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen1,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen3
+};
+
 static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {
 	.qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen1,
 	.qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen1,
@@ -18,6 +77,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen3 = {

 RTE_INIT(qat_dev_gen_gen3_init)
 {
+	qat_qp_hw_spec[QAT_GEN3] = &qat_qp_hw_spec_gen3;
 	qat_dev_hw_spec[QAT_GEN3] = &qat_dev_hw_spec_gen3;
 	qat_gen_config[QAT_GEN3].dev_gen = QAT_GEN3;
 }
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 5e5423ebfa..7ffde5f4c8 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,10 +10,13 @@
 #include "adf_transport_access_macros_gen4vf.h"
 #include "adf_pf2vf_msg.h"
 #include "qat_pf2vf.h"
-#include "qat_dev_gens.h"

 #include <stdint.h>

+/* QAT GEN 4 specific macros */
+#define QAT_GEN4_BUNDLE_NUM             4
+#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
+
 struct qat_dev_gen4_extra {
 	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
 		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
@@ -28,7 +31,7 @@ static struct qat_pf2vf_dev qat_pf2vf_gen4 = {
 	.pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
 };

-int
+static int
 qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 {
 	struct qat_pf2vf_msg pf2vf_msg;
@@ -39,6 +42,52 @@ qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val)
 	return qat_pf2vf_exch_msg(qat_dev, pf2vf_msg, 2, val);
 }

+static int
+qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
+			enum qat_service_type service_type)
+{
+	int i = 0, valid_qps = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
+		if (dev_extra->qp_gen4_data[i][0].service_type ==
+			service_type) {
+			if (valid_qps == qp_id)
+				return i;
+			++valid_qps;
+		}
+	}
+	return -1;
+}
+
+static const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service_type, uint16_t qp_id)
+{
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+	int ring_pair = qat_select_valid_queue_gen4(qat_dev, qp_id,
+			service_type);
+
+	if (ring_pair < 0)
+		return NULL;
+
+	return &dev_extra->qp_gen4_data[ring_pair][0];
+}
+
+static int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
+{
+	int i = 0, count = 0, max_ops_per_srv = 0;
+	struct qat_dev_gen4_extra *dev_extra = qat_dev->dev_private;
+
+	max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
+	for (i = 0, count = 0; i < max_ops_per_srv; i++)
+		if (dev_extra->qp_gen4_data[i][0].service_type == service)
+			count++;
+	return count;
+}
+
 static enum qat_service_type
 gen4_pick_service(uint8_t hw_service)
 {
@@ -94,6 +143,109 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
 	return 0;
 }

+static void
+qat_qp_build_ring_base_gen4(void *io_addr,
+			struct qat_queue *queue)
+{
+	uint64_t queue_base;
+
+	queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
+			queue->queue_size);
+	WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
+		queue->hw_queue_number, queue_base);
+}
+
+static void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value |= (0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+			void *base_addr, rte_spinlock_t *lock)
+{
+	uint32_t arb_csr_offset = 0, value;
+
+	rte_spinlock_lock(lock);
+	arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
+			(ADF_RING_BUNDLE_SIZE_GEN4 *
+			txq->hw_bundle_number);
+	value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
+			arb_csr_offset);
+	value &= ~(0x01 << txq->hw_queue_number);
+	ADF_CSR_WR(base_addr, arb_csr_offset, value);
+	rte_spinlock_unlock(lock);
+}
+
+static void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
+{
+	uint32_t q_tx_config, q_resp_config;
+	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
+
+	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
+	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
+			ADF_RING_NEAR_WATERMARK_512,
+			ADF_RING_NEAR_WATERMARK_0);
+
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_tx->hw_bundle_number,	q_tx->hw_queue_number,
+		q_tx_config);
+	WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
+		q_rx->hw_bundle_number,	q_rx->hw_queue_number,
+		q_resp_config);
+}
+
+static void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
+{
+	WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
+		q->hw_bundle_number, q->hw_queue_number, q->tail);
+}
+
+static void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+			uint32_t new_head)
+{
+	WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
+			q->hw_bundle_number, q->hw_queue_number, new_head);
+}
+
+static void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+			void *io_addr, struct qat_qp *qp)
+{
+	qat_qp_build_ring_base_gen4(io_addr, &qp->tx_q);
+	qat_qp_build_ring_base_gen4(io_addr, &qp->rx_q);
+	qat_qp_adf_configure_queues_gen4(qp);
+	qat_qp_adf_arb_enable_gen4(&qp->tx_q, qp->mmap_bar_addr,
+					&qat_dev->arb_csr_lock);
+}
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
+	.qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+	.qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+	.qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+	.qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+	.qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+	.qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+	.qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+	.qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+	.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
 static int
 qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 {
@@ -116,8 +268,8 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
 	return 0;
 }

-static const struct
-rte_mem_resource *qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
+static const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
 {
 	return &pci_dev->mem_resource[0];
 }
@@ -146,6 +298,7 @@ static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen4 = {

 RTE_INIT(qat_dev_gen_4_init)
 {
+	qat_qp_hw_spec[QAT_GEN4] = &qat_qp_hw_spec_gen4;
 	qat_dev_hw_spec[QAT_GEN4] = &qat_dev_hw_spec_gen4;
 	qat_gen_config[QAT_GEN4].dev_gen = QAT_GEN4;
 	qat_gen_config[QAT_GEN4].pf2vf_dev = &qat_pf2vf_gen4;
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 4ad0ffa728..7c92f1938c 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -16,6 +16,40 @@ extern const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
 int
 qat_dev_get_extra_size_gen1(void);

+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen1(struct qat_pci_device *dev,
+		enum qat_service_type service_type, uint16_t qp_id);
+
+int
+qat_qp_rings_per_service_gen1(struct qat_pci_device *qat_dev,
+		enum qat_service_type service);
+
+void
+qat_qp_csr_build_ring_base_gen1(void *io_addr,
+		struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen1(const struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen1(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen1(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen1(struct qat_qp *qp, struct qat_queue *q,
+		uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen1(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp);
+
 int
 qat_reset_ring_pairs_gen1(
 		struct qat_pci_device *qat_pci_dev);
@@ -28,7 +62,4 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
 int
 qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);

-int
-qat_query_svc_gen4(struct qat_pci_device *qat_dev, uint8_t *val);
-
 #endif
diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros.h b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
index 504ffb7236..f98bbb5001 100644
--- a/drivers/common/qat/qat_adf/adf_transport_access_macros.h
+++ b/drivers/common/qat/qat_adf/adf_transport_access_macros.h
@@ -51,6 +51,8 @@
 #define ADF_MIN_RING_SIZE ADF_RING_SIZE_128
 #define ADF_MAX_RING_SIZE ADF_RING_SIZE_4M
 #define ADF_DEFAULT_RING_SIZE ADF_RING_SIZE_16K
+/* ARB CSR offset */
+#define ADF_ARB_RINGSRVARBEN_OFFSET 0x19C

 /* Maximum number of qps on a device for any service type */
 #define ADF_MAX_QPS_ON_ANY_SERVICE	2
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8b69206df5..8233cc045d 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -128,9 +128,6 @@ struct qat_pci_device {
 	/* Data relating to compression service */
 	struct qat_comp_dev_private *comp_dev;
 	/**< link back to compressdev private data */
-	struct qat_qp_hw_data qp_gen4_data[QAT_GEN4_BUNDLE_NUM]
-		[QAT_GEN4_QPS_PER_BUNDLE_NUM];
-	/**< Data of ring configuration on gen4 */
 	void *misc_bar_io_addr;
 	/**< Address of misc bar */
 	void *dev_private;
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 27994036b8..cde421eb77 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -18,124 +18,15 @@
 #include "qat_sym.h"
 #include "qat_asym.h"
 #include "qat_comp.h"
-#include "adf_transport_access_macros.h"
-#include "adf_transport_access_macros_gen4vf.h"
-#include "dev/qat_dev_gens.h"

 #define QAT_CQ_MAX_DEQ_RETRIES 10

 #define ADF_MAX_DESC				4096
 #define ADF_MIN_DESC				128

-#define ADF_ARB_REG_SLOT			0x1000
-#define ADF_ARB_RINGSRVARBEN_OFFSET		0x19C
-
-#define WRITE_CSR_ARB_RINGSRVARBEN(csr_addr, index, value) \
-	ADF_CSR_WR(csr_addr, ADF_ARB_RINGSRVARBEN_OFFSET + \
-	(ADF_ARB_REG_SLOT * index), value)
-
 struct qat_qp_hw_spec_funcs*
 	qat_qp_hw_spec[QAT_N_GENS];

-__extension__
-const struct qat_qp_hw_data qat_gen1_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 8,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-
-		}, {
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 9,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 2,
-			.rx_ring_num = 10,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		},
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 11,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 6,
-			.rx_ring_num = 14,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}, {
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 7,
-			.rx_ring_num = 15,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
-__extension__
-const struct qat_qp_hw_data qat_gen3_qps[QAT_MAX_SERVICES]
-					 [ADF_MAX_QPS_ON_ANY_SERVICE] = {
-	/* queue pairs which provide an asymmetric crypto service */
-	[QAT_SERVICE_ASYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_ASYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 0,
-			.rx_ring_num = 4,
-			.tx_msg_size = 64,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a symmetric crypto service */
-	[QAT_SERVICE_SYMMETRIC] = {
-		{
-			.service_type = QAT_SERVICE_SYMMETRIC,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 1,
-			.rx_ring_num = 5,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	},
-	/* queue pairs which provide a compression service */
-	[QAT_SERVICE_COMPRESSION] = {
-		{
-			.service_type = QAT_SERVICE_COMPRESSION,
-			.hw_bundle_num = 0,
-			.tx_ring_num = 3,
-			.rx_ring_num = 7,
-			.tx_msg_size = 128,
-			.rx_msg_size = 32,
-		}
-	}
-};
-
 static int qat_qp_check_queue_alignment(uint64_t phys_addr,
 	uint32_t queue_size_bytes);
 static void qat_queue_delete(struct qat_queue *queue);
@@ -143,77 +34,32 @@ static int qat_queue_create(struct qat_pci_device *qat_dev,
 	struct qat_queue *queue, struct qat_qp_config *, uint8_t dir);
 static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
 	uint32_t *queue_size_for_csr);
-static void adf_configure_queues(struct qat_qp *queue,
+static int adf_configure_queues(struct qat_qp *queue,
 	enum qat_device_gen qat_dev_gen);
-static void adf_queue_arb_enable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_enable(struct qat_pci_device *qat_dev,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
+static int adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
 	struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock);
+static int qat_qp_build_ring_base(struct qat_pci_device *qat_dev,
+	void *io_addr, struct qat_queue *queue);
+static const struct rte_memzone *queue_dma_zone_reserve(const char *queue_name,
+	uint32_t queue_size, int socket_id);
+static int qat_qp_csr_setup(struct qat_pci_device *qat_dev, void *io_addr,
+	struct qat_qp *qp);

-int qat_qps_per_service(struct qat_pci_device *qat_dev,
-		enum qat_service_type service)
-{
-	int i = 0, count = 0, max_ops_per_srv = 0;
-
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		max_ops_per_srv = QAT_GEN4_BUNDLE_NUM;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (qat_dev->qp_gen4_data[i][0].service_type == service)
-				count++;
-	} else {
-		const struct qat_qp_hw_data *sym_hw_qps =
-				qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[service];
-
-		max_ops_per_srv = ADF_MAX_QPS_ON_ANY_SERVICE;
-		for (i = 0, count = 0; i < max_ops_per_srv; i++)
-			if (sym_hw_qps[i].service_type == service)
-				count++;
-	}
-
-	return count;
-}
-
-static const struct rte_memzone *
-queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
-			int socket_id)
-{
-	const struct rte_memzone *mz;
-
-	mz = rte_memzone_lookup(queue_name);
-	if (mz != 0) {
-		if (((size_t)queue_size <= mz->len) &&
-				((socket_id == SOCKET_ID_ANY) ||
-					(socket_id == mz->socket_id))) {
-			QAT_LOG(DEBUG, "re-use memzone already "
-					"allocated for %s", queue_name);
-			return mz;
-		}
-
-		QAT_LOG(ERR, "Incompatible memzone already "
-				"allocated %s, size %u, socket %d. "
-				"Requested size %u, socket %u",
-				queue_name, (uint32_t)mz->len,
-				mz->socket_id, queue_size, socket_id);
-		return NULL;
-	}
-
-	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
-					queue_name, queue_size, socket_id);
-	return rte_memzone_reserve_aligned(queue_name, queue_size,
-		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
-}
-
-int qat_qp_setup(struct qat_pci_device *qat_dev,
+int
+qat_qp_setup(struct qat_pci_device *qat_dev,
 		struct qat_qp **qp_addr,
 		uint16_t queue_pair_id,
 		struct qat_qp_config *qat_qp_conf)
 {
-	struct qat_qp *qp;
+	struct qat_qp *qp = NULL;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
 	char op_cookie_pool_name[RTE_RING_NAMESIZE];
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+	void *io_addr;
 	uint32_t i;

 	QAT_LOG(DEBUG, "Setup qp %u on qat pci device %d gen %d",
@@ -226,7 +72,15 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -EINVAL;
 	}

-	if (pci_dev->mem_resource[0].addr == NULL) {
+	if (ops_hw->qat_dev_get_transport_bar == NULL) {
+		QAT_LOG(ERR,
+			"QAT internal error: qat_dev_get_transport_bar not set for gen %d",
+			qat_dev->qat_dev_gen);
+		goto create_err;
+	}
+
+	io_addr = ops_hw->qat_dev_get_transport_bar(pci_dev)->addr;
+	if (io_addr == NULL) {
 		QAT_LOG(ERR, "Could not find VF config space "
 				"(UIO driver attached?).");
 		return -EINVAL;
@@ -250,7 +104,7 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		return -ENOMEM;
 	}

-	qp->mmap_bar_addr = pci_dev->mem_resource[0].addr;
+	qp->mmap_bar_addr = io_addr;
 	qp->enqueued = qp->dequeued = 0;

 	if (qat_queue_create(qat_dev, &(qp->tx_q), qat_qp_conf,
@@ -277,10 +131,6 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 		goto create_err;
 	}

-	adf_configure_queues(qp, qat_dev_gen);
-	adf_queue_arb_enable(qat_dev_gen, &qp->tx_q, qp->mmap_bar_addr,
-					&qat_dev->arb_csr_lock);
-
 	snprintf(op_cookie_pool_name, RTE_RING_NAMESIZE,
 					"%s%d_cookies_%s_qp%hu",
 		pci_dev->driver->driver.name, qat_dev->qat_dev_id,
@@ -298,6 +148,8 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	if (!qp->op_cookie_pool) {
 		QAT_LOG(ERR, "QAT PMD Cannot create"
 				" op mempool");
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
 		goto create_err;
 	}

@@ -316,91 +168,32 @@ int qat_qp_setup(struct qat_pci_device *qat_dev,
 	QAT_LOG(DEBUG, "QP setup complete: id: %d, cookiepool: %s",
 			queue_pair_id, op_cookie_pool_name);

+	qat_qp_csr_setup(qat_dev, io_addr, qp);
+
 	*qp_addr = qp;
 	return 0;

 create_err:
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	return -EFAULT;
-}
-
-
-int qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
-{
-	struct qat_qp *qp = *qp_addr;
-	uint32_t i;
-
-	if (qp == NULL) {
-		QAT_LOG(DEBUG, "qp already freed");
-		return 0;
-	}
+	if (qp) {
+		if (qp->op_cookie_pool)
+			rte_mempool_free(qp->op_cookie_pool);

-	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
-				qp->qat_dev->qat_dev_id);
-
-	/* Don't free memory if there are still responses to be processed */
-	if ((qp->enqueued - qp->dequeued) == 0) {
-		qat_queue_delete(&(qp->tx_q));
-		qat_queue_delete(&(qp->rx_q));
-	} else {
-		return -EAGAIN;
-	}
+		if (qp->op_cookies)
+			rte_free(qp->op_cookies);

-	adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q), qp->mmap_bar_addr,
-				&qp->qat_dev->arb_csr_lock);
-
-	for (i = 0; i < qp->nb_descriptors; i++)
-		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
-
-	if (qp->op_cookie_pool)
-		rte_mempool_free(qp->op_cookie_pool);
-
-	rte_free(qp->op_cookies);
-	rte_free(qp);
-	*qp_addr = NULL;
-	return 0;
-}
-
-
-static void qat_queue_delete(struct qat_queue *queue)
-{
-	const struct rte_memzone *mz;
-	int status = 0;
-
-	if (queue == NULL) {
-		QAT_LOG(DEBUG, "Invalid queue");
-		return;
+		rte_free(qp);
 	}
-	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
-			queue->hw_queue_number, queue->memz_name);

-	mz = rte_memzone_lookup(queue->memz_name);
-	if (mz != NULL)	{
-		/* Write an unused pattern to the queue memory. */
-		memset(queue->base_addr, 0x7F, queue->queue_size);
-		status = rte_memzone_free(mz);
-		if (status != 0)
-			QAT_LOG(ERR, "Error %d on freeing queue %s",
-					status, queue->memz_name);
-	} else {
-		QAT_LOG(DEBUG, "queue %s doesn't exist",
-				queue->memz_name);
-	}
+	return -EFAULT;
 }

 static int
 qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 		struct qat_qp_config *qp_conf, uint8_t dir)
 {
-	uint64_t queue_base;
-	void *io_addr;
 	const struct rte_memzone *qp_mz;
 	struct rte_pci_device *pci_dev =
 			qat_pci_devs[qat_dev->qat_dev_id].pci_dev;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
 	int ret = 0;
 	uint16_t desc_size = (dir == ADF_RING_DIR_TX ?
 			qp_conf->hw->tx_msg_size : qp_conf->hw->rx_msg_size);
@@ -460,19 +253,6 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	 * Write an unused pattern to the queue memory.
 	 */
 	memset(queue->base_addr, 0x7F, queue_size_bytes);
-	io_addr = pci_dev->mem_resource[0].addr;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		queue_base = BUILD_RING_BASE_ADDR_GEN4(queue->base_phys_addr,
-					queue->queue_size);
-		WRITE_CSR_RING_BASE_GEN4VF(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	} else {
-		queue_base = BUILD_RING_BASE_ADDR(queue->base_phys_addr,
-				queue->queue_size);
-		WRITE_CSR_RING_BASE(io_addr, queue->hw_bundle_number,
-			queue->hw_queue_number, queue_base);
-	}

 	QAT_LOG(DEBUG, "RING: Name:%s, size in CSR: %u, in bytes %u,"
 		" nb msgs %u, msg_size %u, modulo mask %u",
@@ -488,202 +268,231 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
 	return ret;
 }

-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type)
+static const struct rte_memzone *
+queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size,
+		int socket_id)
 {
-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int i = 0, valid_qps = 0;
-
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			if (qat_dev->qp_gen4_data[i][0].service_type ==
-				service_type) {
-				if (valid_qps == qp_id)
-					return i;
-				++valid_qps;
-			}
+	const struct rte_memzone *mz;
+
+	mz = rte_memzone_lookup(queue_name);
+	if (mz != 0) {
+		if (((size_t)queue_size <= mz->len) &&
+				((socket_id == SOCKET_ID_ANY) ||
+					(socket_id == mz->socket_id))) {
+			QAT_LOG(DEBUG, "re-use memzone already "
+					"allocated for %s", queue_name);
+			return mz;
 		}
+
+		QAT_LOG(ERR, "Incompatible memzone already "
+				"allocated %s, size %u, socket %d. "
+				"Requested size %u, socket %u",
+				queue_name, (uint32_t)mz->len,
+				mz->socket_id, queue_size, socket_id);
+		return NULL;
 	}
-	return -1;
+
+	QAT_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u",
+					queue_name, queue_size, socket_id);
+	return rte_memzone_reserve_aligned(queue_name, queue_size,
+		socket_id, RTE_MEMZONE_IOVA_CONTIG, queue_size);
 }

 int
-qat_read_qp_config(struct qat_pci_device *qat_dev)
+qat_qp_release(enum qat_device_gen qat_dev_gen, struct qat_qp **qp_addr)
 {
-	int i = 0;
-	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
-
-	if (qat_dev_gen == QAT_GEN4) {
-		uint16_t svc = 0;
-
-		if (qat_query_svc_gen4(qat_dev, (uint8_t *)&svc))
-			return -(EFAULT);
-		for (; i < QAT_GEN4_BUNDLE_NUM; i++) {
-			struct qat_qp_hw_data *hw_data =
-				&qat_dev->qp_gen4_data[i][0];
-			uint8_t svc1 = (svc >> (3 * i)) & 0x7;
-			enum qat_service_type service_type = QAT_SERVICE_INVALID;
-
-			if (svc1 == QAT_SVC_SYM) {
-				service_type = QAT_SERVICE_SYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered SYMMETRIC service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_COMPRESSION) {
-				service_type = QAT_SERVICE_COMPRESSION;
-				QAT_LOG(DEBUG,
-					"Discovered COPRESSION service on bundle %d",
-					i);
-			} else if (svc1 == QAT_SVC_ASYM) {
-				service_type = QAT_SERVICE_ASYMMETRIC;
-				QAT_LOG(DEBUG,
-					"Discovered ASYMMETRIC service on bundle %d",
-					i);
-			} else {
-				QAT_LOG(ERR,
-					"Unrecognized service on bundle %d",
-					i);
-				return -(EFAULT);
-			}
+	int ret;
+	struct qat_qp *qp = *qp_addr;
+	uint32_t i;

-			memset(hw_data, 0, sizeof(*hw_data));
-			hw_data->service_type = service_type;
-			if (service_type == QAT_SERVICE_ASYMMETRIC) {
-				hw_data->tx_msg_size = 64;
-				hw_data->rx_msg_size = 32;
-			} else if (service_type == QAT_SERVICE_SYMMETRIC ||
-					service_type ==
-						QAT_SERVICE_COMPRESSION) {
-				hw_data->tx_msg_size = 128;
-				hw_data->rx_msg_size = 32;
-			}
-			hw_data->tx_ring_num = 0;
-			hw_data->rx_ring_num = 1;
-			hw_data->hw_bundle_num = i;
-		}
+	if (qp == NULL) {
+		QAT_LOG(DEBUG, "qp already freed");
 		return 0;
 	}
-	return -(EINVAL);
+
+	QAT_LOG(DEBUG, "Free qp on qat_pci device %d",
+				qp->qat_dev->qat_dev_id);
+
+	/* Don't free memory if there are still responses to be processed */
+	if ((qp->enqueued - qp->dequeued) == 0) {
+		qat_queue_delete(&(qp->tx_q));
+		qat_queue_delete(&(qp->rx_q));
+	} else {
+		return -EAGAIN;
+	}
+
+	ret = adf_queue_arb_disable(qat_dev_gen, &(qp->tx_q),
+			qp->mmap_bar_addr, &qp->qat_dev->arb_csr_lock);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < qp->nb_descriptors; i++)
+		rte_mempool_put(qp->op_cookie_pool, qp->op_cookies[i]);
+
+	if (qp->op_cookie_pool)
+		rte_mempool_free(qp->op_cookie_pool);
+
+	rte_free(qp->op_cookies);
+	rte_free(qp);
+	*qp_addr = NULL;
+	return 0;
 }

-static int qat_qp_check_queue_alignment(uint64_t phys_addr,
-					uint32_t queue_size_bytes)
+
+static void
+qat_queue_delete(struct qat_queue *queue)
 {
-	if (((queue_size_bytes - 1) & phys_addr) != 0)
-		return -EINVAL;
+	const struct rte_memzone *mz;
+	int status = 0;
+
+	if (queue == NULL) {
+		QAT_LOG(DEBUG, "Invalid queue");
+		return;
+	}
+	QAT_LOG(DEBUG, "Free ring %d, memzone: %s",
+			queue->hw_queue_number, queue->memz_name);
+
+	mz = rte_memzone_lookup(queue->memz_name);
+	if (mz != NULL)	{
+		/* Write an unused pattern to the queue memory. */
+		memset(queue->base_addr, 0x7F, queue->queue_size);
+		status = rte_memzone_free(mz);
+		if (status != 0)
+			QAT_LOG(ERR, "Error %d on freeing queue %s",
+					status, queue->memz_name);
+	} else {
+		QAT_LOG(DEBUG, "queue %s doesn't exist",
+				queue->memz_name);
+	}
+}
+
+static int __rte_unused
+adf_queue_arb_enable(struct qat_pci_device *qat_dev, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_enable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_enable(txq, base_addr, lock);
 	return 0;
 }

-static int adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
-	uint32_t *p_queue_size_for_csr)
+static int
+adf_queue_arb_disable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
+		void *base_addr, rte_spinlock_t *lock)
 {
-	uint8_t i = ADF_MIN_RING_SIZE;
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	for (; i <= ADF_MAX_RING_SIZE; i++)
-		if ((msg_size * msg_num) ==
-				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
-			*p_queue_size_for_csr = i;
-			return 0;
-		}
-	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
-	return -EINVAL;
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_arb_disable,
+			-ENOTSUP);
+	ops->qat_qp_adf_arb_disable(txq, base_addr, lock);
+	return 0;
 }

-static void
-adf_queue_arb_enable(enum qat_device_gen qat_dev_gen, struct qat_queue *txq,
-			void *base_addr, rte_spinlock_t *lock)
+static int __rte_unused
+qat_qp_build_ring_base(struct qat_pci_device *qat_dev, void *io_addr,
+		struct qat_queue *queue)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value |= (0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_build_ring_base,
+			-ENOTSUP);
+	ops->qat_qp_build_ring_base(io_addr, queue);
+	return 0;
 }

-static void adf_queue_arb_disable(enum qat_device_gen qat_dev_gen,
-		struct qat_queue *txq, void *base_addr, rte_spinlock_t *lock)
+int
+qat_qps_per_service(struct qat_pci_device *qat_dev,
+		enum qat_service_type service)
 {
-	uint32_t arb_csr_offset = 0, value;
-
-	rte_spinlock_lock(lock);
-	if (qat_dev_gen == QAT_GEN4) {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_RING_BUNDLE_SIZE_GEN4 *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN4VF,
-				arb_csr_offset);
-	} else {
-		arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET +
-				(ADF_ARB_REG_SLOT *
-				txq->hw_bundle_number);
-		value = ADF_CSR_RD(base_addr,
-				arb_csr_offset);
-	}
-	value &= ~(0x01 << txq->hw_queue_number);
-	ADF_CSR_WR(base_addr, arb_csr_offset, value);
-	rte_spinlock_unlock(lock);
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_rings_per_service,
+			-ENOTSUP);
+	return ops->qat_qp_rings_per_service(qat_dev, service);
 }

-static void adf_configure_queues(struct qat_qp *qp,
-		enum qat_device_gen qat_dev_gen)
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id)
 {
-	uint32_t q_tx_config, q_resp_config;
-	struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q;
-
-	q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size);
-	q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size,
-			ADF_RING_NEAR_WATERMARK_512,
-			ADF_RING_NEAR_WATERMARK_0);
-
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG_GEN4VF(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	} else {
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_tx->hw_bundle_number,	q_tx->hw_queue_number,
-			q_tx_config);
-		WRITE_CSR_RING_CONFIG(qp->mmap_bar_addr,
-			q_rx->hw_bundle_number,	q_rx->hw_queue_number,
-			q_resp_config);
-	}
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_get_hw_data, NULL);
+	return ops->qat_qp_get_hw_data(qat_dev, service, qp_id);
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+int
+qat_read_qp_config(struct qat_pci_device *qat_dev)
 {
-	return data & modulo_mask;
+	struct qat_dev_hw_spec_funcs *ops_hw =
+		qat_dev_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops_hw->qat_dev_read_config,
+			-ENOTSUP);
+	return ops_hw->qat_dev_read_config(qat_dev);
+}
+
+static int __rte_unused
+adf_configure_queues(struct qat_qp *qp, enum qat_device_gen qat_dev_gen)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_adf_configure_queues,
+			-ENOTSUP);
+	ops->qat_qp_adf_configure_queues(qp);
+	return 0;
 }

 static inline void
 txq_write_tail(enum qat_device_gen qat_dev_gen,
-		struct qat_qp *qp, struct qat_queue *q) {
+		struct qat_qp *qp, struct qat_queue *q)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];

-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, q->tail);
-	} else {
-		WRITE_CSR_RING_TAIL(qp->mmap_bar_addr, q->hw_bundle_number,
-			q->hw_queue_number, q->tail);
-	}
+	/*
+	 * The ops pointer is validated during initialization,
+	 * so no NULL check is needed on this datapath.
+	 */
+	ops->qat_qp_csr_write_tail(qp, q);
 }

+static inline void
+qat_qp_csr_write_head(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
+			struct qat_queue *q, uint32_t new_head)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev_gen];
+
+	/*
+	 * The ops pointer is validated during initialization,
+	 * so no NULL check is needed on this datapath.
+	 */
+	ops->qat_qp_csr_write_head(qp, q, new_head);
+}
+
+static int
+qat_qp_csr_setup(struct qat_pci_device *qat_dev,
+		void *io_addr, struct qat_qp *qp)
+{
+	struct qat_qp_hw_spec_funcs *ops =
+		qat_qp_hw_spec[qat_dev->qat_dev_gen];
+
+	RTE_FUNC_PTR_OR_ERR_RET(ops->qat_qp_csr_setup,
+			-ENOTSUP);
+	ops->qat_qp_csr_setup(qat_dev, io_addr, qp);
+	return 0;
+}
+
+
 static inline
 void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 				struct qat_queue *q)
@@ -707,15 +516,37 @@ void rxq_free_desc(enum qat_device_gen qat_dev_gen, struct qat_qp *qp,
 	q->nb_processed_responses = 0;
 	q->csr_head = new_head;

-	/* write current head to CSR */
-	if (qat_dev_gen == QAT_GEN4) {
-		WRITE_CSR_RING_HEAD_GEN4VF(qp->mmap_bar_addr,
-			q->hw_bundle_number, q->hw_queue_number, new_head);
-	} else {
-		WRITE_CSR_RING_HEAD(qp->mmap_bar_addr, q->hw_bundle_number,
-				q->hw_queue_number, new_head);
-	}
+	qat_qp_csr_write_head(qat_dev_gen, qp, q, new_head);
+}
+
+static int
+qat_qp_check_queue_alignment(uint64_t phys_addr, uint32_t queue_size_bytes)
+{
+	if (((queue_size_bytes - 1) & phys_addr) != 0)
+		return -EINVAL;
+	return 0;
+}
+
+static int
+adf_verify_queue_size(uint32_t msg_size, uint32_t msg_num,
+		uint32_t *p_queue_size_for_csr)
+{
+	uint8_t i = ADF_MIN_RING_SIZE;
+
+	for (; i <= ADF_MAX_RING_SIZE; i++)
+		if ((msg_size * msg_num) ==
+				(uint32_t)ADF_SIZE_TO_RING_SIZE_IN_BYTES(i)) {
+			*p_queue_size_for_csr = i;
+			return 0;
+		}
+	QAT_LOG(ERR, "Invalid ring size %d", msg_size * msg_num);
+	return -EINVAL;
+}

+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
+{
+	return data & modulo_mask;
 }

 uint16_t
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 726cd2ef61..deafb407b3 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -12,16 +12,6 @@

 #define QAT_QP_MIN_INFL_THRESHOLD	256

-/* Default qp configuration for GEN4 devices */
-#define QAT_GEN4_QP_DEFCON	(QAT_SERVICE_SYMMETRIC |	\
-				QAT_SERVICE_SYMMETRIC << 8 |	\
-				QAT_SERVICE_SYMMETRIC << 16 |	\
-				QAT_SERVICE_SYMMETRIC << 24)
-
-/* QAT GEN 4 specific macros */
-#define QAT_GEN4_BUNDLE_NUM             4
-#define QAT_GEN4_QPS_PER_BUNDLE_NUM     1
-
 struct qat_pci_device;

 /**
@@ -106,7 +96,11 @@ qat_qp_setup(struct qat_pci_device *qat_dev,

 int
 qat_qps_per_service(struct qat_pci_device *qat_dev,
-			enum qat_service_type service);
+		enum qat_service_type service);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
+		enum qat_service_type service, uint16_t qp_id);

 int
 qat_cq_get_fw_version(struct qat_qp *qp);
@@ -116,11 +110,6 @@ int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 			  void *op_cookie __rte_unused,
 			  uint64_t *dequeue_err_count __rte_unused);
-
-int
-qat_select_valid_queue(struct qat_pci_device *qat_dev, int qp_id,
-			enum qat_service_type service_type);
-
 int
 qat_read_qp_config(struct qat_pci_device *qat_dev);

@@ -166,7 +155,4 @@ struct qat_qp_hw_spec_funcs {

 extern struct qat_qp_hw_spec_funcs *qat_qp_hw_spec[];

-extern const struct qat_qp_hw_data qat_gen1_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-extern const struct qat_qp_hw_data qat_gen3_qps[][ADF_MAX_QPS_ON_ANY_SERVICE];
-
 #endif /* _QAT_QP_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index d4f087733f..5b8ee4bee6 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -164,35 +164,11 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	int ret = 0;
 	uint32_t i;
 	struct qat_qp_config qat_qp_conf;
-	const struct qat_qp_hw_data *sym_hw_qps = NULL;
-	const struct qat_qp_hw_data *qp_hw_data = NULL;
-
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;

-	if (qat_dev->qat_dev_gen == QAT_GEN4) {
-		int ring_pair =
-			qat_select_valid_queue(qat_dev, qp_id,
-				QAT_SERVICE_SYMMETRIC);
-
-		if (ring_pair < 0) {
-			QAT_LOG(ERR,
-				"qp_id %u invalid for this device, no enough services allocated for GEN4 device",
-				qp_id);
-			return -EINVAL;
-		}
-		sym_hw_qps =
-			&qat_dev->qp_gen4_data[0][0];
-		qp_hw_data =
-			&qat_dev->qp_gen4_data[ring_pair][0];
-	} else {
-		sym_hw_qps = qat_gen_config[qat_dev->qat_dev_gen]
-				.qp_hw_data[QAT_SERVICE_SYMMETRIC];
-		qp_hw_data = sym_hw_qps + qp_id;
-	}
-
 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
 		ret = qat_sym_qp_release(dev, qp_id);
@@ -204,7 +180,13 @@ static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
 	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
 	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
 	qat_qp_conf.socket_id = socket_id;
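
The wrappers above all follow the same dispatch shape: look up the
per-generation ops table, verify that the hook is implemented, and call
it. Below is a minimal self-contained sketch of that pattern; every
identifier is an illustrative stand-in, not one of the driver's actual
qat_qp_hw_spec symbols.

#include <stddef.h>
#include <errno.h>

enum dev_gen { GEN1, GEN2, N_GENS };

struct qp_hw_ops {
	int (*rings_per_service)(int service);
};

/* Indexed by device generation; each per-gen file fills its slot. */
static const struct qp_hw_ops *qp_hw_ops_tbl[N_GENS];

static int
qps_per_service(enum dev_gen gen, int service)
{
	const struct qp_hw_ops *ops = qp_hw_ops_tbl[gen];

	/* Same role as RTE_FUNC_PTR_OR_ERR_RET(): reject generations
	 * that do not implement this hook. */
	if (ops == NULL || ops->rings_per_service == NULL)
		return -ENOTSUP;
	return ops->rings_per_service(service);
}
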
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 5/9] compress/qat: define gen specific structs and functions
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (3 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 4/9] common/qat: add gen specific queue implementation Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 6/9] compress/qat: add gen specific implementation Kai Ji
                                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the compression data structures and function
prototypes for the different QAT generations.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 .../common/qat/qat_adf/icp_qat_hw_gen4_comp.h | 195 ++++++++++++
 .../qat/qat_adf/icp_qat_hw_gen4_comp_defs.h   | 299 ++++++++++++++++++
 drivers/common/qat/qat_common.h               |   4 +-
 drivers/common/qat/qat_device.h               |   7 -
 drivers/compress/qat/qat_comp.c               | 101 +++---
 drivers/compress/qat/qat_comp.h               |   8 +-
 drivers/compress/qat/qat_comp_pmd.c           | 159 ++++------
 drivers/compress/qat/qat_comp_pmd.h           |  76 +++++
 8 files changed, 675 insertions(+), 174 deletions(-)
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
 create mode 100644 drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h

diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
new file mode 100644
index 0000000000..ec69dc7105
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp.h
@@ -0,0 +1,195 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_H_
+#define _ICP_QAT_HW_GEN4_COMP_H_
+
+#include "icp_qat_fw.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+struct icp_qat_hw_comp_20_config_csr_lower {
+	icp_qat_hw_comp_20_extended_delay_match_mode_t edmm;
+	icp_qat_hw_comp_20_hw_comp_format_t algo;
+	icp_qat_hw_comp_20_search_depth_t sd;
+	icp_qat_hw_comp_20_hbs_control_t hbs;
+	icp_qat_hw_comp_20_abd_t abd;
+	icp_qat_hw_comp_20_lllbd_ctrl_t lllbd;
+	icp_qat_hw_comp_20_min_match_control_t mmctrl;
+	icp_qat_hw_comp_20_skip_hash_collision_t hash_col;
+	icp_qat_hw_comp_20_skip_hash_update_t hash_update;
+	icp_qat_hw_comp_20_byte_skip_t skip_ctrl;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_comp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.sd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK);
+
+	QAT_FIELD_SET(val32, csr.edmm,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK);
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_col,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK);
+
+	QAT_FIELD_SET(val32, csr.hash_update,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK);
+
+	QAT_FIELD_SET(val32, csr.abd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK);
+
+	QAT_FIELD_SET(val32, csr.lllbd,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_comp_20_config_csr_upper {
+	icp_qat_hw_comp_20_scb_control_t scb_ctrl;
+	icp_qat_hw_comp_20_rmb_control_t rmb_ctrl;
+	icp_qat_hw_comp_20_som_control_t som_ctrl;
+	icp_qat_hw_comp_20_skip_hash_rd_control_t skip_hash_ctrl;
+	icp_qat_hw_comp_20_scb_unload_control_t scb_unload_ctrl;
+	icp_qat_hw_comp_20_disable_token_fusion_control_t
+			disable_token_fusion_ctrl;
+	icp_qat_hw_comp_20_lbms_t lbms;
+	icp_qat_hw_comp_20_scb_mode_reset_mask_t scb_mode_reset;
+	uint16_t lazy;
+	uint16_t nice;
+};
+
+static inline uint32_t ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_comp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.scb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.rmb_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.som_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.skip_hash_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_unload_ctrl,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.disable_token_fusion_ctrl,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS,
+	ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.scb_mode_reset,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK);
+
+	QAT_FIELD_SET(val32, csr.lazy,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK);
+
+	QAT_FIELD_SET(val32, csr.nice,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS,
+		ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_lower {
+	icp_qat_hw_decomp_20_hbs_control_t hbs;
+	icp_qat_hw_decomp_20_lbms_t lbms;
+	icp_qat_hw_decomp_20_hw_comp_format_t algo;
+	icp_qat_hw_decomp_20_min_match_control_t mmctrl;
+	icp_qat_hw_decomp_20_lz4_block_checksum_present_t lbc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+		struct icp_qat_hw_decomp_20_config_csr_lower csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.hbs,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbms,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK);
+
+	QAT_FIELD_SET(val32, csr.algo,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK);
+
+	QAT_FIELD_SET(val32, csr.mmctrl,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.lbc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK);
+
+	return rte_bswap32(val32);
+}
+
+struct icp_qat_hw_decomp_20_config_csr_upper {
+	icp_qat_hw_decomp_20_speculative_decoder_control_t sdc;
+	icp_qat_hw_decomp_20_mini_cam_control_t mcc;
+};
+
+static inline uint32_t ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_UPPER(
+		struct icp_qat_hw_decomp_20_config_csr_upper csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32, csr.sdc,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS,
+	ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK);
+
+	QAT_FIELD_SET(val32, csr.mcc,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS,
+		ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK);
+
+	return rte_bswap32(val32);
+}
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_H_ */
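
The builders above are pure bit-packing: each field is masked, shifted
into a 32-bit word, and the result is byte-swapped for the firmware. A
self-contained sketch of those mechanics follows; the local macro
mirrors what QAT_FIELD_SET in the qat_adf headers is expected to do,
and the bit positions used are the DEFLATE/search-depth values from the
companion defs header below.

#include <stdint.h>

/* Stand-in for QAT_FIELD_SET: insert the masked value at bitpos. */
#define FIELD_SET(flags, val, bitpos, mask) \
	((flags) = (((flags) & ~((mask) << (bitpos))) | \
		    (((val) & (mask)) << (bitpos))))

static uint32_t
build_lower_sketch(void)
{
	uint32_t val32 = 0;

	FIELD_SET(val32, 0x1u, 5, 0x7u);	/* HW_COMP_FORMAT = DEFLATE */
	FIELD_SET(val32, 0x1u, 8, 0xfu);	/* SEARCH_DEPTH = level 1 */

	return val32;	/* the real builder returns rte_bswap32(val32) */
}
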
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
new file mode 100644
index 0000000000..ad02d06b12
--- /dev/null
+++ b/drivers/common/qat/qat_adf/icp_qat_hw_gen4_comp_defs.h
@@ -0,0 +1,299 @@
+/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0)
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _ICP_QAT_HW_GEN4_COMP_DEFS_H
+#define _ICP_QAT_HW_GEN4_COMP_DEFS_H
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_BITPOS	31
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_scb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_BITPOS	30
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL = 0x0,
+	ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_FC_ONLY = 0x1,
+} icp_qat_hw_comp_20_rmb_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_RMB_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_RMB_CONTROL_RESET_ALL
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_BITPOS	28
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE = 0x0,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE = 0x1,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_INPUT_CRC = 0x2,
+	ICP_QAT_HW_COMP_20_SOM_CONTROL_RESERVED_MODE = 0x3,
+} icp_qat_hw_comp_20_som_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SOM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SOM_CONTROL_NORMAL_MODE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_BITPOS	27
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_SKIP_HASH_READS = 0x1,
+} icp_qat_hw_comp_20_skip_hash_rd_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_RD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_RD_CONTROL_NO_SKIP
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_BITPOS	26
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_NO_UNLOAD = 0x1,
+} icp_qat_hw_comp_20_scb_unload_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_UNLOAD_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_UNLOAD_CONTROL_UNLOAD
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_BITPOS 21
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_comp_20_disable_token_fusion_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_DISABLE_TOKEN_FUSION_CONTROL_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_DISABLE_TOKEN_FUSION_CONTROL_ENABLE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_BITPOS	19
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_MASK		0x3
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_COMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_comp_20_lbms_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_BITPOS	18
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS = 0x0,
+	ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS_AND_HISTORY = 0x1,
+} icp_qat_hw_comp_20_scb_mode_reset_mask_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SCB_MODE_RESET_MASK_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SCB_MODE_RESET_MASK_RESET_COUNTERS
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_BITPOS	9
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL 258
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_MASK	0x1ff
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL 259
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_MASK		0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_comp_20_hbs_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_BITPOS	13
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_abd_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_ABD_ABD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_BITPOS	12
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED = 0x0,
+	ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED = 0x1,
+} icp_qat_hw_comp_20_lllbd_ctrl_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_BITPOS	8
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_MASK		0xf
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1 = 0x1,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6 = 0x3,
+	ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9 = 0x4,
+} icp_qat_hw_comp_20_search_depth_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77 = 0x0,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_comp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_comp_20_min_match_control_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_BITPOS	3
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_collision_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_BITPOS	2
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW = 0x0,
+	ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW = 0x1,
+} icp_qat_hw_comp_20_skip_hash_update_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_ALLOW
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_BITPOS	1
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN = 0x0,
+	ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL = 0x1,
+} icp_qat_hw_comp_20_byte_skip_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL	\
+		ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_TOKEN
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_BITPOS	0
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED = 0x0,
+	ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED = 0x1,
+} icp_qat_hw_comp_20_extended_delay_match_mode_t;
+
+#define ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL \
+		ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_DISABLED
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_BITPOS 31
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_speculative_decoder_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_SPECULATIVE_DECODER_CONTROL_DEFAULT_VAL\
+		ICP_QAT_HW_DECOMP_20_SPECULATIVE_DECODER_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_BITPOS	30
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_MASK	0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE = 0x0,
+	ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_DISABLE = 0x1,
+} icp_qat_hw_decomp_20_mini_cam_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MINI_CAM_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MINI_CAM_CONTROL_ENABLE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_BITPOS	14
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB = 0x0,
+} icp_qat_hw_decomp_20_hbs_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HBS_CONTROL_HBS_IS_32KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_BITPOS	8
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_MASK	0x3
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB = 0x0,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_256KB = 0x1,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_1MB = 0x2,
+	ICP_QAT_HW_DECOMP_20_LBMS_LBMS_4MB = 0x3,
+} icp_qat_hw_decomp_20_lbms_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LBMS_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_LBMS_LBMS_64KB
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_BITPOS	5
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_MASK	0x7
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE = 0x1,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4 = 0x2,
+	ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_LZ4S = 0x3,
+} icp_qat_hw_decomp_20_hw_comp_format_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_HW_DECOMP_FORMAT_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_BITPOS	4
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_MASK		0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B = 0x0,
+	ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_4B = 0x1,
+} icp_qat_hw_decomp_20_min_match_control_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL	\
+		ICP_QAT_HW_DECOMP_20_MIN_MATCH_CONTROL_MATCH_3B
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_BITPOS 3
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_MASK   0x1
+
+typedef enum {
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT  =  0x0,
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_PRESENT  =  0x1,
+} icp_qat_hw_decomp_20_lz4_block_checksum_present_t;
+
+#define ICP_QAT_HW_DECOMP_20_CONFIG_CSR_LZ4_BLOCK_CHECKSUM_PRESENT_DEFAULT_VAL \
+	ICP_QAT_HW_DECOMP_20_LZ4_BLOCK_CHKSUM_ABSENT
+
+#endif /* _ICP_QAT_HW_GEN4_COMP_DEFS_H */
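
Combined with the struct and builder from the previous header, a
generation-specific driver can assemble a default lower config word
entirely from the *_DEFAULT_VAL constants above. A hedged usage sketch
(the enclosing driver code, added later in this series, is elided):

struct icp_qat_hw_comp_20_config_csr_lower lower = {
	.edmm = ICP_QAT_HW_COMP_20_CONFIG_CSR_EXTENDED_DELAY_MATCH_MODE_DEFAULT_VAL,
	.algo = ICP_QAT_HW_COMP_20_CONFIG_CSR_HW_COMP_FORMAT_DEFAULT_VAL,
	.sd = ICP_QAT_HW_COMP_20_CONFIG_CSR_SEARCH_DEPTH_DEFAULT_VAL,
	.hbs = ICP_QAT_HW_COMP_20_CONFIG_CSR_HBS_CONTROL_DEFAULT_VAL,
	.abd = ICP_QAT_HW_COMP_20_CONFIG_CSR_ABD_DEFAULT_VAL,
	.lllbd = ICP_QAT_HW_COMP_20_CONFIG_CSR_LLLBD_CTRL_DEFAULT_VAL,
	.mmctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_MIN_MATCH_CONTROL_DEFAULT_VAL,
	.hash_col = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_COLLISION_DEFAULT_VAL,
	.hash_update = ICP_QAT_HW_COMP_20_CONFIG_CSR_SKIP_HASH_UPDATE_DEFAULT_VAL,
	.skip_ctrl = ICP_QAT_HW_COMP_20_CONFIG_CSR_BYTE_SKIP_DEFAULT_VAL,
};
uint32_t lower_csr = ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(lower);
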
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 1889ec4e88..a7632e31f8 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -13,9 +13,9 @@
 #define QAT_64_BTYE_ALIGN_MASK (~0x3f)

 /* Intel(R) QuickAssist Technology device generation is enumerated
- * from one according to the generation of the device
+ * from one according to the generation of the device.
+ * QAT_GEN* is also used as the index into the per-generation tables.
  */
-
 enum qat_device_gen {
 	QAT_GEN1,
 	QAT_GEN2,
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index 8233cc045d..e7c7e9af95 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -49,12 +49,6 @@ struct qat_dev_cmd_param {
 	uint16_t val;
 };

-enum qat_comp_num_im_buffers {
-	QAT_NUM_INTERM_BUFS_GEN1 = 12,
-	QAT_NUM_INTERM_BUFS_GEN2 = 20,
-	QAT_NUM_INTERM_BUFS_GEN3 = 64
-};
-
 struct qat_device_info {
 	const struct rte_memzone *mz;
 	/**< mz to store the qat_pci_device so it can be
@@ -137,7 +131,6 @@ struct qat_pci_device {
 struct qat_gen_hw_data {
 	enum qat_device_gen dev_gen;
 	const struct qat_qp_hw_data (*qp_hw_data)[ADF_MAX_QPS_ON_ANY_SERVICE];
-	enum qat_comp_num_im_buffers comp_num_im_bufs_required;
 	struct qat_pf2vf_dev *pf2vf_dev;
 };

diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 7ac25a3b4c..e8f57c3cc4 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018-2019 Intel Corporation
+ * Copyright(c) 2018-2021 Intel Corporation
  */

 #include <rte_mempool.h>
@@ -332,7 +332,8 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	return 0;
 }

-static inline uint32_t adf_modulo(uint32_t data, uint32_t modulo_mask)
+static inline uint32_t
+adf_modulo(uint32_t data, uint32_t modulo_mask)
 {
 	return data & modulo_mask;
 }
@@ -793,8 +794,9 @@ qat_comp_stream_size(void)
 	return RTE_ALIGN_CEIL(sizeof(struct qat_comp_stream), 8);
 }

-static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
-				    enum qat_comp_request_type request)
+static void
+qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
+	    enum qat_comp_request_type request)
 {
 	if (request == QAT_COMP_REQUEST_FIXED_COMP_STATELESS)
 		header->service_cmd_id = ICP_QAT_FW_COMP_CMD_STATIC;
@@ -811,16 +813,17 @@ static void qat_comp_create_req_hdr(struct icp_qat_fw_comn_req_hdr *header,
 	    QAT_COMN_CD_FLD_TYPE_16BYTE_DATA, QAT_COMN_PTR_TYPE_FLAT);
 }

-static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
-			const struct rte_memzone *interm_buff_mz,
-			const struct rte_comp_xform *xform,
-			const struct qat_comp_stream *stream,
-			enum rte_comp_op_type op_type)
+static int
+qat_comp_create_templates(struct qat_comp_xform *qat_xform,
+			  const struct rte_memzone *interm_buff_mz,
+			  const struct rte_comp_xform *xform,
+			  const struct qat_comp_stream *stream,
+			  enum rte_comp_op_type op_type,
+			  enum qat_device_gen qat_dev_gen)
 {
 	struct icp_qat_fw_comp_req *comp_req;
-	int comp_level, algo;
 	uint32_t req_par_flags;
-	int direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+	int res;

 	if (unlikely(qat_xform == NULL)) {
 		QAT_LOG(ERR, "Session was not created for this device");
@@ -839,46 +842,17 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		}
 	}

-	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
-		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
-		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS)
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL,
 				ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	} else {
-		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level == 1)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
-		else if (xform->compress.level == 2)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
-		else if (xform->compress.level == 3)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
-		else if (xform->compress.level >= 4 &&
-			 xform->compress.level <= 9)
-			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
-		else {
-			QAT_LOG(ERR, "compression level not supported");
-			return -EINVAL;
-		}
+	else
 		req_par_flags = ICP_QAT_FW_COMP_REQ_PARAM_FLAGS_BUILD(
 				ICP_QAT_FW_COMP_SOP, ICP_QAT_FW_COMP_EOP,
 				ICP_QAT_FW_COMP_BFINAL, ICP_QAT_FW_COMP_CNV,
 				ICP_QAT_FW_COMP_CNV_RECOVERY);
-	}
-
-	switch (xform->compress.algo) {
-	case RTE_COMP_ALGO_DEFLATE:
-		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
-		break;
-	case RTE_COMP_ALGO_LZS:
-	default:
-		/* RTE_COMP_NULL */
-		QAT_LOG(ERR, "compression algorithm not supported");
-		return -EINVAL;
-	}

 	comp_req = &qat_xform->qat_comp_req_tmpl;

@@ -899,18 +873,10 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 		comp_req->comp_cd_ctrl.comp_state_addr =
 				stream->state_registers_decomp_phys;

-		/* Enable A, B, C, D, and E (CAMs). */
+		/* RAM bank flags */
 		comp_req->comp_cd_ctrl.ram_bank_flags =
-			ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
-				ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
-				ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
-				ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+				qat_comp_gen_dev_ops[qat_dev_gen]
+					.qat_comp_get_ram_bank_flags();

 		comp_req->comp_cd_ctrl.ram_banks_addr =
 				stream->inflate_context_phys;
@@ -924,13 +890,11 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 			ICP_QAT_FW_COMP_ENABLE_SECURE_RAM_USED_AS_INTMD_BUF);
 	}

-	comp_req->cd_pars.sl.comp_slice_cfg_word[0] =
-	    ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
-		direction,
-		/* In CPM 1.6 only valid mode ! */
-		ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED, algo,
-		/* Translate level to depth */
-		comp_level, ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+	res = qat_comp_gen_dev_ops[qat_dev_gen].qat_comp_set_slice_cfg_word(
+			qat_xform, xform, op_type,
+			comp_req->cd_pars.sl.comp_slice_cfg_word);
+	if (res)
+		return res;

 	comp_req->comp_pars.initial_adler = 1;
 	comp_req->comp_pars.initial_crc32 = 0;
@@ -958,7 +922,8 @@ static int qat_comp_create_templates(struct qat_comp_xform *qat_xform,
 				ICP_QAT_FW_SLICE_XLAT);

 		comp_req->u1.xlt_pars.inter_buff_ptr =
-				interm_buff_mz->iova;
+				(qat_comp_get_num_im_bufs_required(qat_dev_gen)
+					== 0) ? 0 : interm_buff_mz->iova;
 	}

 #if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
@@ -991,6 +956,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 			      void **private_xform)
 {
 	struct qat_comp_dev_private *qat = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = qat->qat_dev->qat_dev_gen;
+	unsigned int im_bufs = qat_comp_get_num_im_bufs_required(qat_dev_gen);

 	if (unlikely(private_xform == NULL)) {
 		QAT_LOG(ERR, "QAT: private_xform parameter is NULL");
@@ -1012,7 +979,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,

 		if (xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_FIXED ||
 		  ((xform->compress.deflate.huffman == RTE_COMP_HUFFMAN_DEFAULT)
-				   && qat->interm_buff_mz == NULL))
+				   && qat->interm_buff_mz == NULL
+				   && im_bufs > 0))
 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_FIXED_COMP_STATELESS;

@@ -1020,7 +988,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 				RTE_COMP_HUFFMAN_DYNAMIC ||
 				xform->compress.deflate.huffman ==
 						RTE_COMP_HUFFMAN_DEFAULT) &&
-				qat->interm_buff_mz != NULL)
+				(qat->interm_buff_mz != NULL ||
+						im_bufs == 0))

 			qat_xform->qat_comp_request_type =
 					QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS;
@@ -1039,7 +1008,8 @@ qat_comp_private_xform_create(struct rte_compressdev *dev,
 	}

 	if (qat_comp_create_templates(qat_xform, qat->interm_buff_mz, xform,
-				      NULL, RTE_COMP_OP_STATELESS)) {
+				      NULL, RTE_COMP_OP_STATELESS,
+				      qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: Problem with setting compression");
 		return -EINVAL;
 	}
@@ -1138,7 +1108,8 @@ qat_comp_stream_create(struct rte_compressdev *dev,
 	ptr->qat_xform.checksum_type = xform->decompress.chksum;

 	if (qat_comp_create_templates(&ptr->qat_xform, qat->interm_buff_mz,
-				      xform, ptr, RTE_COMP_OP_STATEFUL)) {
+				      xform, ptr, RTE_COMP_OP_STATEFUL,
+				      qat->qat_dev->qat_dev_gen)) {
 		QAT_LOG(ERR, "QAT: problem with creating descriptor template for stream");
 		rte_mempool_put(qat->streampool, *stream);
 		*stream = NULL;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 0444b50a1e..da7b9a6eec 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -28,14 +28,16 @@
 #define QAT_MIN_OUT_BUF_SIZE 46

 /* maximum size of the state registers */
-#define QAT_STATE_REGISTERS_MAX_SIZE 64
+#define QAT_STATE_REGISTERS_MAX_SIZE 256 /* 64 bytes for GEN1-3, 256 for GEN4 */

 /* decompressor context size */
 #define QAT_INFLATE_CONTEXT_SIZE_GEN1 36864
 #define QAT_INFLATE_CONTEXT_SIZE_GEN2 34032
 #define QAT_INFLATE_CONTEXT_SIZE_GEN3 34032
-#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(QAT_INFLATE_CONTEXT_SIZE_GEN1,\
-		QAT_INFLATE_CONTEXT_SIZE_GEN2), QAT_INFLATE_CONTEXT_SIZE_GEN3)
+#define QAT_INFLATE_CONTEXT_SIZE_GEN4 36864
+#define QAT_INFLATE_CONTEXT_SIZE RTE_MAX(RTE_MAX(RTE_MAX(\
+		QAT_INFLATE_CONTEXT_SIZE_GEN1, QAT_INFLATE_CONTEXT_SIZE_GEN2), \
+		QAT_INFLATE_CONTEXT_SIZE_GEN3), QAT_INFLATE_CONTEXT_SIZE_GEN4)

 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index caac7839e9..9b24d46e97 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -9,30 +9,29 @@

 #define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16

+struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[QAT_N_GENS];
+
 struct stream_create_info {
 	struct qat_comp_dev_private *comp_dev;
 	int socket_id;
 	int error;
 };

-static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
-	{/* COMPRESSION - deflate */
-	 .algo = RTE_COMP_ALGO_DEFLATE,
-	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
-				RTE_COMP_FF_CRC32_CHECKSUM |
-				RTE_COMP_FF_ADLER32_CHECKSUM |
-				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
-				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
-				RTE_COMP_FF_HUFFMAN_FIXED |
-				RTE_COMP_FF_HUFFMAN_DYNAMIC |
-				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
-				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
-				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
-				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
-	 .window_size = {.min = 15, .max = 15, .increment = 0} },
-	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+static struct
+qat_comp_capabilities_info qat_comp_get_capa_info(
+		enum qat_device_gen qat_dev_gen, struct qat_pci_device *qat_dev)
+{
+	struct qat_comp_capabilities_info ret = { .data = NULL, .size = 0 };

-static void
+	if (qat_dev_gen >= QAT_N_GENS)
+		return ret;
+	RTE_FUNC_PTR_OR_ERR_RET(qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities, ret);
+	return qat_comp_gen_dev_ops[qat_dev_gen]
+			.qat_comp_get_capabilities(qat_dev);
+}
+
+void
 qat_comp_stats_get(struct rte_compressdev *dev,
 		struct rte_compressdev_stats *stats)
 {
@@ -52,7 +51,7 @@ qat_comp_stats_get(struct rte_compressdev *dev,
 	stats->dequeue_err_count = qat_stats.dequeue_err_count;
 }

-static void
+void
 qat_comp_stats_reset(struct rte_compressdev *dev)
 {
 	struct qat_comp_dev_private *qat_priv;
@@ -67,7 +66,7 @@ qat_comp_stats_reset(struct rte_compressdev *dev)

 }

-static int
+int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
@@ -95,23 +94,18 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }

-static int
+int
 qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
-		  uint32_t max_inflight_ops, int socket_id)
+		uint32_t max_inflight_ops, int socket_id)
 {
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-
+	struct qat_qp_config qat_qp_conf = {0};
 	struct qat_qp **qp_addr =
 			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
 	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *comp_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_COMPRESSION];
-	const struct qat_qp_hw_data *qp_hw_data = comp_hw_qps + qp_id;
+	struct qat_qp *qp;
+	uint32_t i;
+	int ret;

 	/* If qp is already in use free ring memory and qp metadata. */
 	if (*qp_addr != NULL) {
@@ -125,7 +119,13 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		return -EINVAL;
 	}

-	qat_qp_conf.hw = qp_hw_data;
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_COMPRESSION,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
 	qat_qp_conf.cookie_size = sizeof(struct qat_comp_op_cookie);
 	qat_qp_conf.nb_descriptors = max_inflight_ops;
 	qat_qp_conf.socket_id = socket_id;
@@ -134,7 +134,6 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
 	if (ret != 0)
 		return ret;
-
 	/* store a link to the qp in the qat_pci_device */
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][qp_id]
 								= *qp_addr;
@@ -189,7 +188,7 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,


 #define QAT_IM_BUFFER_DEBUG 0
-static const struct rte_memzone *
+const struct rte_memzone *
 qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 			      uint32_t buff_size)
 {
@@ -202,8 +201,8 @@ qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
 	uint32_t full_size;
 	uint32_t offset_of_flat_buffs;
 	int i;
-	int num_im_sgls = qat_gen_config[
-		comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+	int num_im_sgls = qat_comp_get_num_im_bufs_required(
+			comp_dev->qat_dev->qat_dev_gen);

 	QAT_LOG(DEBUG, "QAT COMP device %s needs %d sgls",
 				comp_dev->qat_dev->name, num_im_sgls);
@@ -480,8 +479,8 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	/* Free intermediate buffers */
 	if (comp_dev->interm_buff_mz) {
 		char mz_name[RTE_MEMZONE_NAMESIZE];
-		int i = qat_gen_config[
-		      comp_dev->qat_dev->qat_dev_gen].comp_num_im_bufs_required;
+		int i = qat_comp_get_num_im_bufs_required(
+				comp_dev->qat_dev->qat_dev_gen);

 		while (--i >= 0) {
 			snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
@@ -509,28 +508,13 @@ _qat_comp_dev_config_clear(struct qat_comp_dev_private *comp_dev)
 	}
 }

-static int
+int
 qat_comp_dev_config(struct rte_compressdev *dev,
 		struct rte_compressdev_config *config)
 {
 	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
 	int ret = 0;

-	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
-		QAT_LOG(WARNING,
-			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
-			" QAT device can't be used for Dynamic Deflate. "
-			"Did you really intend to do this?");
-	} else {
-		comp_dev->interm_buff_mz =
-				qat_comp_setup_inter_buffers(comp_dev,
-					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
-		if (comp_dev->interm_buff_mz == NULL) {
-			ret = -ENOMEM;
-			goto error_out;
-		}
-	}
-
 	if (config->max_nb_priv_xforms) {
 		comp_dev->xformpool = qat_comp_create_xform_pool(comp_dev,
 					    config, config->max_nb_priv_xforms);
@@ -558,19 +542,19 @@ qat_comp_dev_config(struct rte_compressdev *dev,
 	return ret;
 }

-static int
+int
 qat_comp_dev_start(struct rte_compressdev *dev __rte_unused)
 {
 	return 0;
 }

-static void
+void
 qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused)
 {

 }

-static int
+int
 qat_comp_dev_close(struct rte_compressdev *dev)
 {
 	int i;
@@ -588,8 +572,7 @@ qat_comp_dev_close(struct rte_compressdev *dev)
 	return ret;
 }

-
-static void
+void
 qat_comp_dev_info_get(struct rte_compressdev *dev,
 			struct rte_compressdev_info *info)
 {
@@ -662,27 +645,6 @@ qat_comp_pmd_dequeue_first_op_burst(void *qp, struct rte_comp_op **ops,
 	return ret;
 }

-static struct rte_compressdev_ops compress_qat_ops = {
-
-	/* Device related operations */
-	.dev_configure		= qat_comp_dev_config,
-	.dev_start		= qat_comp_dev_start,
-	.dev_stop		= qat_comp_dev_stop,
-	.dev_close		= qat_comp_dev_close,
-	.dev_infos_get		= qat_comp_dev_info_get,
-
-	.stats_get		= qat_comp_stats_get,
-	.stats_reset		= qat_comp_stats_reset,
-	.queue_pair_setup	= qat_comp_qp_setup,
-	.queue_pair_release	= qat_comp_qp_release,
-
-	/* Compression related operations */
-	.private_xform_create	= qat_comp_private_xform_create,
-	.private_xform_free	= qat_comp_private_xform_free,
-	.stream_create		= qat_comp_stream_create,
-	.stream_free		= qat_comp_stream_free
-};
-
 /* An rte_driver is needed in the registration of the device with compressdev.
  * The actual qat pci's rte_driver can't be used as its name represents
  * the whole pci device with all services. Think of this as a holder for a name
@@ -693,6 +655,7 @@ static const struct rte_driver compdev_qat_driver = {
 	.name = qat_comp_drv_name,
 	.alias = qat_comp_drv_name
 };
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -708,17 +671,21 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
 	struct rte_compressdev *compressdev;
 	struct qat_comp_dev_private *comp_dev;
+	struct qat_comp_capabilities_info capabilities_info;
 	const struct rte_compressdev_capabilities *capabilities;
+	const struct qat_comp_gen_dev_ops *qat_comp_gen_ops =
+			&qat_comp_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Compression PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_COMPRESSDEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "comp");
 	QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);

+	if (qat_comp_gen_ops->compressdev_ops == NULL) {
+		QAT_LOG(DEBUG, "Device %s does not support compression", name);
+		return -ENOTSUP;
+	}
+
 	/* Populate subset device to use in compressdev device creation */
 	qat_dev_instance->comp_rte_dev.driver = &compdev_qat_driver;
 	qat_dev_instance->comp_rte_dev.numa_node =
@@ -733,13 +700,13 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	if (compressdev == NULL)
 		return -ENODEV;

-	compressdev->dev_ops = &compress_qat_ops;
+	compressdev->dev_ops = qat_comp_gen_ops->compressdev_ops;

 	compressdev->enqueue_burst = (compressdev_enqueue_pkt_burst_t)
 			qat_enqueue_comp_op_burst;
 	compressdev->dequeue_burst = qat_comp_pmd_dequeue_first_op_burst;
-
-	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+	compressdev->feature_flags =
+			qat_comp_gen_ops->qat_comp_get_feature_flags();

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -752,22 +719,20 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 	comp_dev->qat_dev = qat_pci_dev;
 	comp_dev->compressdev = compressdev;

-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-	case QAT_GEN2:
-	case QAT_GEN3:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
-		break;
-	default:
-		capabilities = qat_comp_gen_capabilities;
-		capa_size = sizeof(qat_comp_gen_capabilities);
+	capabilities_info = qat_comp_get_capa_info(qat_pci_dev->qat_dev_gen,
+			qat_pci_dev);
+
+	if (capabilities_info.data == NULL) {
 		QAT_LOG(DEBUG,
 			"QAT gen %d capabilities unknown, default to GEN1",
 					qat_pci_dev->qat_dev_gen);
-		break;
+		capabilities_info = qat_comp_get_capa_info(QAT_GEN1,
+				qat_pci_dev);
 	}

+	capabilities = capabilities_info.data;
+	capa_size = capabilities_info.size;
+
 	comp_dev->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (comp_dev->capa_mz == NULL) {
 		comp_dev->capa_mz = rte_memzone_reserve(capa_memz_name,
diff --git a/drivers/compress/qat/qat_comp_pmd.h b/drivers/compress/qat/qat_comp_pmd.h
index 252b4b24e3..86317a513c 100644
--- a/drivers/compress/qat/qat_comp_pmd.h
+++ b/drivers/compress/qat/qat_comp_pmd.h
@@ -11,10 +11,44 @@
 #include <rte_compressdev_pmd.h>

 #include "qat_device.h"
+#include "qat_comp.h"

 /**< Intel(R) QAT Compression PMD driver name */
 #define COMPRESSDEV_NAME_QAT_PMD	compress_qat

+/* Private data structure for a QAT compression device capability. */
+struct qat_comp_capabilities_info {
+	const struct rte_compressdev_capabilities *data;
+	uint64_t size;
+};
+
+/**
+ * Function prototypes for GENx specific compress device operations.
+ **/
+typedef struct qat_comp_capabilities_info (*get_comp_capabilities_info_t)
+		(struct qat_pci_device *qat_dev);
+
+typedef uint16_t (*get_comp_ram_bank_flags_t)(void);
+
+typedef int (*set_comp_slice_cfg_word_t)(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+typedef unsigned int (*get_comp_num_im_bufs_required_t)(void);
+
+typedef uint64_t (*get_comp_feature_flags_t)(void);
+
+struct qat_comp_gen_dev_ops {
+	struct rte_compressdev_ops *compressdev_ops;
+	get_comp_feature_flags_t qat_comp_get_feature_flags;
+	get_comp_capabilities_info_t qat_comp_get_capabilities;
+	get_comp_ram_bank_flags_t qat_comp_get_ram_bank_flags;
+	set_comp_slice_cfg_word_t qat_comp_set_slice_cfg_word;
+	get_comp_num_im_bufs_required_t qat_comp_get_num_im_bufs_required;
+};
+
+extern struct qat_comp_gen_dev_ops qat_comp_gen_dev_ops[];
+
 /** private data structure for a QAT compression device.
  * This QAT device is a device offering only a compression service,
  * there can be one of these on each qat_pci_device (VF).
@@ -37,6 +71,41 @@ struct qat_comp_dev_private {
 	uint16_t min_enq_burst_threshold;
 };

+int
+qat_comp_dev_config(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config);
+
+int
+qat_comp_dev_start(struct rte_compressdev *dev __rte_unused);
+
+void
+qat_comp_dev_stop(struct rte_compressdev *dev __rte_unused);
+
+int
+qat_comp_dev_close(struct rte_compressdev *dev);
+
+void
+qat_comp_dev_info_get(struct rte_compressdev *dev,
+		struct rte_compressdev_info *info);
+
+void
+qat_comp_stats_get(struct rte_compressdev *dev,
+		struct rte_compressdev_stats *stats);
+
+void
+qat_comp_stats_reset(struct rte_compressdev *dev);
+
+int
+qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id);
+
+int
+qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+		uint32_t max_inflight_ops, int socket_id);
+
+const struct rte_memzone *
+qat_comp_setup_inter_buffers(struct qat_comp_dev_private *comp_dev,
+		uint32_t buff_size);
+
 int
 qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -44,5 +113,12 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_comp_dev_destroy(struct qat_pci_device *qat_pci_dev);

+
+static __rte_always_inline unsigned int
+qat_comp_get_num_im_bufs_required(enum qat_device_gen gen)
+{
+	return (*qat_comp_gen_dev_ops[gen].qat_comp_get_num_im_bufs_required)();
+}
+
 #endif
 #endif /* _QAT_COMP_PMD_H_ */
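
For context, generic compression code reaches a generation's
implementation only through this table. A hedged sketch of a caller,
assuming the slot was populated by the generation's RTE_INIT
constructor (added in the next patch):

static uint64_t
comp_feature_flags_sketch(enum qat_device_gen gen)
{
	const struct qat_comp_gen_dev_ops *ops = &qat_comp_gen_dev_ops[gen];

	/* A generation without compression leaves compressdev_ops NULL. */
	if (ops->compressdev_ops == NULL)
		return 0;
	return ops->qat_comp_get_feature_flags();
}
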
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 6/9] compress/qat: add gen specific implementation
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (4 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 5/9] compress/qat: define gen specific structs and functions Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 7/9] crypto/qat: unified device private data structure Kai Ji
                                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Adam Dybkowski, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT compression implementation
with separate files containing either shared or generation-specific
code for each QAT generation.

Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com>
Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/meson.build               |   4 +-
 drivers/compress/qat/dev/qat_comp_pmd_gen1.c | 176 +++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gen2.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen3.c |  30 +++
 drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 213 +++++++++++++++++++
 drivers/compress/qat/dev/qat_comp_pmd_gens.h |  30 +++
 6 files changed, 482 insertions(+), 1 deletion(-)
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen1.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen2.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen3.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen4.c
 create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gens.h
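
The per-generation files listed above hook themselves into
qat_comp_gen_dev_ops with RTE_INIT constructors, so linking a
generation's object file is enough to enable it and no central switch
statement is needed. A stand-alone illustration of that registration
pattern; the macro is a simplified stand-in for DPDK's RTE_INIT() and
all names are illustrative:

typedef unsigned int (*get_bufs_t)(void);
static get_bufs_t get_bufs_tbl[4];	/* indexed by generation */

/* Simplified stand-in for DPDK's RTE_INIT(): runs before main(). */
#define INIT(fn) \
	static void fn(void); \
	static void __attribute__((constructor)) fn(void)

static unsigned int gen1_num_bufs(void) { return 12; }

INIT(gen1_register)
{
	get_bufs_tbl[0] = gen1_num_bufs;	/* slot filled at load time */
}
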

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 532e0fabb3..8a1c6d64e8 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -62,7 +62,9 @@ includes += include_directories(
 )

 if qat_compress
-    foreach f: ['qat_comp_pmd.c', 'qat_comp.c']
+    foreach f: ['qat_comp_pmd.c', 'qat_comp.c',
+            'dev/qat_comp_pmd_gen1.c', 'dev/qat_comp_pmd_gen2.c',
+            'dev/qat_comp_pmd_gen3.c', 'dev/qat_comp_pmd_gen4.c']
         sources += files(join_paths(qat_compress_relpath, f))
     endforeach
 endif
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen1.c b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
new file mode 100644
index 0000000000..12d9d89072
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen1.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+
+#include "qat_comp_pmd.h"
+#include "qat_comp.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN1 12
+
+const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT |
+				RTE_COMP_FF_STATEFUL_DECOMPRESSION,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen1(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	struct qat_comp_dev_private *comp_dev = dev->data->dev_private;
+
+	if (RTE_PMD_QAT_COMP_IM_BUFFER_SIZE == 0) {
+		QAT_LOG(WARNING,
+			"RTE_PMD_QAT_COMP_IM_BUFFER_SIZE = 0 in config file, so"
+			" QAT device can't be used for Dynamic Deflate.");
+	} else {
+		comp_dev->interm_buff_mz =
+				qat_comp_setup_inter_buffers(comp_dev,
+					RTE_PMD_QAT_COMP_IM_BUFFER_SIZE);
+		if (comp_dev->interm_buff_mz == NULL)
+			return -ENOMEM;
+	}
+
+	return qat_comp_dev_config(dev, config);
+}
+
+struct rte_compressdev_ops qat_comp_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen1,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen1_comp_capabilities,
+		.size = sizeof(qat_gen1_comp_capabilities)
+	};
+	return capa_info;
+}
+
+uint16_t
+qat_comp_get_ram_bank_flags_gen1(void)
+{
+	/* Enable A, B, C, D, and E (CAMs). */
+	return ICP_QAT_FW_COMP_RAM_FLAGS_BUILD(
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank I */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank H */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank G */
+			ICP_QAT_FW_COMP_BANK_DISABLED, /* Bank F */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank E */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank D */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank C */
+			ICP_QAT_FW_COMP_BANK_ENABLED,  /* Bank B */
+			ICP_QAT_FW_COMP_BANK_ENABLED); /* Bank A */
+}
+
+int
+qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		__rte_unused enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word)
+{
+	unsigned int algo, comp_level, direction;
+
+	if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+		algo = ICP_QAT_HW_COMPRESSION_ALGO_DEFLATE;
+	else {
+		QAT_LOG(ERR, "compression algorithm not supported");
+		return -EINVAL;
+	}
+
+	if (qat_xform->qat_comp_request_type == QAT_COMP_REQUEST_DECOMPRESS) {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_DECOMPRESS;
+		comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+	} else {
+		direction = ICP_QAT_HW_COMPRESSION_DIR_COMPRESS;
+
+		if (xform->compress.level == RTE_COMP_LEVEL_PMD_DEFAULT)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level == 1)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_1;
+		else if (xform->compress.level == 2)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_4;
+		else if (xform->compress.level == 3)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_8;
+		else if (xform->compress.level >= 4 &&
+			 xform->compress.level <= 9)
+			comp_level = ICP_QAT_HW_COMPRESSION_DEPTH_16;
+		else {
+			QAT_LOG(ERR, "compression level not supported");
+			return -EINVAL;
+		}
+	}
+
+	comp_slice_cfg_word[0] =
+			ICP_QAT_HW_COMPRESSION_CONFIG_BUILD(
+				direction,
+				/* Delayed match: the only valid mode in CPM 1.6 */
+				ICP_QAT_HW_COMPRESSION_DELAYED_MATCH_ENABLED,
+				algo,
+				/* Translate level to depth */
+				comp_level,
+				ICP_QAT_HW_COMPRESSION_FILE_TYPE_0);
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen1(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN1;
+}
+
+uint64_t
+qat_comp_get_features_gen1(void)
+{
+	return RTE_COMPDEV_FF_HW_ACCELERATED;
+}
+
+RTE_INIT(qat_comp_pmd_gen1_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN1].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN1].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
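
For context, the RTE_INIT constructor above only fills the QAT_GEN1 slot of
the qat_comp_gen_dev_ops[] dispatch table; the common probe path then indexes
that table by the detected generation. A minimal consumer-side sketch follows
(simplified and hedged: the function name and error handling are assumed for
illustration, not taken from the driver; assumes qat_comp_pmd.h and errno.h):

/* Simplified view of how the per-generation table is consumed.
 * qat_comp_gen_dev_ops[] is indexed by enum qat_device_gen and is
 * populated by the RTE_INIT constructors before main() runs.
 */
static int
comp_dev_create_sketch(struct qat_pci_device *qat_dev,
		struct rte_compressdev *dev)
{
	const struct qat_comp_gen_dev_ops *ops =
			&qat_comp_gen_dev_ops[qat_dev->qat_dev_gen];

	/* A generation with no registered implementation leaves NULLs. */
	if (ops->compressdev_ops == NULL)
		return -ENOTSUP;

	dev->dev_ops = ops->compressdev_ops;
	dev->feature_flags = ops->qat_comp_get_feature_flags();
	return 0;
}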
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen2.c b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
new file mode 100644
index 0000000000..fd6c966f26
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen2.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN2 20
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen2(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN2;
+}
+
+RTE_INIT(qat_comp_pmd_gen2_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN2].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen2;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN2].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen3.c b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
new file mode 100644
index 0000000000..fccb0941f1
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen3.c
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN3 64
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen3(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN3;
+}
+
+RTE_INIT(qat_comp_pmd_gen3_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN3].compressdev_ops =
+			&qat_comp_ops_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen3;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen1;
+	qat_comp_gen_dev_ops[QAT_GEN3].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
new file mode 100644
index 0000000000..79b2ceb414
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -0,0 +1,213 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+#define QAT_NUM_INTERM_BUFS_GEN4 0
+
+static const struct rte_compressdev_capabilities
+qat_gen4_comp_capabilities[] = {
+	{/* COMPRESSION - deflate */
+	 .algo = RTE_COMP_ALGO_DEFLATE,
+	 .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+				RTE_COMP_FF_CRC32_CHECKSUM |
+				RTE_COMP_FF_ADLER32_CHECKSUM |
+				RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+				RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				RTE_COMP_FF_HUFFMAN_FIXED |
+				RTE_COMP_FF_HUFFMAN_DYNAMIC |
+				RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+				RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+				RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+	 .window_size = {.min = 15, .max = 15, .increment = 0} },
+	{RTE_COMP_ALGO_LIST_END, 0, {0, 0, 0} } };
+
+static int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+		struct rte_compressdev_config *config)
+{
+	/* QAT GEN4 doesn't need preallocated intermediate buffers */
+
+	return qat_comp_dev_config(dev, config);
+}
+
+static struct rte_compressdev_ops qat_comp_ops_gen4 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_comp_dev_config_gen4,
+	.dev_start		= qat_comp_dev_start,
+	.dev_stop		= qat_comp_dev_stop,
+	.dev_close		= qat_comp_dev_close,
+	.dev_infos_get		= qat_comp_dev_info_get,
+
+	.stats_get		= qat_comp_stats_get,
+	.stats_reset		= qat_comp_stats_reset,
+	.queue_pair_setup	= qat_comp_qp_setup,
+	.queue_pair_release	= qat_comp_qp_release,
+
+	/* Compression related operations */
+	.private_xform_create	= qat_comp_private_xform_create,
+	.private_xform_free	= qat_comp_private_xform_free,
+	.stream_create		= qat_comp_stream_create,
+	.stream_free		= qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_comp_capabilities_info capa_info = {
+		.data = qat_gen4_comp_capabilities,
+		.size = sizeof(qat_gen4_comp_capabilities)
+	};
+	return capa_info;
+}
+
+static uint16_t
+qat_comp_get_ram_bank_flags_gen4(void)
+{
+	return 0;
+}
+
+static int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
+{
+	if (qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_FIXED_COMP_STATELESS ||
+	    qat_xform->qat_comp_request_type ==
+			QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+		/* Compression */
+		struct icp_qat_hw_comp_20_config_csr_upper hw_comp_upper_csr;
+		struct icp_qat_hw_comp_20_config_csr_lower hw_comp_lower_csr;
+
+		memset(&hw_comp_upper_csr, 0, sizeof(hw_comp_upper_csr));
+		memset(&hw_comp_lower_csr, 0, sizeof(hw_comp_lower_csr));
+
+		hw_comp_lower_csr.lllbd =
+			ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_DISABLED;
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE) {
+			hw_comp_lower_csr.skip_ctrl =
+				ICP_QAT_HW_COMP_20_BYTE_SKIP_3BYTE_LITERAL;
+
+			if (qat_xform->qat_comp_request_type ==
+				QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS) {
+				hw_comp_lower_csr.algo =
+					ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_ILZ77;
+				hw_comp_lower_csr.lllbd =
+				    ICP_QAT_HW_COMP_20_LLLBD_CTRL_LLLBD_ENABLED;
+			} else {
+				hw_comp_lower_csr.algo =
+				      ICP_QAT_HW_COMP_20_HW_COMP_FORMAT_DEFLATE;
+				hw_comp_upper_csr.scb_ctrl =
+					ICP_QAT_HW_COMP_20_SCB_CONTROL_DISABLE;
+			}
+
+			if (op_type == RTE_COMP_OP_STATEFUL) {
+				hw_comp_upper_csr.som_ctrl =
+				     ICP_QAT_HW_COMP_20_SOM_CONTROL_REPLAY_MODE;
+			}
+		} else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		switch (xform->compress.level) {
+		case 1:
+		case 2:
+		case 3:
+		case 4:
+		case 5:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
+			hw_comp_lower_csr.hash_col =
+			      ICP_QAT_HW_COMP_20_SKIP_HASH_COLLISION_DONT_ALLOW;
+			break;
+		case 6:
+		case 7:
+		case 8:
+		case RTE_COMP_LEVEL_PMD_DEFAULT:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
+			break;
+		case 9:
+		case 10:
+		case 11:
+		case 12:
+			hw_comp_lower_csr.sd =
+					ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
+			break;
+		default:
+			QAT_LOG(ERR, "Compression level not supported");
+			return -EINVAL;
+		}
+
+		hw_comp_lower_csr.abd = ICP_QAT_HW_COMP_20_ABD_ABD_DISABLED;
+		hw_comp_lower_csr.hash_update =
+			ICP_QAT_HW_COMP_20_SKIP_HASH_UPDATE_DONT_ALLOW;
+		hw_comp_lower_csr.edmm =
+		      ICP_QAT_HW_COMP_20_EXTENDED_DELAY_MATCH_MODE_EDMM_ENABLED;
+
+		hw_comp_upper_csr.nice =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_NICE_PARAM_DEFAULT_VAL;
+		hw_comp_upper_csr.lazy =
+			ICP_QAT_HW_COMP_20_CONFIG_CSR_LAZY_PARAM_DEFAULT_VAL;
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_LOWER(
+					hw_comp_lower_csr);
+		comp_slice_cfg_word[1] =
+				ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER(
+					hw_comp_upper_csr);
+	} else {
+		/* Decompression */
+		struct icp_qat_hw_decomp_20_config_csr_lower
+				hw_decomp_lower_csr;
+
+		memset(&hw_decomp_lower_csr, 0, sizeof(hw_decomp_lower_csr));
+
+		if (xform->compress.algo == RTE_COMP_ALGO_DEFLATE)
+			hw_decomp_lower_csr.algo =
+				ICP_QAT_HW_DECOMP_20_HW_DECOMP_FORMAT_DEFLATE;
+		else {
+			QAT_LOG(ERR, "Compression algorithm not supported");
+			return -EINVAL;
+		}
+
+		comp_slice_cfg_word[0] =
+				ICP_QAT_FW_DECOMP_20_BUILD_CONFIG_LOWER(
+					hw_decomp_lower_csr);
+		comp_slice_cfg_word[1] = 0;
+	}
+
+	return 0;
+}
+
+static unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void)
+{
+	return QAT_NUM_INTERM_BUFS_GEN4;
+}
+
+RTE_INIT(qat_comp_pmd_gen4_init)
+{
+	qat_comp_gen_dev_ops[QAT_GEN4].compressdev_ops =
+			&qat_comp_ops_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_capabilities =
+			qat_comp_cap_get_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_num_im_bufs_required =
+			qat_comp_get_num_im_bufs_required_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_ram_bank_flags =
+			qat_comp_get_ram_bank_flags_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_set_slice_cfg_word =
+			qat_comp_set_slice_cfg_word_gen4;
+	qat_comp_gen_dev_ops[QAT_GEN4].qat_comp_get_feature_flags =
+			qat_comp_get_features_gen1;
+}
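
Note the interface change relative to GEN1: the GEN4 path fills two config
words (lower and upper CSR) where GEN1 fills only word 0. The level mapping
above can also be read as a standalone table; a small sketch of it, handy for
a unit test (a hypothetical helper; the constants come from the
icp_qat_hw_gen4 headers included above, RTE_COMP_LEVEL_PMD_DEFAULT from
rte_comp.h):

#include <errno.h>
#include <stdint.h>

/* Mirror of the GEN4 compression-level mapping above:
 * levels 1-5 map to search depth 1, levels 6-8 and the PMD default
 * map to search depth 6, and levels 9-12 map to search depth 9.
 */
static int
gen4_level_to_depth(int level, uint32_t *depth)
{
	if (level >= 1 && level <= 5)
		*depth = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_1;
	else if ((level >= 6 && level <= 8) ||
			level == RTE_COMP_LEVEL_PMD_DEFAULT)
		*depth = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_6;
	else if (level >= 9 && level <= 12)
		*depth = ICP_QAT_HW_COMP_20_SEARCH_DEPTH_LEVEL_9;
	else
		return -EINVAL;
	return 0;
}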
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
new file mode 100644
index 0000000000..67293092ea
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_COMP_PMD_GENS_H_
+#define _QAT_COMP_PMD_GENS_H_
+
+#include <rte_compressdev.h>
+#include <rte_compressdev_pmd.h>
+#include <stdint.h>
+
+#include "qat_comp_pmd.h"
+
+extern const struct rte_compressdev_capabilities qat_gen1_comp_capabilities[];
+
+struct qat_comp_capabilities_info
+qat_comp_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint16_t qat_comp_get_ram_bank_flags_gen1(void);
+
+int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
+		const struct rte_comp_xform *xform,
+		enum rte_comp_op_type op_type,
+		uint32_t *comp_slice_cfg_word);
+
+uint64_t qat_comp_get_features_gen1(void);
+
+extern struct rte_compressdev_ops qat_comp_ops_gen1;
+
+#endif /* _QAT_COMP_PMD_GENS_H_ */
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 7/9] crypto/qat: unified device private data structure
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (5 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 6/9] compress/qat: add gen specific implementation Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 8/9] crypto/qat: define gen specific structs and functions Kai Ji
                                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch unifies the QAT symmetric and asymmetric device
private data structures and functions.
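
In condensed form (fields elided; a summary sketch, not literal code), the
change replaces two near-identical per-service structs with one struct
discriminated by service type:

/* Before: one private struct per service, differing only in naming. */
struct qat_sym_dev_private  { struct qat_pci_device *qat_dev; uint8_t sym_dev_id;  /* ... */ };
struct qat_asym_dev_private { struct qat_pci_device *qat_dev; uint8_t asym_dev_id; /* ... */ };

/* After: a single struct; service_type tells the shared callbacks
 * whether they are acting for the sym or the asym cryptodev.
 */
struct qat_cryptodev_private {
	struct qat_pci_device *qat_dev;
	uint8_t dev_id;
	enum qat_service_type service_type;
	/* capabilities, capa_mz, min_enq_burst_threshold, ... */
};

This is what allows the dev_ops tables of both PMDs to point at the same
qat_cryptodev_* functions introduced below.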

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/common/qat/meson.build       |   2 +-
 drivers/common/qat/qat_common.c      |  15 ++
 drivers/common/qat/qat_common.h      |   3 +
 drivers/common/qat/qat_device.h      |   7 +-
 drivers/crypto/qat/qat_asym_pmd.c    | 216 ++++-------------------
 drivers/crypto/qat/qat_asym_pmd.h    |  29 +---
 drivers/crypto/qat/qat_crypto.c      | 176 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h      |  78 +++++++++
 drivers/crypto/qat/qat_sym_pmd.c     | 250 +++++----------------------
 drivers/crypto/qat/qat_sym_pmd.h     |  21 +--
 drivers/crypto/qat/qat_sym_session.c |  15 +-
 11 files changed, 365 insertions(+), 447 deletions(-)
 create mode 100644 drivers/crypto/qat/qat_crypto.c
 create mode 100644 drivers/crypto/qat/qat_crypto.h

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 8a1c6d64e8..29fd0168ea 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,7 @@ endif

 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/common/qat/qat_common.c b/drivers/common/qat/qat_common.c
index 5343a1451e..59e7e02622 100644
--- a/drivers/common/qat/qat_common.c
+++ b/drivers/common/qat/qat_common.c
@@ -6,6 +6,21 @@
 #include "qat_device.h"
 #include "qat_logs.h"

+const char *
+qat_service_get_str(enum qat_service_type type)
+{
+	switch (type) {
+	case QAT_SERVICE_SYMMETRIC:
+		return "sym";
+	case QAT_SERVICE_ASYMMETRIC:
+		return "asym";
+	case QAT_SERVICE_COMPRESSION:
+		return "comp";
+	default:
+		return "invalid";
+	}
+}
+
 int
 qat_sgl_fill_array(struct rte_mbuf *buf, int64_t offset,
 		void *list_in, uint32_t data_len,
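
A trivial usage sketch of the new helper (illustrative only; the wrapper
function is invented for the example):

#include <stdio.h>

/* Prints e.g. "qp 0 ready for sym service" for a symmetric queue pair. */
static void
log_qp_ready(enum qat_service_type type, unsigned int qp_id)
{
	printf("qp %u ready for %s service\n", qp_id,
			qat_service_get_str(type));
}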
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index a7632e31f8..9411a79301 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -91,4 +91,7 @@ void
 qat_stats_reset(struct qat_pci_device *dev,
 		enum qat_service_type service);

+const char *
+qat_service_get_str(enum qat_service_type type);
+
 #endif /* _QAT_COMMON_H_ */
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index e7c7e9af95..85fae7b7c7 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -76,8 +76,7 @@ struct qat_device_info {

 extern struct qat_device_info qat_pci_devs[];

-struct qat_sym_dev_private;
-struct qat_asym_dev_private;
+struct qat_cryptodev_private;
 struct qat_comp_dev_private;

 /*
@@ -106,14 +105,14 @@ struct qat_pci_device {
 	/**< links to qps set up for each service, index same as on API */

 	/* Data relating to symmetric crypto service */
-	struct qat_sym_dev_private *sym_dev;
+	struct qat_cryptodev_private *sym_dev;
 	/**< link back to cryptodev private data */

 	int qat_sym_driver_id;
 	/**< Symmetric driver id used by this device */

 	/* Data relating to asymmetric crypto service */
-	struct qat_asym_dev_private *asym_dev;
+	struct qat_cryptodev_private *asym_dev;
 	/**< link back to cryptodev private data */

 	int qat_asym_driver_id;
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 0944d27a4d..042f39ddcc 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -6,6 +6,7 @@

 #include "qat_logs.h"

+#include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
 #include "qat_sym_capabilities.h"
@@ -18,190 +19,45 @@ static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
 	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
 };

-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id);
-
-static int qat_asym_dev_config(__rte_unused struct rte_cryptodev *dev,
-			       __rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_asym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_asym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-
-}
-
-static int qat_asym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_asym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_asym_dev_info_get(struct rte_cryptodev *dev,
-				  struct rte_cryptodev_info *info)
-{
-	struct qat_asym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs = qat_qps_per_service(qat_dev,
-							QAT_SERVICE_ASYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_asym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_asym_stats_get(struct rte_cryptodev *dev,
-			       struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_asym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_ASYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_asym_stats_reset(struct rte_cryptodev *dev)
+void
+qat_asym_init_op_cookie(void *op_cookie)
 {
-	struct qat_asym_dev_private *qat_priv;
+	int j;
+	struct qat_asym_op_cookie *cookie = op_cookie;

-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid asymmetric cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
+	cookie->input_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					input_params_ptrs);

-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_ASYMMETRIC);
-}
-
-static int qat_asym_qp_release(struct rte_cryptodev *dev,
-			       uint16_t queue_pair_id)
-{
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release asym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
+	cookie->output_addr = rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_asym_op_cookie,
+					output_params_ptrs);

-static int qat_asym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-			     const struct rte_cryptodev_qp_conf *qp_conf,
-			     int socket_id)
-{
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_asym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-	const struct qat_qp_hw_data *asym_hw_qps =
-			qat_gen_config[qat_private->qat_dev->qat_dev_gen]
-				      .qp_hw_data[QAT_SERVICE_ASYMMETRIC];
-	const struct qat_qp_hw_data *qp_hw_data = asym_hw_qps + qp_id;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_asym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_ASYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qp_hw_data;
-	qat_qp_conf.cookie_size = sizeof(struct qat_asym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "asym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_ASYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-		int j;
-
-		struct qat_asym_op_cookie __rte_unused *cookie =
-				qp->op_cookies[i];
-		cookie->input_addr = rte_mempool_virt2iova(cookie) +
+	for (j = 0; j < 8; j++) {
+		cookie->input_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						input_params_ptrs);
-
-		cookie->output_addr = rte_mempool_virt2iova(cookie) +
+						input_array[j]);
+		cookie->output_params_ptrs[j] =
+				rte_mempool_virt2iova(cookie) +
 				offsetof(struct qat_asym_op_cookie,
-						output_params_ptrs);
-
-		for (j = 0; j < 8; j++) {
-			cookie->input_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							input_array[j]);
-			cookie->output_params_ptrs[j] =
-					rte_mempool_virt2iova(cookie) +
-					offsetof(struct qat_asym_op_cookie,
-							output_array[j]);
-		}
+						output_array[j]);
 	}
-
-	return ret;
 }

-struct rte_cryptodev_ops crypto_qat_ops = {
+static struct rte_cryptodev_ops crypto_qat_ops = {

 	/* Device related operations */
-	.dev_configure		= qat_asym_dev_config,
-	.dev_start		= qat_asym_dev_start,
-	.dev_stop		= qat_asym_dev_stop,
-	.dev_close		= qat_asym_dev_close,
-	.dev_infos_get		= qat_asym_dev_info_get,
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,

-	.stats_get		= qat_asym_stats_get,
-	.stats_reset		= qat_asym_stats_reset,
-	.queue_pair_setup	= qat_asym_qp_setup,
-	.queue_pair_release	= qat_asym_qp_release,
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,

 	/* Crypto related operations */
 	.asym_session_get_size	= qat_asym_session_get_private_size,
@@ -241,15 +97,14 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_asym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_asym_dev_private *internals;
+	struct qat_cryptodev_private *internals;

 	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
 		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
@@ -310,8 +165,9 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,

 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
-	internals->asym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
+	internals->service_type = QAT_SERVICE_ASYMMETRIC;

 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
@@ -347,7 +203,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	rte_cryptodev_pmd_probing_finish(cryptodev);

 	QAT_LOG(DEBUG, "Created QAT ASYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->asym_dev_id);
+			cryptodev->data->name, internals->dev_id);
 	return 0;
 }

@@ -365,7 +221,7 @@ qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev)

 	/* free crypto device */
 	cryptodev = rte_cryptodev_pmd_get_dev(
-			qat_pci_dev->asym_dev->asym_dev_id);
+			qat_pci_dev->asym_dev->dev_id);
 	rte_cryptodev_pmd_destroy(cryptodev);
 	qat_pci_devs[qat_pci_dev->qat_dev_id].asym_rte_dev.name = NULL;
 	qat_pci_dev->asym_dev = NULL;
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index 3b5abddec8..c493796511 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -15,21 +15,8 @@

 extern uint8_t qat_asym_driver_id;

-/** private data structure for a QAT device.
- * This QAT device is a device offering only asymmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_asym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t asym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device asymmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-};
+void
+qat_asym_init_op_cookie(void *op_cookie);

 uint16_t
 qat_asym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
@@ -39,16 +26,4 @@ uint16_t
 qat_asym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 			      uint16_t nb_ops);

-int qat_asym_session_configure(struct rte_cryptodev *dev,
-		struct rte_crypto_asym_xform *xform,
-		struct rte_cryptodev_asym_session *sess,
-		struct rte_mempool *mempool);
-
-int
-qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
-		struct qat_dev_cmd_param *qat_dev_cmd_param);
-
-int
-qat_asym_dev_destroy(struct qat_pci_device *qat_pci_dev);
-
 #endif /* _QAT_ASYM_PMD_H_ */
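
The per-descriptor initialisation that previously lived in the asym qp_setup
loop now sits behind qat_asym_init_op_cookie(), invoked once per descriptor by
the shared qp_setup in qat_crypto.c below. The address arithmetic it relies on
is a general pattern; a self-contained sketch with an invented struct (the
real cookie layout lives in qat_asym.h):

#include <stddef.h>
#include <stdint.h>
#include <rte_mempool.h>

struct demo_cookie {
	uint64_t array_phys;   /* device-visible address of 'array' */
	uint8_t  array[64];
};

/* The pattern: the IOVA of the cookie itself plus the field offset
 * yields the IOVA of an embedded field, valid because mempool objects
 * are physically contiguous.
 */
static void
demo_cookie_init(void *op_cookie)
{
	struct demo_cookie *c = op_cookie;

	c->array_phys = rte_mempool_virt2iova(c) +
			offsetof(struct demo_cookie, array);
}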
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
new file mode 100644
index 0000000000..84c26a8062
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -0,0 +1,176 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "qat_crypto.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+
+int
+qat_cryptodev_config(__rte_unused struct rte_cryptodev *dev,
+		__rte_unused struct rte_cryptodev_config *config)
+{
+	return 0;
+}
+
+int
+qat_cryptodev_start(__rte_unused struct rte_cryptodev *dev)
+{
+	return 0;
+}
+
+void
+qat_cryptodev_stop(__rte_unused struct rte_cryptodev *dev)
+{
+}
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev)
+{
+	int i, ret;
+
+	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
+		ret = dev->dev_ops->queue_pair_release(dev, i);
+		if (ret < 0)
+			return ret;
+	}
+
+	return 0;
+}
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	if (info != NULL) {
+		info->max_nb_queue_pairs =
+			qat_qps_per_service(qat_dev, service_type);
+		info->feature_flags = dev->feature_flags;
+		info->capabilities = qat_private->qat_dev_capabilities;
+		if (service_type == QAT_SERVICE_ASYMMETRIC)
+			info->driver_id = qat_asym_driver_id;
+
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			info->driver_id = qat_sym_driver_id;
+		/* No limit on the number of sessions */
+		info->sym.max_nb_sessions = 0;
+	}
+}
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	struct qat_common_stats qat_stats = {0};
+	struct qat_cryptodev_private *qat_priv;
+
+	if (stats == NULL || dev == NULL) {
+		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_get(qat_priv->qat_dev, &qat_stats, qat_priv->service_type);
+	stats->enqueued_count = qat_stats.enqueued_count;
+	stats->dequeued_count = qat_stats.dequeued_count;
+	stats->enqueue_err_count = qat_stats.enqueue_err_count;
+	stats->dequeue_err_count = qat_stats.dequeue_err_count;
+}
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev)
+{
+	struct qat_cryptodev_private *qat_priv;
+
+	if (dev == NULL) {
+		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
+		return;
+	}
+	qat_priv = dev->data->dev_private;
+
+	qat_stats_reset(qat_priv->qat_dev, qat_priv->service_type);
+
+}
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_device_gen qat_dev_gen = qat_dev->qat_dev_gen;
+	enum qat_service_type service_type = qat_private->service_type;
+
+	QAT_LOG(DEBUG, "Release %s qp %u on device %d",
+			qat_service_get_str(service_type),
+			queue_pair_id, dev->data->dev_id);
+
+	qat_private->qat_dev->qps_in_use[service_type][queue_pair_id] = NULL;
+
+	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
+			&(dev->data->queue_pairs[queue_pair_id]));
+}
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_qp **qp_addr =
+			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
+	struct qat_cryptodev_private *qat_private = dev->data->dev_private;
+	struct qat_pci_device *qat_dev = qat_private->qat_dev;
+	enum qat_service_type service_type = qat_private->service_type;
+	struct qat_qp_config qat_qp_conf = {0};
+	struct qat_qp *qp;
+	int ret = 0;
+	uint32_t i;
+
+	/* If qp is already in use free ring memory and qp metadata. */
+	if (*qp_addr != NULL) {
+		ret = dev->dev_ops->queue_pair_release(dev, qp_id);
+		if (ret < 0)
+			return -EBUSY;
+	}
+	if (qp_id >= qat_qps_per_service(qat_dev, service_type)) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, service_type,
+			qp_id);
+	if (qat_qp_conf.hw == NULL) {
+		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
+		return -EINVAL;
+	}
+
+	qat_qp_conf.cookie_size = service_type == QAT_SERVICE_SYMMETRIC ?
+			sizeof(struct qat_sym_op_cookie) :
+			sizeof(struct qat_asym_op_cookie);
+	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
+	qat_qp_conf.socket_id = socket_id;
+	qat_qp_conf.service_str = qat_service_get_str(service_type);
+
+	ret = qat_qp_setup(qat_dev, qp_addr, qp_id, &qat_qp_conf);
+	if (ret != 0)
+		return ret;
+
+	/* store a link to the qp in the qat_pci_device */
+	qat_dev->qps_in_use[service_type][qp_id] = *qp_addr;
+
+	qp = (struct qat_qp *)*qp_addr;
+	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
+
+	for (i = 0; i < qp->nb_descriptors; i++) {
+		if (service_type == QAT_SERVICE_SYMMETRIC)
+			qat_sym_init_op_cookie(qp->op_cookies[i]);
+		else
+			qat_asym_init_op_cookie(qp->op_cookies[i]);
+	}
+
+	return ret;
+}
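
Nothing changes for applications: both services still reach this shared
handler through the standard cryptodev API. A minimal usage sketch (a hedged
example; descriptor count is arbitrary and the session mempool is left unset
for brevity):

#include <rte_cryptodev.h>

/* Configure queue pair 0 on dev_id; the PMD routes this call to
 * qat_cryptodev_qp_setup() for both sym and asym QAT devices.
 */
static int
setup_one_qp(uint8_t dev_id, int socket_id)
{
	struct rte_cryptodev_qp_conf qp_conf = {
		.nb_descriptors = 4096,
		/* .mp_session would be set for session-based operation */
	};

	return rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
			socket_id);
}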
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
new file mode 100644
index 0000000000..3803fef19d
--- /dev/null
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_H_
+#define _QAT_CRYPTO_H_
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security.h>
+#endif
+
+#include "qat_device.h"
+
+extern uint8_t qat_sym_driver_id;
+extern uint8_t qat_asym_driver_id;
+
+/** Helper macro to set a cryptodev capability range. */
+#define CAP_RNG(n, l, r, i) .n = {.min = l, .max = r, .increment = i}
+
+#define CAP_RNG_ZERO(n) .n = {.min = 0, .max = 0, .increment = 0}
+/** Helper macro to set a cryptodev capability value. */
+#define CAP_SET(n, v) .n = v
+
+/** private data structure for a QAT device.
+ * there can be one of these on each qat_pci_device (VF).
+ */
+struct qat_cryptodev_private {
+	struct qat_pci_device *qat_dev;
+	/**< The qat pci device hosting the service */
+	uint8_t dev_id;
+	/**< Device instance for this rte_cryptodev */
+	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
+	/* QAT device symmetric crypto capabilities */
+	const struct rte_memzone *capa_mz;
+	/* Shared memzone for storing capabilities */
+	uint16_t min_enq_burst_threshold;
+	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	enum qat_service_type service_type;
+};
+
+struct qat_capabilities_info {
+	struct rte_cryptodev_capabilities *data;
+	uint64_t size;
+};
+
+int
+qat_cryptodev_config(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+
+int
+qat_cryptodev_start(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_stop(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_close(struct rte_cryptodev *dev);
+
+void
+qat_cryptodev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info);
+
+void
+qat_cryptodev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+
+void
+qat_cryptodev_stats_reset(struct rte_cryptodev *dev);
+
+int
+qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+	const struct rte_cryptodev_qp_conf *qp_conf, int socket_id);
+
+int
+qat_cryptodev_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id);
+
+#endif /* _QAT_CRYPTO_H_ */
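
These helpers let the gen-specific capability tables (added later in the
series) stay one line per field; an illustrative expansion, mirroring the
SHA1 entry from the old static table (assumes rte_cryptodev.h and the
qat_crypto.h above):

/* CAP_RNG(digest_size, 1, 20, 1) expands to
 *   .digest_size = {.min = 1, .max = 20, .increment = 1}
 * so a capability entry can be written compactly:
 */
static const struct rte_cryptodev_capabilities demo_capa[] = {
	{
		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
		{.sym = {
			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
			{.auth = {
				.algo = RTE_CRYPTO_AUTH_SHA1,
				.block_size = 64,
				CAP_RNG_ZERO(key_size),
				CAP_RNG(digest_size, 1, 20, 1),
				CAP_RNG_ZERO(iv_size),
			}, }
		}, }
	},
	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
};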
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index 5b8ee4bee6..dec877cfab 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -13,6 +13,7 @@
 #endif

 #include "qat_logs.h"
+#include "qat_crypto.h"
 #include "qat_sym.h"
 #include "qat_sym_session.h"
 #include "qat_sym_pmd.h"
@@ -59,213 +60,19 @@ static const struct rte_security_capability qat_security_capabilities[] = {
 };
 #endif

-static int qat_sym_qp_release(struct rte_cryptodev *dev,
-	uint16_t queue_pair_id);
-
-static int qat_sym_dev_config(__rte_unused struct rte_cryptodev *dev,
-		__rte_unused struct rte_cryptodev_config *config)
-{
-	return 0;
-}
-
-static int qat_sym_dev_start(__rte_unused struct rte_cryptodev *dev)
-{
-	return 0;
-}
-
-static void qat_sym_dev_stop(__rte_unused struct rte_cryptodev *dev)
-{
-	return;
-}
-
-static int qat_sym_dev_close(struct rte_cryptodev *dev)
-{
-	int i, ret;
-
-	for (i = 0; i < dev->data->nb_queue_pairs; i++) {
-		ret = qat_sym_qp_release(dev, i);
-		if (ret < 0)
-			return ret;
-	}
-
-	return 0;
-}
-
-static void qat_sym_dev_info_get(struct rte_cryptodev *dev,
-			struct rte_cryptodev_info *info)
-{
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = internals->qat_dev;
-
-	if (info != NULL) {
-		info->max_nb_queue_pairs =
-			qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC);
-		info->feature_flags = dev->feature_flags;
-		info->capabilities = internals->qat_dev_capabilities;
-		info->driver_id = qat_sym_driver_id;
-		/* No limit of number of sessions */
-		info->sym.max_nb_sessions = 0;
-	}
-}
-
-static void qat_sym_stats_get(struct rte_cryptodev *dev,
-		struct rte_cryptodev_stats *stats)
-{
-	struct qat_common_stats qat_stats = {0};
-	struct qat_sym_dev_private *qat_priv;
-
-	if (stats == NULL || dev == NULL) {
-		QAT_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_get(qat_priv->qat_dev, &qat_stats, QAT_SERVICE_SYMMETRIC);
-	stats->enqueued_count = qat_stats.enqueued_count;
-	stats->dequeued_count = qat_stats.dequeued_count;
-	stats->enqueue_err_count = qat_stats.enqueue_err_count;
-	stats->dequeue_err_count = qat_stats.dequeue_err_count;
-}
-
-static void qat_sym_stats_reset(struct rte_cryptodev *dev)
-{
-	struct qat_sym_dev_private *qat_priv;
-
-	if (dev == NULL) {
-		QAT_LOG(ERR, "invalid cryptodev ptr %p", dev);
-		return;
-	}
-	qat_priv = dev->data->dev_private;
-
-	qat_stats_reset(qat_priv->qat_dev, QAT_SERVICE_SYMMETRIC);
-
-}
-
-static int qat_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
-{
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	enum qat_device_gen qat_dev_gen = qat_private->qat_dev->qat_dev_gen;
-
-	QAT_LOG(DEBUG, "Release sym qp %u on device %d",
-				queue_pair_id, dev->data->dev_id);
-
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][queue_pair_id]
-						= NULL;
-
-	return qat_qp_release(qat_dev_gen, (struct qat_qp **)
-			&(dev->data->queue_pairs[queue_pair_id]));
-}
-
-static int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-	const struct rte_cryptodev_qp_conf *qp_conf,
-	int socket_id)
-{
-	struct qat_qp *qp;
-	int ret = 0;
-	uint32_t i;
-	struct qat_qp_config qat_qp_conf;
-	struct qat_qp **qp_addr =
-			(struct qat_qp **)&(dev->data->queue_pairs[qp_id]);
-	struct qat_sym_dev_private *qat_private = dev->data->dev_private;
-	struct qat_pci_device *qat_dev = qat_private->qat_dev;
-
-	/* If qp is already in use free ring memory and qp metadata. */
-	if (*qp_addr != NULL) {
-		ret = qat_sym_qp_release(dev, qp_id);
-		if (ret < 0)
-			return ret;
-	}
-	if (qp_id >= qat_qps_per_service(qat_dev, QAT_SERVICE_SYMMETRIC)) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.hw = qat_qp_get_hw_data(qat_dev, QAT_SERVICE_SYMMETRIC,
-			qp_id);
-	if (qat_qp_conf.hw == NULL) {
-		QAT_LOG(ERR, "qp_id %u invalid for this device", qp_id);
-		return -EINVAL;
-	}
-
-	qat_qp_conf.cookie_size = sizeof(struct qat_sym_op_cookie);
-	qat_qp_conf.nb_descriptors = qp_conf->nb_descriptors;
-	qat_qp_conf.socket_id = socket_id;
-	qat_qp_conf.service_str = "sym";
-
-	ret = qat_qp_setup(qat_private->qat_dev, qp_addr, qp_id, &qat_qp_conf);
-	if (ret != 0)
-		return ret;
-
-	/* store a link to the qp in the qat_pci_device */
-	qat_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id]
-							= *qp_addr;
-
-	qp = (struct qat_qp *)*qp_addr;
-	qp->min_enq_burst_threshold = qat_private->min_enq_burst_threshold;
-
-	for (i = 0; i < qp->nb_descriptors; i++) {
-
-		struct qat_sym_op_cookie *cookie =
-				qp->op_cookies[i];
-
-		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_src);
-
-		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				qat_sgl_dst);
-
-		cookie->opt.spc_gmac.cd_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_sym_op_cookie,
-				opt.spc_gmac.cd_cipher);
-
-	}
-
-	/* Get fw version from QAT (GEN2), skip if we've got it already */
-	if (qp->qat_dev_gen == QAT_GEN2 && !(qat_private->internal_capabilities
-			& QAT_SYM_CAP_VALID)) {
-		ret = qat_cq_get_fw_version(qp);
-
-		if (ret < 0) {
-			qat_sym_qp_release(dev, qp_id);
-			return ret;
-		}
-
-		if (ret != 0)
-			QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
-					(ret >> 24) & 0xff,
-					(ret >> 16) & 0xff,
-					(ret >> 8) & 0xff);
-		else
-			QAT_LOG(DEBUG, "unknown QAT firmware version");
-
-		/* set capabilities based on the fw version */
-		qat_private->internal_capabilities = QAT_SYM_CAP_VALID |
-				((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
-						QAT_SYM_CAP_MIXED_CRYPTO : 0);
-		ret = 0;
-	}
-
-	return ret;
-}
-
 static struct rte_cryptodev_ops crypto_qat_ops = {

 		/* Device related operations */
-		.dev_configure		= qat_sym_dev_config,
-		.dev_start		= qat_sym_dev_start,
-		.dev_stop		= qat_sym_dev_stop,
-		.dev_close		= qat_sym_dev_close,
-		.dev_infos_get		= qat_sym_dev_info_get,
+		.dev_configure		= qat_cryptodev_config,
+		.dev_start		= qat_cryptodev_start,
+		.dev_stop		= qat_cryptodev_stop,
+		.dev_close		= qat_cryptodev_close,
+		.dev_infos_get		= qat_cryptodev_info_get,

-		.stats_get		= qat_sym_stats_get,
-		.stats_reset		= qat_sym_stats_reset,
-		.queue_pair_setup	= qat_sym_qp_setup,
-		.queue_pair_release	= qat_sym_qp_release,
+		.stats_get		= qat_cryptodev_stats_get,
+		.stats_reset		= qat_cryptodev_stats_reset,
+		.queue_pair_setup	= qat_cryptodev_qp_setup,
+		.queue_pair_release	= qat_cryptodev_qp_release,

 		/* Crypto related operations */
 		.sym_session_get_size	= qat_sym_session_get_private_size,
@@ -295,6 +102,27 @@ static struct rte_security_ops security_qat_ops = {
 };
 #endif

+void
+qat_sym_init_op_cookie(void *op_cookie)
+{
+	struct qat_sym_op_cookie *cookie = op_cookie;
+
+	cookie->qat_sgl_src_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_src);
+
+	cookie->qat_sgl_dst_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			qat_sgl_dst);
+
+	cookie->opt.spc_gmac.cd_phys_addr =
+			rte_mempool_virt2iova(cookie) +
+			offsetof(struct qat_sym_op_cookie,
+			opt.spc_gmac.cd_cipher);
+}
+
 static uint16_t
 qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -330,15 +158,14 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];

 	struct rte_cryptodev_pmd_init_params init_params = {
-			.name = "",
-			.socket_id =
-				qat_dev_instance->pci_dev->device.numa_node,
-			.private_data_size = sizeof(struct qat_sym_dev_private)
+		.name = "",
+		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
+		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
-	struct qat_sym_dev_private *internals;
+	struct qat_cryptodev_private *internals;
 	const struct rte_cryptodev_capabilities *capabilities;
 	uint64_t capa_size;

@@ -424,8 +251,9 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,

 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
+	internals->service_type = QAT_SERVICE_SYMMETRIC;

-	internals->sym_dev_id = cryptodev->data->dev_id;
+	internals->dev_id = cryptodev->data->dev_id;
 	switch (qat_pci_dev->qat_dev_gen) {
 	case QAT_GEN1:
 		capabilities = qat_gen1_sym_capabilities;
@@ -480,7 +308,7 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,

 	qat_pci_dev->sym_dev = internals;
 	QAT_LOG(DEBUG, "Created QAT SYM device %s as cryptodev instance %d",
-			cryptodev->data->name, internals->sym_dev_id);
+			cryptodev->data->name, internals->dev_id);

 	rte_cryptodev_pmd_probing_finish(cryptodev);

@@ -511,7 +339,7 @@ qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev)
 		rte_memzone_free(qat_pci_dev->sym_dev->capa_mz);

 	/* free crypto device */
-	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->sym_dev_id);
+	cryptodev = rte_cryptodev_pmd_get_dev(qat_pci_dev->sym_dev->dev_id);
 #ifdef RTE_LIB_SECURITY
 	rte_free(cryptodev->security_ctx);
 	cryptodev->security_ctx = NULL;
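
The GEN2 firmware-version probe deleted above is expected to resurface in the
gen-specific code later in the series; for reference, qat_cq_get_fw_version()
packs the version into one word, which the removed code decoded as in this
sketch (the wrapper function is invented for illustration):

#include <stdio.h>

/* A positive return carries major/minor/patch in bytes 3..1;
 * zero means the firmware did not report a version.
 */
static void
print_fw_version(int ret)
{
	if (ret > 0)
		printf("QAT firmware version: %d.%d.%d\n",
				(ret >> 24) & 0xff,
				(ret >> 16) & 0xff,
				(ret >> 8) & 0xff);
	else
		printf("unknown QAT firmware version\n");
}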
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index e0992cbe27..d49b732ca0 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -14,6 +14,7 @@
 #endif

 #include "qat_sym_capabilities.h"
+#include "qat_crypto.h"
 #include "qat_device.h"

 /** Intel(R) QAT Symmetric Crypto PMD driver name */
@@ -25,23 +26,6 @@

 extern uint8_t qat_sym_driver_id;

-/** private data structure for a QAT device.
- * This QAT device is a device offering only symmetric crypto service,
- * there can be one of these on each qat_pci_device (VF).
- */
-struct qat_sym_dev_private {
-	struct qat_pci_device *qat_dev;
-	/**< The qat pci device hosting the service */
-	uint8_t sym_dev_id;
-	/**< Device instance for this rte_cryptodev */
-	const struct rte_cryptodev_capabilities *qat_dev_capabilities;
-	/* QAT device symmetric crypto capabilities */
-	const struct rte_memzone *capa_mz;
-	/* Shared memzone for storing capabilities */
-	uint16_t min_enq_burst_threshold;
-	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
-};
-
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
@@ -49,5 +33,8 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 int
 qat_sym_dev_destroy(struct qat_pci_device *qat_pci_dev);

+void
+qat_sym_init_op_cookie(void *op_cookie);
+
 #endif
 #endif /* _QAT_SYM_PMD_H_ */
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 3f2f6736fc..8ca475ca8b 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -131,7 +131,7 @@ bpi_cipher_ctx_init(enum rte_crypto_cipher_algorithm cryptodev_algo,

 static int
 qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -152,7 +152,7 @@ qat_is_cipher_alg_supported(enum rte_crypto_cipher_algorithm algo,

 static int
 qat_is_auth_alg_supported(enum rte_crypto_auth_algorithm algo,
-		struct qat_sym_dev_private *internals)
+		struct qat_cryptodev_private *internals)
 {
 	int i = 0;
 	const struct rte_cryptodev_capabilities *capability;
@@ -267,7 +267,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform,
 		struct qat_sym_session *session)
 {
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	struct rte_crypto_cipher_xform *cipher_xform = NULL;
 	enum qat_device_gen qat_dev_gen =
 				internals->qat_dev->qat_dev_gen;
@@ -532,7 +532,8 @@ static void
 qat_sym_session_handle_mixed(const struct rte_cryptodev *dev,
 		struct qat_sym_session *session)
 {
-	const struct qat_sym_dev_private *qat_private = dev->data->dev_private;
+	const struct qat_cryptodev_private *qat_private =
+			dev->data->dev_private;
 	enum qat_device_gen min_dev_gen = (qat_private->internal_capabilities &
 			QAT_SYM_CAP_MIXED_CRYPTO) ? QAT_GEN2 : QAT_GEN3;

@@ -564,7 +565,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 		struct rte_crypto_sym_xform *xform, void *session_private)
 {
 	struct qat_sym_session *session = session_private;
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
 	int ret;
 	int qat_cmd_id;
@@ -707,7 +708,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
 				struct qat_sym_session *session)
 {
 	struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
-	struct qat_sym_dev_private *internals = dev->data->dev_private;
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
 	const uint8_t *key_data = auth_xform->key.data;
 	uint8_t key_length = auth_xform->key.length;
 	enum qat_device_gen qat_dev_gen =
@@ -875,7 +876,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
 {
 	struct rte_crypto_aead_xform *aead_xform = &xform->aead;
 	enum rte_crypto_auth_operation crypto_operation;
-	struct qat_sym_dev_private *internals =
+	struct qat_cryptodev_private *internals =
 			dev->data->dev_private;
 	enum qat_device_gen qat_dev_gen =
 			internals->qat_dev->qat_dev_gen;
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 8/9] crypto/qat: define gen specific structs and functions
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (6 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 7/9] crypto/qat: unified device private data structure Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation Kai Ji
  2021-11-04 11:44                 ` [dpdk-dev] [EXT] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Akhil Goyal
  9 siblings, 0 replies; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch adds the symmetric and asymmetric crypto data
structures and function prototypes for different QAT
generations.
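
The registration pattern this enables mirrors the comp and device changes
earlier in the series: each generation file fills its slot in
qat_sym_gen_dev_ops[] (and qat_asym_gen_dev_ops[]) from an RTE_INIT
constructor. An illustrative sketch (the gen3 function names here are assumed
for the example, not defined in this patch):

/* Hypothetical gen-specific file, e.g. qat_crypto_pmd_gen3.c: */
RTE_INIT(qat_sym_crypto_gen3_init)
{
	struct qat_crypto_gen_dev_ops *ops = &qat_sym_gen_dev_ops[QAT_GEN3];

	ops->cryptodev_ops = &qat_sym_crypto_ops_gen1;  /* reuse gen1 ops */
	ops->get_capabilities = qat_sym_crypto_cap_get_gen3;
	ops->get_feature_flags = qat_sym_crypto_feature_flags_get_gen1;
}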

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
 drivers/crypto/qat/README                  |    7 -
 drivers/crypto/qat/meson.build             |   32 -
 drivers/crypto/qat/qat_asym_capabilities.h |   63 -
 drivers/crypto/qat/qat_asym_pmd.c          |   64 +-
 drivers/crypto/qat/qat_asym_pmd.h          |   25 +
 drivers/crypto/qat/qat_crypto.h            |   16 +
 drivers/crypto/qat/qat_sym_capabilities.h  | 1248 --------------------
 drivers/crypto/qat/qat_sym_pmd.c           |  186 +--
 drivers/crypto/qat/qat_sym_pmd.h           |   57 +-
 9 files changed, 167 insertions(+), 1531 deletions(-)
 delete mode 100644 drivers/crypto/qat/README
 delete mode 100644 drivers/crypto/qat/meson.build
 delete mode 100644 drivers/crypto/qat/qat_asym_capabilities.h
 delete mode 100644 drivers/crypto/qat/qat_sym_capabilities.h

diff --git a/drivers/crypto/qat/README b/drivers/crypto/qat/README
deleted file mode 100644
index 444ae605f0..0000000000
--- a/drivers/crypto/qat/README
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2015-2018 Intel Corporation
-
-Makefile for crypto QAT PMD is in common/qat directory.
-The build for the QAT driver is done from there as only one library is built for the
-whole QAT pci device and that library includes all the services (crypto, compression)
-which are enabled on the device.
diff --git a/drivers/crypto/qat/meson.build b/drivers/crypto/qat/meson.build
deleted file mode 100644
index c7c7daf3ac..0000000000
--- a/drivers/crypto/qat/meson.build
+++ /dev/null
@@ -1,32 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017-2018 Intel Corporation
-
-if is_windows
-    build = false
-    reason = 'not supported on Windows'
-    subdir_done()
-endif
-
-# this does not build the QAT driver, instead that is done in the compression
-# driver which comes later. Here we just add our sources files to the list
-build = false
-reason = '' # sentinal value to suppress printout
-dep = dependency('libcrypto', required: false, method: 'pkg-config')
-qat_includes += include_directories('.')
-qat_deps += 'cryptodev'
-qat_deps += 'net'
-qat_deps += 'security'
-if dep.found()
-    # Add our sources files to the list
-    qat_sources += files(
-            'qat_asym.c',
-            'qat_asym_pmd.c',
-            'qat_sym.c',
-            'qat_sym_hw_dp.c',
-            'qat_sym_pmd.c',
-            'qat_sym_session.c',
-    )
-    qat_ext_deps += dep
-    qat_cflags += '-DBUILD_QAT_SYM'
-    qat_cflags += '-DBUILD_QAT_ASYM'
-endif
diff --git a/drivers/crypto/qat/qat_asym_capabilities.h b/drivers/crypto/qat/qat_asym_capabilities.h
deleted file mode 100644
index 523b4da6d3..0000000000
--- a/drivers/crypto/qat/qat_asym_capabilities.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2019 Intel Corporation
- */
-
-#ifndef _QAT_ASYM_CAPABILITIES_H_
-#define _QAT_ASYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_ASYM_CAPABILITIES						\
-	{	/* modexp */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* modinv */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_MODINV,	\
-				.op_types = 0,					\
-				{						\
-				.modlen = {					\
-				.min = 1,					\
-				.max = 512,					\
-				.increment = 1					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	},									\
-	{	/* RSA */							\
-		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,				\
-		{.asym = {							\
-			.xform_capa = {						\
-				.xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,	\
-				.op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |	\
-					(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),	\
-				{						\
-				.modlen = {					\
-				/* min length is based on openssl rsa keygen */	\
-				.min = 64,					\
-				/* value 0 symbolizes no limit on max length */	\
-				.max = 512,					\
-				.increment = 64					\
-				}, }						\
-			}							\
-		},								\
-		}								\
-	}									\
-
-#endif /* _QAT_ASYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_asym_pmd.c b/drivers/crypto/qat/qat_asym_pmd.c
index 042f39ddcc..addee384e3 100644
--- a/drivers/crypto/qat/qat_asym_pmd.c
+++ b/drivers/crypto/qat/qat_asym_pmd.c
@@ -9,15 +9,9 @@
 #include "qat_crypto.h"
 #include "qat_asym.h"
 #include "qat_asym_pmd.h"
-#include "qat_sym_capabilities.h"
-#include "qat_asym_capabilities.h"

 uint8_t qat_asym_driver_id;
-
-static const struct rte_cryptodev_capabilities qat_gen1_asym_capabilities[] = {
-	QAT_BASE_GEN1_ASYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
+struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[QAT_N_GENS];

 void
 qat_asym_init_op_cookie(void *op_cookie)
@@ -101,23 +95,26 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
 		.private_data_size = sizeof(struct qat_cryptodev_private)
 	};
+	struct qat_capabilities_info capa_info;
+	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_asym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	uint64_t capa_size;

-	if (qat_pci_dev->qat_dev_gen == QAT_GEN4) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT 4xxx");
-		return -EFAULT;
-	}
-	if (qat_pci_dev->qat_dev_gen == QAT_GEN3) {
-		QAT_LOG(ERR, "Asymmetric crypto PMD not supported on QAT c4xxx");
-		return -EFAULT;
-	}
 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "asym");
 	QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);

+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
 		qat_pci_dev->qat_asym_driver_id =
 				qat_asym_driver_id;
@@ -150,11 +147,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	cryptodev->enqueue_burst = qat_asym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_asym_pmd_dequeue_op_burst;

-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
-			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -166,27 +160,29 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->dev_id = cryptodev->data->dev_id;
-	internals->qat_dev_capabilities = qat_gen1_asym_capabilities;
 	internals->service_type = QAT_SERVICE_ASYMMETRIC;

+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;
+
 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-			sizeof(qat_gen1_asym_capabilities),
-			rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying PMD for %s",
-			name);
-		rte_cryptodev_pmd_destroy(cryptodev);
-		memset(&qat_dev_instance->asym_rte_dev, 0,
-			sizeof(qat_dev_instance->asym_rte_dev));
-		return -EFAULT;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating memzone for capabilities, "
+				"destroying PMD for %s",
+				name);
+			rte_cryptodev_pmd_destroy(cryptodev);
+			memset(&qat_dev_instance->asym_rte_dev, 0,
+				sizeof(qat_dev_instance->asym_rte_dev));
+			return -EFAULT;
+		}
 	}

-	memcpy(internals->capa_mz->addr, qat_gen1_asym_capabilities,
-			sizeof(qat_gen1_asym_capabilities));
+	memcpy(internals->capa_mz->addr, capabilities, capa_size);
 	internals->qat_dev_capabilities = internals->capa_mz->addr;

 	while (1) {
diff --git a/drivers/crypto/qat/qat_asym_pmd.h b/drivers/crypto/qat/qat_asym_pmd.h
index c493796511..fd6b406248 100644
--- a/drivers/crypto/qat/qat_asym_pmd.h
+++ b/drivers/crypto/qat/qat_asym_pmd.h
@@ -7,14 +7,39 @@
 #define _QAT_ASYM_PMD_H_

 #include <rte_cryptodev.h>
+#include "qat_crypto.h"
 #include "qat_device.h"

 /** Intel(R) QAT Asymmetric Crypto PMD driver name */
 #define CRYPTODEV_NAME_QAT_ASYM_PMD	crypto_qat_asym


+/**
+ * Helper macro to add an asym capability
+ * <name> <op type> <modlen (min, max, increment)>
+ **/
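+/* e.g. (from the gen1 asym capability table later in this series):
+ *	QAT_ASYM_CAP(MODEX, 0, 1, 512, 1)
+ */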
+#define QAT_ASYM_CAP(n, o, l, r, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,			\
+		{.asym = {						\
+			.xform_capa = {					\
+				.xform_type = RTE_CRYPTO_ASYM_XFORM_##n,\
+				.op_types = o,				\
+				{					\
+				.modlen = {				\
+				.min = l,				\
+				.max = r,				\
+				.increment = i				\
+				}, }					\
+			}						\
+		},							\
+		}							\
+	}
+
 extern uint8_t qat_asym_driver_id;

+extern struct qat_crypto_gen_dev_ops qat_asym_gen_dev_ops[];
+
 void
 qat_asym_init_op_cookie(void *op_cookie);

diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 3803fef19d..0a8afb0b31 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -44,6 +44,22 @@ struct qat_capabilities_info {
 	uint64_t size;
 };

+typedef struct qat_capabilities_info (*get_capabilities_info_t)
+			(struct qat_pci_device *qat_dev);
+
+typedef uint64_t (*get_feature_flags_t)(struct qat_pci_device *qat_dev);
+
+typedef void * (*create_security_ctx_t)(void *cryptodev);
+
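+/*
+ * Per-generation ops table. Each generation fills one entry of
+ * qat_sym_gen_dev_ops[] / qat_asym_gen_dev_ops[] from an RTE_INIT
+ * constructor; a NULL cryptodev_ops entry means the service is not
+ * supported on that generation.
+ */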
+struct qat_crypto_gen_dev_ops {
+	get_feature_flags_t get_feature_flags;
+	get_capabilities_info_t get_capabilities;
+	struct rte_cryptodev_ops *cryptodev_ops;
+#ifdef RTE_LIB_SECURITY
+	create_security_ctx_t create_security_ctx;
+#endif
+};
+
 int
 qat_cryptodev_config(struct rte_cryptodev *dev,
 		struct rte_cryptodev_config *config);
diff --git a/drivers/crypto/qat/qat_sym_capabilities.h b/drivers/crypto/qat/qat_sym_capabilities.h
deleted file mode 100644
index cfb176ca94..0000000000
--- a/drivers/crypto/qat/qat_sym_capabilities.h
+++ /dev/null
@@ -1,1248 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017-2019 Intel Corporation
- */
-
-#ifndef _QAT_SYM_CAPABILITIES_H_
-#define _QAT_SYM_CAPABILITIES_H_
-
-#define QAT_BASE_GEN1_SYM_CAPABILITIES					\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* MD5 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 16,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UIA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XTS */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_XTS,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 64,			\
-					.increment = 32			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SNOW 3G (UEA2) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F8) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_KASUMI_F8,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{       /* KASUMI (F9) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_KASUMI_F9,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* 3DES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_3DES_CTR,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 24,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_CBC,	\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* DES DOCSISBPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_DES_DOCSISBPI,\
-				.block_size = 8,			\
-				.key_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 8,			\
-					.max = 8,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN2_SYM_CAPABILITIES					\
-	{	/* ZUC (EEA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* ZUC (EIA3) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_ZUC_EIA3,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 4,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_EXTRA_GEN3_SYM_CAPABILITIES					\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_BASE_GEN4_SYM_CAPABILITIES					\
-	{	/* AES CBC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CBC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA1 HMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,	\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 HMAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,	\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 1,			\
-					.max = 128,			\
-					.increment = 1			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES XCBC MAC */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-				.aad_size = { 0 },			\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CMAC */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_CMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 4			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* NULL (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = { 0 }			\
-			}, },						\
-		}, },							\
-	},								\
-	{	/* NULL (CIPHER) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_NULL,		\
-				.block_size = 1,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				}					\
-			}, },						\
-		}, }							\
-	},								\
-	{	/* SHA1 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA1,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 20,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA224 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA224,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 28,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA256 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA256,		\
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 32,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA384 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA384,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 48,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* SHA512 */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_SHA512,		\
-				.block_size = 128,			\
-				.key_size = {				\
-					.min = 0,			\
-					.max = 0,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 1,			\
-					.max = 64,			\
-					.increment = 1			\
-				},					\
-				.iv_size = { 0 }			\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CTR */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_CTR,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_GCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES CCM */						\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_AES_CCM,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 4,			\
-					.max = 16,			\
-					.increment = 2			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 224,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 7,			\
-					.max = 13,			\
-					.increment = 1			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* Chacha20-Poly1305 */					\
-	.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
-			{.aead = {					\
-				.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305, \
-				.block_size = 64,			\
-				.key_size = {				\
-					.min = 32,			\
-					.max = 32,			\
-					.increment = 0			\
-				},					\
-				.digest_size = {			\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				},					\
-				.aad_size = {				\
-					.min = 0,			\
-					.max = 240,			\
-					.increment = 1			\
-				},					\
-				.iv_size = {				\
-					.min = 12,			\
-					.max = 12,			\
-					.increment = 0			\
-				},					\
-			}, }						\
-		}, }							\
-	},								\
-	{	/* AES GMAC (AUTH) */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
-			{.auth = {					\
-				.algo = RTE_CRYPTO_AUTH_AES_GMAC,	\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 8			\
-				},					\
-				.digest_size = {			\
-					.min = 8,			\
-					.max = 16,			\
-					.increment = 4			\
-				},					\
-				.iv_size = {				\
-					.min = 0,			\
-					.max = 12,			\
-					.increment = 12			\
-				}					\
-			}, }						\
-		}, }							\
-	}								\
-
-
-
-#ifdef RTE_LIB_SECURITY
-#define QAT_SECURITY_SYM_CAPABILITIES					\
-	{	/* AES DOCSIS BPI */					\
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
-		{.sym = {						\
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
-			{.cipher = {					\
-				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
-				.block_size = 16,			\
-				.key_size = {				\
-					.min = 16,			\
-					.max = 32,			\
-					.increment = 16			\
-				},					\
-				.iv_size = {				\
-					.min = 16,			\
-					.max = 16,			\
-					.increment = 0			\
-				}					\
-			}, }						\
-		}, }							\
-	}
-
-#define QAT_SECURITY_CAPABILITIES(sym)					\
-	[0] = {	/* DOCSIS Uplink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
-		},							\
-		.crypto_capabilities = (sym)				\
-	},								\
-	[1] = {	/* DOCSIS Downlink */					\
-		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
-		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
-		.docsis = {						\
-			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
-		},							\
-		.crypto_capabilities = (sym)				\
-	}
-#endif
-
-#endif /* _QAT_SYM_CAPABILITIES_H_ */
diff --git a/drivers/crypto/qat/qat_sym_pmd.c b/drivers/crypto/qat/qat_sym_pmd.c
index dec877cfab..b835245f17 100644
--- a/drivers/crypto/qat/qat_sym_pmd.c
+++ b/drivers/crypto/qat/qat_sym_pmd.c
@@ -22,85 +22,7 @@

 uint8_t qat_sym_driver_id;

-static const struct rte_cryptodev_capabilities qat_gen1_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen2_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen3_sym_capabilities[] = {
-	QAT_BASE_GEN1_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN2_SYM_CAPABILITIES,
-	QAT_EXTRA_GEN3_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_cryptodev_capabilities qat_gen4_sym_capabilities[] = {
-	QAT_BASE_GEN4_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_cryptodev_capabilities
-					qat_security_sym_capabilities[] = {
-	QAT_SECURITY_SYM_CAPABILITIES,
-	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability qat_security_capabilities[] = {
-	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
-	{
-		.action = RTE_SECURITY_ACTION_TYPE_NONE
-	}
-};
-#endif
-
-static struct rte_cryptodev_ops crypto_qat_ops = {
-
-		/* Device related operations */
-		.dev_configure		= qat_cryptodev_config,
-		.dev_start		= qat_cryptodev_start,
-		.dev_stop		= qat_cryptodev_stop,
-		.dev_close		= qat_cryptodev_close,
-		.dev_infos_get		= qat_cryptodev_info_get,
-
-		.stats_get		= qat_cryptodev_stats_get,
-		.stats_reset		= qat_cryptodev_stats_reset,
-		.queue_pair_setup	= qat_cryptodev_qp_setup,
-		.queue_pair_release	= qat_cryptodev_qp_release,
-
-		/* Crypto related operations */
-		.sym_session_get_size	= qat_sym_session_get_private_size,
-		.sym_session_configure	= qat_sym_session_configure,
-		.sym_session_clear	= qat_sym_session_clear,
-
-		/* Raw data-path API related operations */
-		.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
-		.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
-};
-
-#ifdef RTE_LIB_SECURITY
-static const struct rte_security_capability *
-qat_security_cap_get(void *device __rte_unused)
-{
-	return qat_security_capabilities;
-}
-
-static struct rte_security_ops security_qat_ops = {
-
-		.session_create = qat_security_session_create,
-		.session_update = NULL,
-		.session_stats_get = NULL,
-		.session_destroy = qat_security_session_destroy,
-		.set_pkt_metadata = NULL,
-		.capabilities_get = qat_security_cap_get
-};
-#endif
+struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[QAT_N_GENS];

 void
 qat_sym_init_op_cookie(void *op_cookie)
@@ -156,7 +78,6 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	int i = 0, ret = 0;
 	struct qat_device_info *qat_dev_instance =
 			&qat_pci_devs[qat_pci_dev->qat_dev_id];
-
 	struct rte_cryptodev_pmd_init_params init_params = {
 		.name = "",
 		.socket_id = qat_dev_instance->pci_dev->device.numa_node,
@@ -166,13 +87,22 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 	char capa_memz_name[RTE_CRYPTODEV_NAME_MAX_LEN];
 	struct rte_cryptodev *cryptodev;
 	struct qat_cryptodev_private *internals;
+	struct qat_capabilities_info capa_info;
 	const struct rte_cryptodev_capabilities *capabilities;
+	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
+		&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
 	uint64_t capa_size;

 	snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
 			qat_pci_dev->name, "sym");
 	QAT_LOG(DEBUG, "Creating QAT SYM device %s", name);

+	if (gen_dev_ops->cryptodev_ops == NULL) {
+		QAT_LOG(ERR, "Device %s does not support symmetric crypto",
+				name);
+		return -EFAULT;
+	}
+
 	/*
 	 * All processes must use same driver id so they can share sessions.
 	 * Store driver_id so we can validate that all processes have the same
@@ -206,92 +136,56 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,

 	qat_dev_instance->sym_rte_dev.name = cryptodev->data->name;
 	cryptodev->driver_id = qat_sym_driver_id;
-	cryptodev->dev_ops = &crypto_qat_ops;
+	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;

 	cryptodev->enqueue_burst = qat_sym_pmd_enqueue_op_burst;
 	cryptodev->dequeue_burst = qat_sym_pmd_dequeue_op_burst;

-	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
-			RTE_CRYPTODEV_FF_HW_ACCELERATED |
-			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
-			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
-			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
-			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-
-	if (qat_pci_dev->qat_dev_gen < QAT_GEN4)
-		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SYM_RAW_DP;
+	cryptodev->feature_flags = gen_dev_ops->get_feature_flags(qat_pci_dev);

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;

-	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
-			"QAT_SYM_CAPA_GEN_%d",
-			qat_pci_dev->qat_dev_gen);
-
 #ifdef RTE_LIB_SECURITY
-	struct rte_security_ctx *security_instance;
-	security_instance = rte_malloc("qat_sec",
-				sizeof(struct rte_security_ctx),
-				RTE_CACHE_LINE_SIZE);
-	if (security_instance == NULL) {
-		QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
-		ret = -ENOMEM;
-		goto error;
-	}
+	if (gen_dev_ops->create_security_ctx) {
+		cryptodev->security_ctx =
+			gen_dev_ops->create_security_ctx((void *)cryptodev);
+		if (cryptodev->security_ctx == NULL) {
+			QAT_LOG(ERR, "rte_security_ctx memory alloc failed");
+			ret = -ENOMEM;
+			goto error;
+		}
+
+		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
+		QAT_LOG(INFO, "Device %s rte_security support enabled", name);
+	} else
+		QAT_LOG(INFO, "Device %s rte_security support disabled", name);

-	security_instance->device = (void *)cryptodev;
-	security_instance->ops = &security_qat_ops;
-	security_instance->sess_cnt = 0;
-	cryptodev->security_ctx = security_instance;
-	cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
 #endif
+	snprintf(capa_memz_name, RTE_CRYPTODEV_NAME_MAX_LEN,
+			"QAT_SYM_CAPA_GEN_%d",
+			qat_pci_dev->qat_dev_gen);

 	internals = cryptodev->data->dev_private;
 	internals->qat_dev = qat_pci_dev;
 	internals->service_type = QAT_SERVICE_SYMMETRIC;
-
 	internals->dev_id = cryptodev->data->dev_id;
-	switch (qat_pci_dev->qat_dev_gen) {
-	case QAT_GEN1:
-		capabilities = qat_gen1_sym_capabilities;
-		capa_size = sizeof(qat_gen1_sym_capabilities);
-		break;
-	case QAT_GEN2:
-		capabilities = qat_gen2_sym_capabilities;
-		capa_size = sizeof(qat_gen2_sym_capabilities);
-		break;
-	case QAT_GEN3:
-		capabilities = qat_gen3_sym_capabilities;
-		capa_size = sizeof(qat_gen3_sym_capabilities);
-		break;
-	case QAT_GEN4:
-		capabilities = qat_gen4_sym_capabilities;
-		capa_size = sizeof(qat_gen4_sym_capabilities);
-		break;
-	default:
-		QAT_LOG(DEBUG,
-			"QAT gen %d capabilities unknown",
-			qat_pci_dev->qat_dev_gen);
-		ret = -(EINVAL);
-		goto error;
-	}
+
+	capa_info = gen_dev_ops->get_capabilities(qat_pci_dev);
+	capabilities = capa_info.data;
+	capa_size = capa_info.size;

 	internals->capa_mz = rte_memzone_lookup(capa_memz_name);
 	if (internals->capa_mz == NULL) {
 		internals->capa_mz = rte_memzone_reserve(capa_memz_name,
-		capa_size,
-		rte_socket_id(), 0);
-	}
-	if (internals->capa_mz == NULL) {
-		QAT_LOG(DEBUG,
-			"Error allocating memzone for capabilities, destroying "
-			"PMD for %s",
-			name);
-		ret = -EFAULT;
-		goto error;
+				capa_size, rte_socket_id(), 0);
+		if (internals->capa_mz == NULL) {
+			QAT_LOG(DEBUG,
+				"Error allocating capability memzone for %s",
+				name);
+			ret = -EFAULT;
+			goto error;
+		}
 	}

 	memcpy(internals->capa_mz->addr, capabilities, capa_size);
diff --git a/drivers/crypto/qat/qat_sym_pmd.h b/drivers/crypto/qat/qat_sym_pmd.h
index d49b732ca0..0dc0c6f0d9 100644
--- a/drivers/crypto/qat/qat_sym_pmd.h
+++ b/drivers/crypto/qat/qat_sym_pmd.h
@@ -13,7 +13,6 @@
 #include <rte_security.h>
 #endif

-#include "qat_sym_capabilities.h"
 #include "qat_crypto.h"
 #include "qat_device.h"

@@ -24,8 +23,64 @@
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
 #define QAT_SYM_CAP_VALID		(1 << 31)

+/**
+ * Helper macros to add a sym capability:
+ * <n: name> <b: block size> <k: key size> <d: digest size>
+ * <a: aad_size> <i: iv_size>
+ **/
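+/* e.g. (from the gen2 capability table later in this series):
+ *	QAT_SYM_AUTH_CAP(SHA1_HMAC, CAP_SET(block_size, 64),
+ *		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+ *		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size))
+ */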
+#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, d					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AUTH_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,	\
+			{.auth = {					\
+				.algo = RTE_CRYPTO_AUTH_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_AEAD_CAP(n, b, k, d, a, i)				\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,	\
+			{.aead = {					\
+				.algo = RTE_CRYPTO_AEAD_##n,		\
+				b, k, d, a, i				\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SYM_CIPHER_CAP(n, b, k, i)					\
+	{								\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_##n,		\
+				b, k, i					\
+			}, }						\
+		}, }							\
+	}
+
 extern uint8_t qat_sym_driver_id;

+extern struct qat_crypto_gen_dev_ops qat_sym_gen_dev_ops[];
+
 int
 qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		struct qat_dev_cmd_param *qat_dev_cmd_param);
--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (7 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 8/9] crypto/qat: define gen specific structs and functions Kai Ji
@ 2021-11-04 10:34                 ` Kai Ji
  2021-11-05 20:39                   ` Thomas Monjalon
  2021-11-04 11:44                 ` [dpdk-dev] [EXT] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Akhil Goyal
  9 siblings, 1 reply; 96+ messages in thread
From: Kai Ji @ 2021-11-04 10:34 UTC (permalink / raw)
  To: dev; +Cc: gakhil, Fan Zhang, Arek Kusztal, Kai Ji

From: Fan Zhang <roy.fan.zhang@intel.com>

This patch replaces the mixed QAT symmetric and asymmetric
support implementation with separate per-generation files,
each holding either shared or generation-specific code.

Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
Signed-off-by: Kai Ji <kai.ji@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
---
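Note: both services wire up generations the same way: each gen file
fills its slot in the per-generation ops array from an RTE_INIT
constructor, and the common device-create path dispatches on the
probed generation. A rough sketch using names from the diffs below
(illustrative only, not extra code to apply):

	/* registration, one constructor per generation file */
	RTE_INIT(qat_sym_crypto_gen2_init)
	{
		qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
				&qat_sym_crypto_ops_gen2;
		qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
				qat_sym_crypto_cap_get_gen2;
	}

	/* dispatch, in the common qat_sym_dev_create() */
	const struct qat_crypto_gen_dev_ops *gen_dev_ops =
			&qat_sym_gen_dev_ops[qat_pci_dev->qat_dev_gen];
	if (gen_dev_ops->cryptodev_ops == NULL)
		return -EFAULT; /* service not supported on this gen */
	cryptodev->dev_ops = gen_dev_ops->cryptodev_ops;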
 drivers/common/qat/meson.build               |   7 +-
 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c   |  76 +++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 224 +++++++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 164 +++++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 124 ++++++++
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  36 +++
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    | 283 +++++++++++++++++++
 drivers/crypto/qat/qat_crypto.h              |   3 -
 8 files changed, 913 insertions(+), 4 deletions(-)
 create mode 100644 drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
 create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
 create mode 100644 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c

diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 29fd0168ea..ce9959d103 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -71,7 +71,12 @@ endif

 if qat_crypto
     foreach f: ['qat_sym_pmd.c', 'qat_sym.c', 'qat_sym_session.c',
-            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c']
+            'qat_sym_hw_dp.c', 'qat_asym_pmd.c', 'qat_asym.c', 'qat_crypto.c',
+            'dev/qat_sym_pmd_gen1.c',
+            'dev/qat_asym_pmd_gen1.c',
+            'dev/qat_crypto_pmd_gen2.c',
+            'dev/qat_crypto_pmd_gen3.c',
+            'dev/qat_crypto_pmd_gen4.c']
         sources += files(join_paths(qat_crypto_relpath, f))
     endforeach
     deps += ['security']
diff --git a/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
new file mode 100644
index 0000000000..9ed1f21d9d
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_asym_pmd_gen1.c
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+#include "qat_pke_functionality_arrays.h"
+
+struct rte_cryptodev_ops qat_asym_crypto_ops_gen1 = {
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.asym_session_get_size	= qat_asym_session_get_private_size,
+	.asym_session_configure	= qat_asym_session_configure,
+	.asym_session_clear	= qat_asym_session_clear
+};
+
+static struct rte_cryptodev_capabilities qat_asym_crypto_caps_gen1[] = {
+	QAT_ASYM_CAP(MODEX,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(MODINV,
+		0, 1, 512, 1),
+	QAT_ASYM_CAP(RSA,
+			((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
+			(1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
+			(1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
+			(1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
+			64, 512, 64),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_asym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_asym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_ASYM_SESSIONLESS |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_EXP |
+			RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT;
+
+	return feature_flags;
+}
+
+RTE_INIT(qat_asym_crypto_gen1_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN1].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
new file mode 100644
index 0000000000..b4ec440e05
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -0,0 +1,224 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
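+/*
+ * Minimum firmware version supporting mixed crypto, encoded as
+ * major/minor/patch bytes (4.9.0 here) per the version decode in
+ * qat_sym_crypto_qp_setup_gen2() below.
+ */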
+#define MIXED_CRYPTO_MIN_FW_VER 0x04090000
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen2[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+			CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
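+/*
+ * GEN2 overrides the generic qp setup so it can probe the firmware
+ * version on the new queue pair and flag mixed crypto support.
+ */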
+static int
+qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
+		const struct rte_cryptodev_qp_conf *qp_conf, int socket_id)
+{
+	struct qat_cryptodev_private *qat_sym_private = dev->data->dev_private;
+	struct qat_qp *qp;
+	int ret;
+
+	if (qat_cryptodev_qp_setup(dev, qp_id, qp_conf, socket_id)) {
+		QAT_LOG(DEBUG, "QAT qp setup failed");
+		return -1;
+	}
+
+	qp = qat_sym_private->qat_dev->qps_in_use[QAT_SERVICE_SYMMETRIC][qp_id];
+	ret = qat_cq_get_fw_version(qp);
+	if (ret < 0) {
+		qat_cryptodev_qp_release(dev, qp_id);
+		return ret;
+	}
+
+	if (ret != 0)
+		QAT_LOG(DEBUG, "QAT firmware version: %d.%d.%d",
+				(ret >> 24) & 0xff,
+				(ret >> 16) & 0xff,
+				(ret >> 8) & 0xff);
+	else
+		QAT_LOG(DEBUG, "unknown QAT firmware version");
+
+	/* set capabilities based on the fw version */
+	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
+					QAT_SYM_CAP_MIXED_CRYPTO : 0);
+	return 0;
+}
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen2 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_sym_crypto_qp_setup_gen2,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen2(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen2;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen2);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen2_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN2].cryptodev_ops = &qat_sym_crypto_ops_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_sym_crypto_cap_get_gen2;
+	qat_sym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN2].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen2_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN2].cryptodev_ops =
+			&qat_asym_crypto_ops_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_capabilities =
+			qat_asym_crypto_cap_get_gen1;
+	qat_asym_gen_dev_ops[QAT_GEN2].get_feature_flags =
+			qat_asym_crypto_feature_flags_get_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
new file mode 100644
index 0000000000..d3336cf4a1
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen3[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(ZUC_EIA3,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen3(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen3;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen3);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen3_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN3].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_capabilities =
+			qat_sym_crypto_cap_get_gen3;
+	qat_sym_gen_dev_ops[QAT_GEN3].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN3].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen3_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN3].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN3].get_feature_flags = NULL;
+}
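
A note on reading the capability tables: each CAP_RNG(name, min, max,
increment) is a min/max/increment triple (the open-coded key_size/iv_size
fields in the QAT_SECURITY_SYM_CAPABILITIES macro later in this patch show the
expanded form), and increment 0 denotes a single fixed size in these tables.
A small, self-contained sketch of how such a triple enumerates the valid
sizes -- e.g. CAP_RNG(key_size, 16, 32, 8) above means 16-, 24- and 32-byte
keys; the struct and function names are illustrative:

#include <stdio.h>

/* Mirrors the (min, max, increment) triple behind CAP_RNG(). */
struct param_range { unsigned int min, max, increment; };

static void print_valid_sizes(const char *what, struct param_range r)
{
	printf("%s:", what);
	if (r.increment == 0) {		/* fixed size */
		printf(" %u\n", r.min);
		return;
	}
	for (unsigned int v = r.min; v <= r.max; v += r.increment)
		printf(" %u", v);
	printf("\n");
}

int main(void)
{
	/* CAP_RNG(key_size, 16, 32, 8) from the AES_CBC entry above */
	print_valid_sizes("AES-CBC key_size", (struct param_range){ 16, 32, 8 });
	/* CAP_RNG(iv_size, 16, 16, 0) */
	print_valid_sizes("AES-CBC iv_size", (struct param_range){ 16, 16, 0 });
	return 0;
}
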
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
new file mode 100644
index 0000000000..37a58c026f
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen4[] = {
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 32, 32, 0),
+		CAP_RNG(digest_size, 16, 16, 0),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen4;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen4);
+	return capa_info;
+}
+
+RTE_INIT(qat_sym_crypto_gen4_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN4].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_capabilities =
+			qat_sym_crypto_cap_get_gen4;
+	qat_sym_gen_dev_ops[QAT_GEN4].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN4].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
+
+RTE_INIT(qat_asym_crypto_gen4_init)
+{
+	qat_asym_gen_dev_ops[QAT_GEN4].cryptodev_ops = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_capabilities = NULL;
+	qat_asym_gen_dev_ops[QAT_GEN4].get_feature_flags = NULL;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
new file mode 100644
index 0000000000..67a4d2cb2c
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#ifndef _QAT_CRYPTO_PMD_GENS_H_
+#define _QAT_CRYPTO_PMD_GENS_H_
+
+#include <rte_cryptodev.h>
+#include "qat_crypto.h"
+#include "qat_sym_session.h"
+
+extern struct rte_cryptodev_ops qat_sym_crypto_ops_gen1;
+extern struct rte_cryptodev_ops qat_asym_crypto_ops_gen1;
+
+/* -----------------GENx control path APIs ---------------- */
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+void
+qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
+		uint8_t hash_flag);
+
+struct qat_capabilities_info
+qat_asym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev);
+
+uint64_t
+qat_asym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
+
+#ifdef RTE_LIB_SECURITY
+extern struct rte_security_ops security_qat_ops_gen1;
+
+void *
+qat_sym_create_security_gen1(void *cryptodev);
+#endif
+
+#endif
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
new file mode 100644
index 0000000000..e156f194e2
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -0,0 +1,283 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2021 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+#include "adf_transport_access_macros.h"
+#include "icp_qat_fw.h"
+#include "icp_qat_fw_la.h"
+
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_sym_session.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen1[] = {
+	QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+		CAP_SET(block_size, 64),
+		CAP_RNG(digest_size, 1, 20, 1)),
+	QAT_SYM_AEAD_CAP(AES_GCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AEAD_CAP(AES_CCM,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+		CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+	QAT_SYM_AUTH_CAP(AES_GMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+	QAT_SYM_AUTH_CAP(AES_CMAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256,
+		CAP_SET(block_size, 64),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512,
+		CAP_SET(block_size, 128),
+		CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA1_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA224_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA256_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA384_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SHA512_HMAC,
+		CAP_SET(block_size, 128),
+		CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(MD5_HMAC,
+		CAP_SET(block_size, 64),
+		CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 16, 1),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_AUTH_CAP(KASUMI_F9,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_AUTH_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+		CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(AES_CBC,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_CTR,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_XTS,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 32, 64, 32), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+		CAP_SET(block_size, 16),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+	QAT_SYM_CIPHER_CAP(KASUMI_F8,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(NULL,
+		CAP_SET(block_size, 1),
+		CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+	QAT_SYM_CIPHER_CAP(3DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(3DES_CTR,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 16, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_CBC,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 24, 8), CAP_RNG(iv_size, 8, 8, 0)),
+	QAT_SYM_CIPHER_CAP(DES_DOCSISBPI,
+		CAP_SET(block_size, 8),
+		CAP_RNG(key_size, 8, 8, 0), CAP_RNG(iv_size, 8, 8, 0)),
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+struct rte_cryptodev_ops qat_sym_crypto_ops_gen1 = {
+
+	/* Device related operations */
+	.dev_configure		= qat_cryptodev_config,
+	.dev_start		= qat_cryptodev_start,
+	.dev_stop		= qat_cryptodev_stop,
+	.dev_close		= qat_cryptodev_close,
+	.dev_infos_get		= qat_cryptodev_info_get,
+
+	.stats_get		= qat_cryptodev_stats_get,
+	.stats_reset		= qat_cryptodev_stats_reset,
+	.queue_pair_setup	= qat_cryptodev_qp_setup,
+	.queue_pair_release	= qat_cryptodev_qp_release,
+
+	/* Crypto related operations */
+	.sym_session_get_size	= qat_sym_session_get_private_size,
+	.sym_session_configure	= qat_sym_session_configure,
+	.sym_session_clear	= qat_sym_session_clear,
+
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = qat_sym_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = qat_sym_configure_dp_ctx,
+};
+
+static struct qat_capabilities_info
+qat_sym_crypto_cap_get_gen1(struct qat_pci_device *qat_dev __rte_unused)
+{
+	struct qat_capabilities_info capa_info;
+	capa_info.data = qat_sym_crypto_caps_gen1;
+	capa_info.size = sizeof(qat_sym_crypto_caps_gen1);
+	return capa_info;
+}
+
+uint64_t
+qat_sym_crypto_feature_flags_get_gen1(
+	struct qat_pci_device *qat_dev __rte_unused)
+{
+	uint64_t feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
+			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_SGL_OUT |
+			RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
+			RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP;
+
+	return feature_flags;
+}
+
+#ifdef RTE_LIB_SECURITY
+
+#define QAT_SECURITY_SYM_CAPABILITIES					\
+	{	/* AES DOCSIS BPI */					\
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,			\
+		{.sym = {						\
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,	\
+			{.cipher = {					\
+				.algo = RTE_CRYPTO_CIPHER_AES_DOCSISBPI,\
+				.block_size = 16,			\
+				.key_size = {				\
+					.min = 16,			\
+					.max = 32,			\
+					.increment = 16			\
+				},					\
+				.iv_size = {				\
+					.min = 16,			\
+					.max = 16,			\
+					.increment = 0			\
+				}					\
+			}, }						\
+		}, }							\
+	}
+
+#define QAT_SECURITY_CAPABILITIES(sym)					\
+	[0] = {	/* DOCSIS Uplink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_UPLINK		\
+		},							\
+		.crypto_capabilities = (sym)				\
+	},								\
+	[1] = {	/* DOCSIS Downlink */					\
+		.action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,	\
+		.protocol = RTE_SECURITY_PROTOCOL_DOCSIS,		\
+		.docsis = {						\
+			.direction = RTE_SECURITY_DOCSIS_DOWNLINK	\
+		},							\
+		.crypto_capabilities = (sym)				\
+	}
+
+static const struct rte_cryptodev_capabilities
+					qat_security_sym_capabilities[] = {
+	QAT_SECURITY_SYM_CAPABILITIES,
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static const struct rte_security_capability qat_security_capabilities_gen1[] = {
+	QAT_SECURITY_CAPABILITIES(qat_security_sym_capabilities),
+	{
+		.action = RTE_SECURITY_ACTION_TYPE_NONE
+	}
+};
+
+static const struct rte_security_capability *
+qat_security_cap_get_gen1(void *dev __rte_unused)
+{
+	return qat_security_capabilities_gen1;
+}
+
+struct rte_security_ops security_qat_ops_gen1 = {
+		.session_create = qat_security_session_create,
+		.session_update = NULL,
+		.session_stats_get = NULL,
+		.session_destroy = qat_security_session_destroy,
+		.set_pkt_metadata = NULL,
+		.capabilities_get = qat_security_cap_get_gen1
+};
+
+void *
+qat_sym_create_security_gen1(void *cryptodev)
+{
+	struct rte_security_ctx *security_instance;
+
+	security_instance = rte_malloc(NULL, sizeof(struct rte_security_ctx),
+			RTE_CACHE_LINE_SIZE);
+	if (security_instance == NULL)
+		return NULL;
+
+	security_instance->device = cryptodev;
+	security_instance->ops = &security_qat_ops_gen1;
+	security_instance->sess_cnt = 0;
+
+	return (void *)security_instance;
+}
+
+#endif
+
+RTE_INIT(qat_sym_crypto_gen1_init)
+{
+	qat_sym_gen_dev_ops[QAT_GEN1].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_capabilities =
+			qat_sym_crypto_cap_get_gen1;
+	qat_sym_gen_dev_ops[QAT_GEN1].get_feature_flags =
+			qat_sym_crypto_feature_flags_get_gen1;
+#ifdef RTE_LIB_SECURITY
+	qat_sym_gen_dev_ops[QAT_GEN1].create_security_ctx =
+			qat_sym_create_security_gen1;
+#endif
+}
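
qat_sym_create_security_gen1() above only allocates and fills the
rte_security_ctx; attaching it to the device is left to the common create
path, which is not part of this hunk. A rough fragment of what that call site
presumably looks like -- the surrounding variables (cryptodev, qat_dev_gen)
are assumptions for illustration, not code from this patch:

#ifdef RTE_LIB_SECURITY
	/* Hypothetical call site inside the common device-create path */
	if (qat_sym_gen_dev_ops[qat_dev_gen].create_security_ctx != NULL) {
		cryptodev->security_ctx =
			qat_sym_gen_dev_ops[qat_dev_gen].create_security_ctx(
					(void *)cryptodev);
		if (cryptodev->security_ctx == NULL)
			return -ENOMEM;
		cryptodev->feature_flags |= RTE_CRYPTODEV_FF_SECURITY;
	}
#endif
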
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 0a8afb0b31..6eaa15b975 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -6,9 +6,6 @@
  #define _QAT_CRYPTO_H_

 #include <rte_cryptodev.h>
-#ifdef RTE_LIB_SECURITY
-#include <rte_security.h>
-#endif

 #include "qat_device.h"

--
2.17.1


^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [EXT] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations
  2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
                                   ` (8 preceding siblings ...)
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation Kai Ji
@ 2021-11-04 11:44                 ` Akhil Goyal
  9 siblings, 0 replies; 96+ messages in thread
From: Akhil Goyal @ 2021-11-04 11:44 UTC (permalink / raw)
  To: Kai Ji, dev

> This patchset introduces new qat driver structure and updates
> existing symmetric crypto qat PMD.
> 
> The purpose of the change is to isolate QAT generation specific
> implementations from one to another.
> 
> It is expected the changes to the specific generation driver
> code does minimum impact to other generations' implementations.
> Also adding the support to new features or new qat generation
> hardware will have zero impact to existing functionalities.
> 
Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation
  2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation Kai Ji
@ 2021-11-05 20:39                   ` Thomas Monjalon
  2021-11-05 20:46                     ` Thomas Monjalon
  0 siblings, 1 reply; 96+ messages in thread
From: Thomas Monjalon @ 2021-11-05 20:39 UTC (permalink / raw)
  To: Fan Zhang, Kai Ji; +Cc: dev, gakhil, Arek Kusztal

04/11/2021 11:34, Kai Ji:
> From: Fan Zhang <roy.fan.zhang@intel.com>
> 
> This patch replaces the mixed QAT symmetric and asymmetric
> support implementation by separate files with shared or
> individual implementation for specific QAT generation.
> 
> Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> Signed-off-by: Kai Ji <kai.ji@intel.com>
> Acked-by: Ciara Power <ciara.power@intel.com>
[...]
> +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
> +#include "qat_sym_session.h"
> +#include "qat_sym.h"
> +#include "qat_sym_session.h"
> +#include "qat_crypto.h"
> +#include "qat_crypto_pmd_gens.h"

I suppose the double include of qat_sym_session.h is useless...
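
With the duplicate dropped, the include block would presumably read (taken
verbatim from the quoted hunk, minus the repeat):

  #include "qat_sym_session.h"
  #include "qat_sym.h"
  #include "qat_crypto.h"
  #include "qat_crypto_pmd_gens.h"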



^ permalink raw reply	[flat|nested] 96+ messages in thread

* Re: [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation
  2021-11-05 20:39                   ` Thomas Monjalon
@ 2021-11-05 20:46                     ` Thomas Monjalon
  0 siblings, 0 replies; 96+ messages in thread
From: Thomas Monjalon @ 2021-11-05 20:46 UTC (permalink / raw)
  To: Fan Zhang, Kai Ji; +Cc: dev, gakhil, Arek Kusztal, david.marchand, ferruh.yigit

05/11/2021 21:39, Thomas Monjalon:
> 04/11/2021 11:34, Kai Ji:
> > From: Fan Zhang <roy.fan.zhang@intel.com>
> > 
> > This patch replaces the mixed QAT symmetric and asymmetric
> > support implementation by separate files with shared or
> > individual implementation for specific QAT generation.
> > 
> > Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> > Signed-off-by: Fan Zhang <roy.fan.zhang@intel.com>
> > Signed-off-by: Kai Ji <kai.ji@intel.com>
> > Acked-by: Ciara Power <ciara.power@intel.com>
> [...]
> > +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
> > +#include "qat_sym_session.h"
> > +#include "qat_sym.h"
> > +#include "qat_sym_session.h"
> > +#include "qat_crypto.h"
> > +#include "qat_crypto_pmd_gens.h"
> 
> I suppose the double include of qat_sym_session.h is useless...

Note: it can be detected with devtools/check-dup-includes.sh

Other avoidable issues in this series detected with devtools/check-meson.py:

Error: Missing trailing "," in list at drivers/common/qat/meson.build:56
Error parsing drivers/common/qat/meson.build:74, got some tabulation

I expect such basic issues to be solved by the component maintainers
or, failing that, by the tree maintainer. This time I will fix them.



^ permalink raw reply	[flat|nested] 96+ messages in thread

end of thread

Thread overview: 96+ messages
2021-09-01 14:47 [dpdk-dev] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Arek Kusztal
2021-09-01 14:47 ` [dpdk-dev] [PATCH 1/4] common/qat: " Arek Kusztal
2021-09-01 14:47 ` [dpdk-dev] [PATCH 2/4] crypto/qat: isolate implementations of symmetric operations Arek Kusztal
2021-09-01 14:47 ` [dpdk-dev] [PATCH 3/4] crypto/qat: move capabilities initialization to spec files Arek Kusztal
2021-09-01 14:47 ` [dpdk-dev] [PATCH 4/4] common/qat: add extra data to qat pci dev Arek Kusztal
2021-09-06 18:24 ` [dpdk-dev] [EXT] [PATCH 0/4] drivers/qat: isolate implementations of qat generations Akhil Goyal
2021-10-01 16:59 ` [dpdk-dev] [PATCH v2 00/10] " Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 01/10] common/qat: add gen specific data and function Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 02/10] common/qat: add gen specific device implementation Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 03/10] common/qat: add gen specific queue pair function Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 04/10] common/qat: add gen specific queue implementation Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 05/10] compress/qat: add gen specific data and function Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 06/10] compress/qat: add gen specific implementation Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 07/10] crypto/qat: unified device private data structure Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 08/10] crypto/qat: add gen specific data and function Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 09/10] crypto/qat: add gen specific implementation Fan Zhang
2021-10-01 16:59   ` [dpdk-dev] [PATCH v2 10/10] doc: update release note Fan Zhang
2021-10-08 10:07     ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-08 10:34       ` Zhang, Roy Fan
2021-10-14 16:11   ` [dpdk-dev] [dpdk-dev v3 00/10] drivers/qat: isolate implementations of qat generations Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 01/10] common/qat: add gen specific data and function Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 02/10] common/qat: add gen specific device implementation Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 03/10] common/qat: add gen specific queue pair function Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 04/10] common/qat: add gen specific queue implementation Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 05/10] compress/qat: add gen specific data and function Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 06/10] compress/qat: add gen specific implementation Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 07/10] crypto/qat: unified device private data structure Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 08/10] crypto/qat: add gen specific data and function Fan Zhang
2021-10-16 11:46       ` [dpdk-dev] [EXT] " Akhil Goyal
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 09/10] crypto/qat: add gen specific implementation Fan Zhang
2021-10-14 16:11     ` [dpdk-dev] [dpdk-dev v3 10/10] common/qat: unify naming conventions in qat functions Fan Zhang
2021-10-22 17:03     ` [dpdk-dev] [dpdk-dev v4 0/9] drivers/qat: isolate implementations of qat generations Fan Zhang
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 1/9] common/qat: add gen specific data and function Fan Zhang
2021-10-26 15:06         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 2/9] common/qat: add gen specific device implementation Fan Zhang
2021-10-26 15:11         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 3/9] common/qat: add gen specific queue pair function Fan Zhang
2021-10-26 15:28         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 4/9] common/qat: add gen specific queue implementation Fan Zhang
2021-10-26 15:52         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 5/9] compress/qat: add gen specific data and function Fan Zhang
2021-10-26 16:22         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 6/9] compress/qat: add gen specific implementation Fan Zhang
2021-10-26 16:24         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 7/9] crypto/qat: unified device private data structure Fan Zhang
2021-10-27  8:11         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 8/9] crypto/qat: add gen specific data and function Fan Zhang
2021-10-27  9:28         ` Power, Ciara
2021-10-22 17:03       ` [dpdk-dev] [dpdk-dev v4 9/9] crypto/qat: add gen specific implementation Fan Zhang
2021-10-27 10:16         ` Power, Ciara
2021-10-26 16:44       ` [dpdk-dev] [dpdk-dev v5 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 1/9] common/qat: add gen specific data and function Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 2/9] common/qat: add gen specific device implementation Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 3/9] common/qat: add gen specific queue pair function Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 4/9] common/qat: add gen specific queue implementation Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 5/9] compress/qat: add gen specific data and function Kai Ji
2021-10-26 16:44         ` [dpdk-dev] [dpdk-dev v5 6/9] compress/qat: add gen specific implementation Kai Ji
2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 7/9] crypto/qat: unified device private data structure Kai Ji
2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 8/9] crypto/qat: add gen specific data and function Kai Ji
2021-10-26 16:45         ` [dpdk-dev] [dpdk-dev v5 9/9] crypto/qat: add gen specific implementation Kai Ji
2021-10-26 17:16           ` [dpdk-dev] [dpdk-dev v6 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 1/9] common/qat: add gen specific data and function Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 2/9] common/qat: add gen specific device implementation Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 3/9] common/qat: add gen specific queue pair function Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 4/9] common/qat: add gen specific queue implementation Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 5/9] compress/qat: add gen specific data and function Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 6/9] compress/qat: add gen specific implementation Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 7/9] crypto/qat: unified device private data structure Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 8/9] crypto/qat: add gen specific data and function Kai Ji
2021-10-26 17:16             ` [dpdk-dev] [dpdk-dev v6 9/9] crypto/qat: add gen specific implementation Kai Ji
2021-10-27 15:50             ` [dpdk-dev] [dpdk-dev v7 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 1/9] common/qat: add gen specific data and function Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 2/9] common/qat: add gen specific device implementation Kai Ji
2021-10-28  9:32                 ` Power, Ciara
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 3/9] common/qat: add gen specific queue pair function Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 4/9] common/qat: add gen specific queue implementation Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 5/9] compress/qat: add gen specific data and function Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 6/9] compress/qat: add gen specific implementation Kai Ji
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 7/9] crypto/qat: unified device private data structure Kai Ji
2021-10-28  9:31                 ` Power, Ciara
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 8/9] crypto/qat: add gen specific data and function Kai Ji
2021-10-28  8:33                 ` Power, Ciara
2021-10-27 15:50               ` [dpdk-dev] [dpdk-dev v7 9/9] crypto/qat: add gen specific implementation Kai Ji
2021-11-04 10:34               ` [dpdk-dev] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 1/9] common/qat: define gen specific structs and functions Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 2/9] common/qat: add gen specific device implementation Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 3/9] common/qat: add gen specific queue pair function Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 4/9] common/qat: add gen specific queue implementation Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 5/9] compress/qat: define gen specific structs and functions Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 6/9] compress/qat: add gen specific implementation Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 7/9] crypto/qat: unified device private data structure Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 8/9] crypto/qat: define gen specific structs and functions Kai Ji
2021-11-04 10:34                 ` [dpdk-dev] [dpdk-dev v8 9/9] crypto/qat: add gen specific implementation Kai Ji
2021-11-05 20:39                   ` Thomas Monjalon
2021-11-05 20:46                     ` Thomas Monjalon
2021-11-04 11:44                 ` [dpdk-dev] [EXT] [dpdk-dev v8 0/9] drivers/qat: isolate implementations of qat generations Akhil Goyal
