* [PATCH 0/4] add new QAT gen3 device
@ 2023-12-19 15:51 Ciara Power
2023-12-19 15:51 ` [PATCH 1/4] crypto/qat: add new " Ciara Power
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Ciara Power @ 2023-12-19 15:51 UTC (permalink / raw)
To: dev; +Cc: Ciara Power
This patchset adds support for a new gen3 QuickAssist device.
There are some changes for this device in comparison to the
existing gen3 implementation:
- DES and Kasumi removed from capabilities.
- ZUC256 added to capabilities.
- New device ID.
- New CMAC macros included.
- Some algorithms moved to wireless slice (SNOW3G, ZUC, AES-CMAC).
This patchset covers Symmetric crypto only, so a check has been added
for the Asymmetric and Compression PMDs to skip this gen3 device.
Documentation will be updated in a subsequent version of the patchset.
Ciara Power (4):
crypto/qat: add new gen3 device
crypto/qat: add zuc256 wireless slice for gen3
crypto/qat: add new gen3 CMAC macros
crypto/qat: disable asym and compression for new gen3 device
drivers/common/qat/qat_adf/icp_qat_fw.h | 3 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 +++
drivers/common/qat/qat_adf/icp_qat_hw.h | 23 ++-
drivers/common/qat/qat_device.c | 13 ++
drivers/common/qat/qat_device.h | 2 +
drivers/compress/qat/qat_comp_pmd.c | 3 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 1 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 57 ++++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 2 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 44 ++++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 15 ++
drivers/crypto/qat/qat_asym.c | 3 +-
drivers/crypto/qat/qat_sym_session.c | 164 +++++++++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
14 files changed, 332 insertions(+), 24 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 1/4] crypto/qat: add new gen3 device
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
@ 2023-12-19 15:51 ` Ciara Power
2023-12-19 15:51 ` [PATCH 2/4] crypto/qat: add zuc256 wireless slice for gen3 Ciara Power
` (4 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2023-12-19 15:51 UTC (permalink / raw)
To: dev; +Cc: Ciara Power, Kai Ji
Add new gen3 QAT device ID.
This device has a wireless slice, which other gen3 devices do not, so a
flag must be set to mark the device as wireless enabled.
Capabilities for this device differ slightly from the base gen3
capabilities; some are removed from the list for this device.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/common/qat/qat_device.c | 13 +++++++++++++
drivers/common/qat/qat_device.h | 2 ++
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 11 +++++++++++
3 files changed, 26 insertions(+)
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index f55dc3c6f0..0e7d387d78 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -53,6 +53,9 @@ static const struct rte_pci_id pci_id_qat_map[] = {
{
RTE_PCI_DEVICE(0x8086, 0x18a1),
},
+ {
+ RTE_PCI_DEVICE(0x8086, 0x578b),
+ },
{
RTE_PCI_DEVICE(0x8086, 0x4941),
},
@@ -194,6 +197,7 @@ pick_gen(const struct rte_pci_device *pci_dev)
case 0x18ef:
return QAT_GEN2;
case 0x18a1:
+ case 0x578b:
return QAT_GEN3;
case 0x4941:
case 0x4943:
@@ -205,6 +209,12 @@ pick_gen(const struct rte_pci_device *pci_dev)
}
}
+static int
+wireless_slice_support(uint16_t pci_dev_id)
+{
+ return pci_dev_id == 0x578b;
+}
+
struct qat_pci_device *
qat_pci_device_allocate(struct rte_pci_device *pci_dev,
struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -282,6 +292,9 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
qat_dev->qat_dev_id = qat_dev_id;
qat_dev->qat_dev_gen = qat_dev_gen;
+ if (wireless_slice_support(pci_dev->id.device_id))
+ qat_dev->has_wireless_slice = 1;
+
ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
NOT_NULL(ops_hw->qat_dev_get_misc_bar, goto error,
"QAT internal error! qat_dev_get_misc_bar function not set");
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index aa7988bb74..43e4752812 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -135,6 +135,8 @@ struct qat_pci_device {
/**< Per generation specific information */
uint32_t slice_map;
/**< Map of the crypto and compression slices */
+ uint16_t has_wireless_slice;
+ /**< Wireless Slices supported */
};
struct qat_gen_hw_data {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index 02bcdb06b1..bc53e2e0f1 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -255,6 +255,17 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
RTE_CRYPTO_AUTH_SM3_HMAC))) {
continue;
}
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_KASUMI_F9) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_KASUMI_F8) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_CBC) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_DOCSISBPI)))
+ continue;
+
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
curr_capa++;
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 2/4] crypto/qat: add zuc256 wireless slice for gen3
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
2023-12-19 15:51 ` [PATCH 1/4] crypto/qat: add new " Ciara Power
@ 2023-12-19 15:51 ` Ciara Power
2023-12-19 15:51 ` [PATCH 3/4] crypto/qat: add new gen3 CMAC macros Ciara Power
` (3 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2023-12-19 15:51 UTC (permalink / raw)
To: dev; +Cc: Ciara Power, Kai Ji
The new gen3 device handles wireless algorithms on wireless slices.
Based on the device's wireless slice support, set the required flags for
these algorithms so that they are moved to the wireless slice.
One of the algorithms supported on the wireless slices is ZUC-256;
support is added for it, along with the capability changes needed for
the device.
The device takes a 24-byte IV for ZUC-256, with iv[20] being ignored in
the register. A 25-byte IV from the application is therefore packed into
23 bytes.
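The IV packing can be sketched standalone, mirroring the
zuc256_modify_iv() helper added by this patch: byte 16 is kept whole,
and bytes 17..24 each contribute only their 6 low bits, packed
contiguously into the following 6 bytes (the top 2 bits of each byte
are discarded):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Pack a 25-byte ZUC-256 IV into 23 bytes in place, as in the patch's
 * zuc256_modify_iv(); iv_tmp is zero-initialised here so the byte
 * written to iv[23] is deterministic in this sketch.
 */
static void
zuc256_pack_iv(uint8_t *iv)
{
	uint8_t iv_tmp[8] = { 0 };

	iv_tmp[0] = iv[16];
	/* pack the last 8 bytes of the IV into 6 bytes,
	 * discarding the 2 MSBs of each byte
	 */
	iv_tmp[1] = (uint8_t)(((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3));
	iv_tmp[2] = (uint8_t)(((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf));
	iv_tmp[3] = (uint8_t)(((iv[19] & 0x3) << 6) | (iv[20] & 0x3f));

	iv_tmp[4] = (uint8_t)(((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3));
	iv_tmp[5] = (uint8_t)(((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf));
	iv_tmp[6] = (uint8_t)(((iv[23] & 0x3) << 6) | (iv[24] & 0x3f));

	memcpy(iv + 16, iv_tmp, 8);
}
```

After the call, iv[0..15] are untouched, iv[16] is unchanged, and
iv[17..22] hold the 48 packed bits, so only the first 23 bytes carry
IV data.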
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/common/qat/qat_adf/icp_qat_fw.h | 3 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++++
drivers/common/qat/qat_adf/icp_qat_hw.h | 21 ++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 1 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 46 +++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 2 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 44 +++++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 15 ++
drivers/crypto/qat/qat_sym_session.c | 142 +++++++++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
10 files changed, 279 insertions(+), 21 deletions(-)
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h
index 3aa17ae041..76584d48f0 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw.h
@@ -75,7 +75,8 @@ struct icp_qat_fw_comn_req_hdr {
uint8_t service_type;
uint8_t hdr_flags;
uint16_t serv_specif_flags;
- uint16_t comn_req_flags;
+ uint8_t comn_req_flags;
+ uint8_t ext_flags;
};
struct icp_qat_fw_comn_req_rqpars {
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index 70f0effa62..134c309355 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -81,6 +81,15 @@ struct icp_qat_fw_la_bulk_req {
#define ICP_QAT_FW_LA_PARTIAL_END 2
#define QAT_LA_PARTIAL_BITPOS 0
#define QAT_LA_PARTIAL_MASK 0x3
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS 0
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS 1
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK 0x1
+#define QAT_LA_USE_WCP_SLICE 1
+#define QAT_LA_USE_WCP_SLICE_BITPOS 2
+#define QAT_LA_USE_WCP_SLICE_MASK 0x1
+#define QAT_LA_USE_WAT_SLICE_BITPOS 3
+#define QAT_LA_USE_WAT_SLICE 1
+#define QAT_LA_USE_WAT_SLICE_MASK 0x1
#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
cmp_auth, ret_auth, update_state, \
ciph_iv, ciphcfg, partial) \
@@ -188,6 +197,21 @@ struct icp_qat_fw_la_bulk_req {
QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
QAT_LA_PARTIAL_MASK)
+#define ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK)
+
+#define ICP_QAT_FW_USE_WCP_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WCP_SLICE_BITPOS, \
+ QAT_LA_USE_WCP_SLICE_MASK)
+
+#define ICP_QAT_FW_USE_WAT_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WAT_SLICE_BITPOS, \
+ QAT_LA_USE_WAT_SLICE_MASK)
+
#define QAT_FW_LA_MODE2 1
#define QAT_FW_LA_NO_MODE2 0
#define QAT_FW_LA_MODE2_MASK 0x1
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 8b864e1630..dfd0ea133c 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -71,7 +71,16 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
ICP_QAT_HW_AUTH_ALGO_SHA3_384 = 18,
ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
- ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+ ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 = 26,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128 = 27,
+ ICP_QAT_HW_AUTH_ALGO_DELIMITER = 28
};
enum icp_qat_hw_auth_mode {
@@ -167,6 +176,9 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -191,6 +203,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_ZUC_256_STATE2_SZ 56
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
@@ -228,7 +241,8 @@ enum icp_qat_hw_cipher_algo {
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
ICP_QAT_HW_CIPHER_ALGO_SM4 = 10,
ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = 11,
- ICP_QAT_HW_CIPHER_DELIMITER = 12
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256 = 12,
+ ICP_QAT_HW_CIPHER_DELIMITER = 13
};
enum icp_qat_hw_cipher_mode {
@@ -308,6 +322,7 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_KASUMI_BLK_SZ 8
#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_256_BLK_SZ 8
#define ICP_QAT_HW_NULL_KEY_SZ 256
#define ICP_QAT_HW_DES_KEY_SZ 8
#define ICP_QAT_HW_3DES_KEY_SZ 24
@@ -343,6 +358,8 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_SPC_CTR_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_ICV_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_AAD_MAX_LOG 14
+#define ICP_QAT_HW_ZUC_256_KEY_SZ 32
+#define ICP_QAT_HW_ZUC_256_IV_SZ 24
#define ICP_QAT_HW_CIPHER_MAX_KEY_SZ ICP_QAT_HW_AES_256_F8_KEY_SZ
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
index df47767749..2ff87484f6 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -199,6 +199,7 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
header->serv_specif_flags, 0);
break;
case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3:
+ case ICP_QAT_HW_CIPHER_ALGO_ZUC_256:
ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
ICP_QAT_FW_LA_NO_PROTO);
ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index bc53e2e0f1..421f5994df 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -204,6 +204,7 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
uint32_t legacy_size = sizeof(qat_sym_crypto_legacy_caps_gen3);
capa_num = size/sizeof(struct rte_cryptodev_capabilities);
legacy_capa_num = legacy_size/sizeof(struct rte_cryptodev_capabilities);
+ struct rte_cryptodev_capabilities *cap;
if (unlikely(qat_legacy_capa))
size = size + legacy_size;
@@ -268,6 +269,25 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
+
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3))) {
+ cap = addr + curr_capa;
+ cap->sym.auth.key_size.max = 32;
+ cap->sym.auth.iv_size.max = 25;
+ cap->sym.auth.iv_size.increment = 1;
+ cap->sym.auth.digest_size.max = 16;
+ cap->sym.auth.digest_size.increment = 4;
+ }
+ if (internals->qat_dev->has_wireless_slice && (
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ cap = addr + curr_capa;
+ cap->sym.cipher.key_size.max = 32;
+ cap->sym.cipher.iv_size.max = 25;
+ cap->sym.cipher.iv_size.increment = 1;
+ }
curr_capa++;
}
internals->qat_dev_capabilities = internals->capa_mz->addr;
@@ -292,7 +312,8 @@ enqueue_one_aead_job_gen3(struct qat_sym_session *ctx,
*/
cipher_param = (void *)&req->serv_specif_rqpars;
- qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req);
+ qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req,
+ ctx->is_zuc256);
cipher_param->cipher_offset = ofs.ofs.cipher.head;
cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
ofs.ofs.cipher.tail;
@@ -339,7 +360,7 @@ enqueue_one_auth_job_gen3(struct qat_sym_session *ctx,
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_64) {
/* AES-GMAC */
qat_set_cipher_iv(cipher_param, auth_iv, ctx->auth_iv.length,
- req);
+ req, ctx->is_zuc256);
}
/* Fill separate Content Descriptor for this op */
@@ -480,11 +501,14 @@ qat_sym_build_op_auth_gen3(void *in_op, struct qat_sym_session *ctx,
}
static int
-qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
+qat_sym_crypto_set_session_gen3(void *cdev, void *session)
{
struct qat_sym_session *ctx = session;
enum rte_proc_type_t proc_type = rte_eal_process_type();
int ret;
+ struct qat_cryptodev_private *internals;
+
+ internals = ((struct rte_cryptodev *)cdev)->data->dev_private;
if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID)
return -EINVAL;
@@ -517,6 +541,22 @@ qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
ctx->qat_cipher_alg ==
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) {
qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ ((ctx->aes_cmac ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+ (ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
+ ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256))) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) &&
+ ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx,
+ 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
}
ret = 0;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index de72383d4b..e267cd882b 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -174,7 +174,7 @@ enqueue_one_aead_job_gen4(struct qat_sym_session *ctx,
* operation
*/
qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length,
- req);
+ req, ctx->is_zuc256);
cipher_param->cipher_offset = ofs.ofs.cipher.head;
cipher_param->cipher_length = data_len -
ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index b8ddf42d6f..f8bfb48112 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -116,7 +116,10 @@ qat_auth_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 ||
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 ||
- ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3) {
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) {
if (unlikely((op->sym->auth.data.offset % BYTE_LENGTH != 0) ||
(op->sym->auth.data.length % BYTE_LENGTH != 0)))
return -EINVAL;
@@ -131,7 +134,8 @@ qat_cipher_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI ||
- ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
if (unlikely((op->sym->cipher.data.length % BYTE_LENGTH != 0) ||
((op->sym->cipher.data.offset %
BYTE_LENGTH) != 0)))
@@ -588,10 +592,31 @@ qat_sym_convert_op_to_vec_aead(struct rte_crypto_op *op,
return 0;
}
+static inline void
+zuc256_modify_iv(uint8_t *iv)
+{
+ uint8_t iv_tmp[8];
+
+ iv_tmp[0] = iv[16];
+ /* pack the last 8 bytes of IV to 6 bytes.
+ * discard the 2 MSB bits of each byte
+ */
+ iv_tmp[1] = (((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3));
+ iv_tmp[2] = (((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf));
+ iv_tmp[3] = (((iv[19] & 0x3) << 6) | (iv[20] & 0x3f));
+
+ iv_tmp[4] = (((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3));
+ iv_tmp[5] = (((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf));
+ iv_tmp[6] = (((iv[23] & 0x3) << 6) | (iv[24] & 0x3f));
+
+ memcpy(iv + 16, iv_tmp, 8);
+}
+
static __rte_always_inline void
qat_set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len,
- struct icp_qat_fw_la_bulk_req *qat_req)
+ struct icp_qat_fw_la_bulk_req *qat_req,
+ uint8_t is_zuc256)
{
/* copy IV into request if it fits */
if (iv_len <= sizeof(cipher_param->u.cipher_IV_array))
@@ -602,6 +627,9 @@ qat_set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
qat_req->comn_hdr.serv_specif_flags,
ICP_QAT_FW_CIPH_IV_64BIT_PTR);
cipher_param->u.s.cipher_IV_ptr = iv_ptr->iova;
+
+ if (is_zuc256 && iv_len == 25)
+ zuc256_modify_iv(iv_ptr->va);
}
}
@@ -626,7 +654,7 @@ enqueue_one_cipher_job_gen1(struct qat_sym_session *ctx,
cipher_param = (void *)&req->serv_specif_rqpars;
/* cipher IV */
- qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req);
+ qat_set_cipher_iv(cipher_param, iv, ctx->cipher_iv.length, req, ctx->is_zuc256);
cipher_param->cipher_offset = ofs.ofs.cipher.head;
cipher_param->cipher_length = data_len - ofs.ofs.cipher.head -
ofs.ofs.cipher.tail;
@@ -664,6 +692,9 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
@@ -719,7 +750,7 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
cipher_param->cipher_offset = ofs.ofs.cipher.head;
cipher_param->cipher_length = cipher_len;
- qat_set_cipher_iv(cipher_param, cipher_iv, ctx->cipher_iv.length, req);
+ qat_set_cipher_iv(cipher_param, cipher_iv, ctx->cipher_iv.length, req, ctx->is_zuc256);
auth_param->auth_off = ofs.ofs.auth.head;
auth_param->auth_len = auth_len;
@@ -746,6 +777,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 208b7e0ba6..ae86cc005e 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -270,6 +270,8 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
struct rte_crypto_va_iova_ptr digest;
union rte_crypto_sym_ofs ofs;
int32_t total_len;
+ struct rte_cryptodev *cdev;
+ struct qat_cryptodev_private *internals;
in_sgl.vec = in_vec;
out_sgl.vec = out_vec;
@@ -284,6 +286,16 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv.va);
+
+ cdev = rte_cryptodev_pmd_get_dev(ctx->dev_id);
+ internals = cdev->data->dev_private;
+
+ if (internals->qat_dev->has_wireless_slice && !ctx->is_gmac)
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ req->comn_hdr.serv_specif_flags, 0);
+
total_len = qat_sym_build_req_set_data(req, in_op, cookie,
in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num);
if (unlikely(total_len < 0)) {
@@ -374,6 +386,9 @@ qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv.va);
+
total_len = qat_sym_build_req_set_data(req, in_op, cookie,
in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num);
if (unlikely(total_len < 0)) {
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..ebdad0bd67 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -379,7 +379,9 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
struct rte_crypto_cipher_xform *cipher_xform = NULL;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
- int ret;
+ int ret, is_wireless = 0;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
/* Get cipher xform from crypto xform chain */
cipher_xform = qat_get_cipher_xform(xform);
@@ -416,6 +418,8 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_NULL:
session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
@@ -533,6 +537,10 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (cipher_xform->key.length == ICP_QAT_HW_ZUC_256_KEY_SZ)
+ session->is_zuc256 = 1;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_AES_XTS:
if ((cipher_xform->key.length/2) == ICP_QAT_HW_AES_192_KEY_SZ) {
@@ -587,6 +595,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
+ if (is_wireless) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+ /* Force usage of Wireless Cipher slice */
+ ICP_QAT_FW_USE_WCP_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WCP_SLICE);
+ session->is_wireless = 1;
+ }
+
return 0;
error_out:
@@ -820,9 +839,16 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
struct qat_cryptodev_private *internals = dev->data->dev_private;
const uint8_t *key_data = auth_xform->key.data;
- uint8_t key_length = auth_xform->key.length;
+ uint16_t key_length = auth_xform->key.length;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+ struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl =
+ (struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *)
+ session->fw_req.cd_ctrl.content_desc_ctrl_lw;
+ uint8_t hash_flag = 0;
+ int is_wireless = 0;
session->aes_cmac = 0;
session->auth_key_length = auth_xform->key.length;
@@ -898,6 +924,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_AES_CMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ }
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
if (qat_sym_validate_aes_key(auth_xform->key.length,
@@ -918,6 +948,11 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
break;
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS;
+ }
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_MD5;
@@ -934,7 +969,35 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
rte_cryptodev_get_auth_algo_string(auth_xform->algo));
return -ENOTSUP;
}
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ if (key_length == ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ)
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ else if (key_length == ICP_QAT_HW_ZUC_256_KEY_SZ) {
+ switch (auth_xform->digest_length) {
+ case 4:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32;
+ break;
+ case 8:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64;
+ break;
+ case 16:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid digest length: %d",
+ auth_xform->digest_length);
+ return -ENOTSUP;
+ }
+ session->is_zuc256 = 1;
+ } else {
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
+ }
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS;
+ } else
+ session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
break;
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
@@ -1002,6 +1065,21 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
return -EINVAL;
}
+ if (is_wireless) {
+ if (!session->aes_cmac) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+
+ /* Set Hash Flags in LW 28 */
+ cd_ctrl->hash_flags |= hash_flag;
+ }
+ /* Force usage of Wireless Auth slice */
+ ICP_QAT_FW_USE_WAT_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WAT_SLICE);
+ }
+
return 0;
}
@@ -1204,6 +1282,15 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
return QAT_HW_ROUND_UP(ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
@@ -1286,6 +1373,10 @@ static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
return ICP_QAT_HW_AES_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_MD5:
return QAT_MD5_CBLOCK;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return ICP_QAT_HW_ZUC_256_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum block size in this case */
return QAT_SHA512_CBLOCK;
@@ -2040,7 +2131,8 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2
|| cdesc->qat_cipher_alg ==
- ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3
+ || cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
cdesc->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
} else if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT)
@@ -2075,6 +2167,17 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
cipher_cd_ctrl->cipher_state_sz =
ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ } else if (cdesc->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ if (cdesc->cipher_iv.length != 23 && cdesc->cipher_iv.length != 25) {
+ QAT_LOG(ERR, "Invalid IV length for ZUC256, must be 23 or 25.");
+ return -EINVAL;
+ }
+ total_key_size = ICP_QAT_HW_ZUC_256_KEY_SZ +
+ ICP_QAT_HW_ZUC_256_IV_SZ;
+ cipher_cd_ctrl->cipher_state_sz =
+ ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
} else {
total_key_size = cipherkeylen;
cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
@@ -2246,6 +2349,9 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
@@ -2519,7 +2625,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cdesc->aad_len = aad_length;
break;
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
+ if (!cdesc->is_wireless)
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2);
state2_size = ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ;
@@ -2540,10 +2647,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
auth_param->hash_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
- hash->auth_config.config =
- ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
- cdesc->qat_hash_alg, digestsize);
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ if (!cdesc->is_wireless) {
+ hash->auth_config.config =
+ ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
+ cdesc->qat_hash_alg, digestsize);
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ }
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3);
state2_size = ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ;
@@ -2554,6 +2663,18 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cd_extra_size += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ;
auth_param->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ state1_size = qat_hash_get_state1_size(cdesc->qat_hash_alg);
+ state2_size = ICP_QAT_HW_ZUC_256_STATE2_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size
+ + ICP_QAT_HW_ZUC_256_IV_SZ);
+
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ cd_extra_size += ICP_QAT_HW_ZUC_256_IV_SZ;
+ auth_param->hash_state_sz = ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_MD5:
#ifdef RTE_QAT_OPENSSL
@@ -2740,6 +2861,9 @@ int qat_sym_validate_zuc_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
case ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ:
*alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3;
break;
+ case ICP_QAT_HW_ZUC_256_KEY_SZ:
+ *alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_256;
+ break;
default:
return -EINVAL;
}
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9209e2e8df..2e25c90342 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -140,6 +140,8 @@ struct qat_sym_session {
uint8_t is_auth;
uint8_t is_cnt_zero;
/* Some generations need different setup of counter */
+ uint8_t is_zuc256;
+ uint8_t is_wireless;
uint32_t slice_types;
enum qat_sym_proto_flag qat_proto_flag;
qat_sym_build_request_t build_request[2];
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 3/4] crypto/qat: add new gen3 CMAC macros
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
2023-12-19 15:51 ` [PATCH 1/4] crypto/qat: add new " Ciara Power
2023-12-19 15:51 ` [PATCH 2/4] crypto/qat: add zuc256 wireless slice for gen3 Ciara Power
@ 2023-12-19 15:51 ` Ciara Power
2023-12-19 15:51 ` [PATCH 4/4] crypto/qat: disable asym and compression for new gen3 device Ciara Power
` (2 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2023-12-19 15:51 UTC (permalink / raw)
To: dev; +Cc: Ciara Power, Kai Ji
The new QAT GEN3 device uses dedicated macros for CMAC values, rather
than reusing the XCBC_MAC ones.
On the new gen3 device the wireless slice handles CMAC, so no key
precomputes are required in SW.
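As an aside, the new content-descriptor layout this enables can be sketched as below: state1 is zeroed and the raw AES key is copied in directly as state2, with no XCBC-style subkey precompute in software. This is an illustrative sketch only; the macro names and sizes are assumptions mirroring the patch, not the driver's definitions.

```c
#include <stdint.h>
#include <string.h>

#define CMAC_STATE1_SZ 16 /* assumed, mirrors ICP_QAT_HW_AES_CMAC_STATE1_SZ */
#define CMAC_STATE2_SZ 16 /* assumed, mirrors ICP_QAT_HW_AES_128_CMAC_STATE2_SZ */

/* Sketch of the wireless-slice CMAC setup: zero state1, then place the
 * raw 128-bit AES key after it as state2. Returns bytes consumed in the
 * content descriptor. */
static size_t fill_cmac_cd(uint8_t *cd, const uint8_t *key, size_t keylen)
{
	memset(cd, 0, CMAC_STATE1_SZ);
	memcpy(cd + CMAC_STATE1_SZ, key, keylen);
	return CMAC_STATE1_SZ + CMAC_STATE2_SZ;
}
```

Contrast this with the XCBC_MAC path, where software derives subkeys before writing the descriptor.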
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/common/qat/qat_adf/icp_qat_hw.h | 4 +++-
drivers/crypto/qat/qat_sym_session.c | 28 +++++++++++++++++++++----
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index dfd0ea133c..b0a6126271 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -74,7 +74,7 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
- ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC = 22,
ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
@@ -179,6 +179,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CMAC_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -207,6 +208,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+#define ICP_QAT_HW_AES_128_CMAC_STATE2_SZ 16
struct icp_qat_hw_auth_sha512 {
struct icp_qat_hw_auth_setup inner_setup;
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index ebdad0bd67..b1649b8d18 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -922,11 +922,20 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
break;
case RTE_CRYPTO_AUTH_AES_CMAC:
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
- if (internals->qat_dev->has_wireless_slice) {
- is_wireless = 1;
- session->is_wireless = 1;
+ if (!internals->qat_dev->has_wireless_slice) {
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+ break;
+ }
+ is_wireless = 1;
+ session->is_wireless = 1;
+ switch (key_length) {
+ case ICP_QAT_HW_AES_128_KEY_SZ:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
}
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
@@ -1309,6 +1318,9 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_NULL:
return QAT_HW_ROUND_UP(ICP_QAT_HW_NULL_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_CMAC_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum state1 size in this case */
return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
@@ -1345,6 +1357,7 @@ static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_MD5:
return ICP_QAT_HW_MD5_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
return ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum digest size in this case */
@@ -2353,6 +2366,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3
@@ -2593,6 +2607,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
return -EFAULT;
}
break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ state1_size = ICP_QAT_HW_AES_CMAC_STATE1_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size);
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ state2_size = ICP_QAT_HW_AES_128_CMAC_STATE2_SZ;
+ break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM;
--
2.25.1
* [PATCH 4/4] crypto/qat: disable asym and compression for new gen3 device
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
` (2 preceding siblings ...)
2023-12-19 15:51 ` [PATCH 3/4] crypto/qat: add new gen3 CMAC macros Ciara Power
@ 2023-12-19 15:51 ` Ciara Power
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
5 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2023-12-19 15:51 UTC (permalink / raw)
To: dev; +Cc: Ciara Power, Kai Ji, Fan Zhang, Ashish Gupta
Currently only symmetric crypto has been added for the new gen3 device,
so add a check that disables the asym and comp PMDs for this device.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/compress/qat/qat_comp_pmd.c | 3 ++-
drivers/crypto/qat/qat_asym.c | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 6fb8cf69be..bdc35b5949 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -687,7 +687,8 @@ qat_comp_dev_create(struct qat_pci_device *qat_pci_dev,
qat_pci_dev->name, "comp");
QAT_LOG(DEBUG, "Creating QAT COMP device %s", name);
- if (qat_comp_gen_ops->compressdev_ops == NULL) {
+ if (qat_comp_gen_ops->compressdev_ops == NULL ||
+ qat_dev_instance->pci_dev->id.device_id == 0x578b) {
QAT_LOG(DEBUG, "Device %s does not support compression", name);
return -ENOTSUP;
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 2bf3060278..036813e977 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -1522,7 +1522,8 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
qat_pci_dev->name, "asym");
QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
- if (gen_dev_ops->cryptodev_ops == NULL) {
+ if (gen_dev_ops->cryptodev_ops == NULL ||
+ qat_dev_instance->pci_dev->id.device_id == 0x578b) {
QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
name);
return -(EFAULT);
--
2.25.1
* [PATCH v2 0/4] add new QAT gen3 and gen5
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
` (3 preceding siblings ...)
2023-12-19 15:51 ` [PATCH 4/4] crypto/qat: disable asym and compression for new gen3 device Ciara Power
@ 2024-02-23 15:12 ` Ciara Power
2024-02-23 15:12 ` [PATCH v2 1/4] common/qat: add new gen3 device Ciara Power
` (4 more replies)
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
5 siblings, 5 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-23 15:12 UTC (permalink / raw)
To: dev; +Cc: gakhil, kai.ji, arkadiuszx.kusztal, Ciara Power
This patchset adds support for two new QAT devices:
a new GEN3 device, and a GEN5 device, both of which have
wireless slice support for algorithms such as ZUC-256.
Symmetric, asymmetric and compression are all supported
for these devices.
v2:
- New patch added for gen5 device that reuses gen4 code,
and new gen3 wireless slice changes.
- Removed patch to disable asymmetric and compression.
- Documentation updates added.
- Fixed ZUC-256 IV modification for raw API path.
- Fixed setting extended protocol flag bit position.
- Added check for ZUC-256 wireless slice in slice map.
Ciara Power (4):
common/qat: add new gen3 device
common/qat: add zuc256 wireless slice for gen3
common/qat: add new gen3 CMAC macros
common/qat: add gen5 device
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 6 +
doc/guides/rel_notes/release_24_03.rst | 7 +
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++
drivers/common/qat/qat_adf/icp_qat_hw.h | 26 +-
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 19 ++
drivers/common/qat/qat_device.h | 2 +
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 63 ++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 40 ++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 +++
drivers/crypto/qat/qat_sym_session.c | 177 ++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
24 files changed, 889 insertions(+), 51 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
--
2.25.1
* [PATCH v2 1/4] common/qat: add new gen3 device
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
@ 2024-02-23 15:12 ` Ciara Power
2024-02-23 15:12 ` [PATCH v2 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
` (3 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-23 15:12 UTC (permalink / raw)
To: dev; +Cc: gakhil, kai.ji, arkadiuszx.kusztal, Ciara Power
Add new gen3 QAT device ID.
This device has a wireless slice, but other gen3 devices do not, so a
flag must be set to indicate this wireless-enabled device.
Capabilities for this device differ slightly from the base gen3
capabilities; some are removed from the list for this device.
Symmetric, asymmetric and compression services are enabled.
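The device-ID handling this introduces can be sketched as follows: 0x578b maps to GEN3 like 0x18a1, but additionally reports a wireless slice. The enum values here are placeholders for this sketch, not the driver's `qat_device_gen` definitions.

```c
#include <stdint.h>

/* Illustrative generation values, assumed for this sketch only. */
enum qat_gen_sketch { QAT_GEN_UNKNOWN = 0, QAT_GEN3 = 3 };

/* Map a PCI device ID to a QAT generation (subset of the patch's pick_gen). */
static enum qat_gen_sketch pick_gen(uint16_t dev_id)
{
	switch (dev_id) {
	case 0x18a1:
	case 0x578b:
		return QAT_GEN3;
	default:
		return QAT_GEN_UNKNOWN;
	}
}

/* Only the new 0x578b gen3 device has the wireless slice. */
static int wireless_slice_support(uint16_t dev_id)
{
	return dev_id == 0x578b;
}
```

Keeping the wireless-slice check separate from the generation lookup means later devices with wireless slices only need to extend one predicate.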
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v2: Added documentation updates.
---
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 2 ++
doc/guides/rel_notes/release_24_03.rst | 4 ++++
drivers/common/qat/qat_device.c | 13 +++++++++++++
drivers/common/qat/qat_device.h | 2 ++
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 11 +++++++++++
6 files changed, 33 insertions(+)
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 475c4a9f9f..338b1bf623 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -10,6 +10,7 @@ support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C62x``
* ``Intel QuickAssist Technology C3xxx``
* ``Intel QuickAssist Technology DH895x``
+* ``Intel QuickAssist Technology 300xx``
Features
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index dc6b95165d..51190e12d6 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -26,6 +26,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology D15xx``
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
+* ``Intel QuickAssist Technology 300xx``
Features
@@ -177,6 +178,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 401xxx``
+* ``Intel QuickAssist Technology 300xx``
The QAT ASYM PMD has support for:
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 879bb4944c..55517eabd8 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -131,6 +131,10 @@ New Features
* Added support for comparing result between packet fields or value.
* Added support for accumulating value of field into another one.
+* **Updated Intel QuickAssist Technology driver.**
+
+ * Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
+
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index f55dc3c6f0..0e7d387d78 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -53,6 +53,9 @@ static const struct rte_pci_id pci_id_qat_map[] = {
{
RTE_PCI_DEVICE(0x8086, 0x18a1),
},
+ {
+ RTE_PCI_DEVICE(0x8086, 0x578b),
+ },
{
RTE_PCI_DEVICE(0x8086, 0x4941),
},
@@ -194,6 +197,7 @@ pick_gen(const struct rte_pci_device *pci_dev)
case 0x18ef:
return QAT_GEN2;
case 0x18a1:
+ case 0x578b:
return QAT_GEN3;
case 0x4941:
case 0x4943:
@@ -205,6 +209,12 @@ pick_gen(const struct rte_pci_device *pci_dev)
}
}
+static int
+wireless_slice_support(uint16_t pci_dev_id)
+{
+ return pci_dev_id == 0x578b;
+}
+
struct qat_pci_device *
qat_pci_device_allocate(struct rte_pci_device *pci_dev,
struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -282,6 +292,9 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
qat_dev->qat_dev_id = qat_dev_id;
qat_dev->qat_dev_gen = qat_dev_gen;
+ if (wireless_slice_support(pci_dev->id.device_id))
+ qat_dev->has_wireless_slice = 1;
+
ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
NOT_NULL(ops_hw->qat_dev_get_misc_bar, goto error,
"QAT internal error! qat_dev_get_misc_bar function not set");
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index aa7988bb74..43e4752812 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -135,6 +135,8 @@ struct qat_pci_device {
/**< Per generation specific information */
uint32_t slice_map;
/**< Map of the crypto and compression slices */
+ uint16_t has_wireless_slice;
+ /**< Wireless Slices supported */
};
struct qat_gen_hw_data {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index 02bcdb06b1..bc53e2e0f1 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -255,6 +255,17 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
RTE_CRYPTO_AUTH_SM3_HMAC))) {
continue;
}
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_KASUMI_F9) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_KASUMI_F8) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_CBC) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_DOCSISBPI)))
+ continue;
+
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
curr_capa++;
--
2.25.1
* [PATCH v2 2/4] common/qat: add zuc256 wireless slice for gen3
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
2024-02-23 15:12 ` [PATCH v2 1/4] common/qat: add new gen3 device Ciara Power
@ 2024-02-23 15:12 ` Ciara Power
2024-02-23 15:12 ` [PATCH v2 3/4] common/qat: add new gen3 CMAC macros Ciara Power
` (2 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-23 15:12 UTC (permalink / raw)
To: dev; +Cc: gakhil, kai.ji, arkadiuszx.kusztal, Ciara Power
The new gen3 device handles wireless algorithms on its wireless slices.
Based on the device's wireless slice support, set the required flags to
move these algorithms to the wireless slice.
One of the algorithms supported on the wireless slices is ZUC-256;
support is added for it, along with a modified capability for the
device.
The device supports a 24-byte IV for ZUC-256, with iv[20]
being ignored in the register.
For a 25-byte IV, compress it into 23 bytes.
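The IV packing can be sketched in isolation as below, mirroring the patch's zuc256_modify_iv(): bytes 17..24 of the 25-byte IV are packed by discarding the two MSBs of each byte. This is an illustrative standalone version under the assumption of an in-place 25-byte buffer; unlike the patch it zero-initializes the scratch buffer, so the trailing byte is deterministic.

```c
#include <stdint.h>
#include <string.h>

/* Sketch of the ZUC-256 IV compression: keep iv[16] as-is, then pack the
 * last 8 IV bytes into 6 by dropping the 2 MSBs of each byte. */
static void pack_zuc256_iv(uint8_t *iv)
{
	uint8_t t[8] = {0};

	t[0] = iv[16];
	t[1] = (uint8_t)(((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3));
	t[2] = (uint8_t)(((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf));
	t[3] = (uint8_t)(((iv[19] & 0x3) << 6) | (iv[20] & 0x3f));
	t[4] = (uint8_t)(((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3));
	t[5] = (uint8_t)(((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf));
	t[6] = (uint8_t)(((iv[23] & 0x3) << 6) | (iv[24] & 0x3f));

	memcpy(iv + 16, t, 8);
}
```

Each source byte contributes only its low 6 bits, so three packed bytes carry four source bytes' worth of payload.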
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v2:
- Fixed setting extended protocol flag bit position.
- Added slice map check for ZUC256 wireless slice.
- Fixed IV modification for ZUC256 in raw datapath.
- Added increment size for ZUC256 capabilities.
- Added release note.
---
doc/guides/rel_notes/release_24_03.rst | 1 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++++
drivers/common/qat/qat_adf/icp_qat_hw.h | 24 +++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 52 ++++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 34 ++++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 ++++++
drivers/crypto/qat/qat_sym_session.c | 142 +++++++++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
10 files changed, 312 insertions(+), 23 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 55517eabd8..0dee1ff104 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -134,6 +134,7 @@ New Features
* **Updated Intel QuickAssist Technology driver.**
* Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
+ * Enabled ZUC256 cipher and auth algorithm for wireless slice enabled GEN3 device.
* **Updated Marvell cnxk crypto driver.**
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h
index 3aa17ae041..dd7c926140 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw.h
@@ -75,7 +75,8 @@ struct icp_qat_fw_comn_req_hdr {
uint8_t service_type;
uint8_t hdr_flags;
uint16_t serv_specif_flags;
- uint16_t comn_req_flags;
+ uint8_t comn_req_flags;
+ uint8_t ext_flags;
};
struct icp_qat_fw_comn_req_rqpars {
@@ -176,9 +177,6 @@ struct icp_qat_fw_comn_resp {
#define QAT_COMN_PTR_TYPE_SGL 0x1
#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
-#define QAT_COMN_EXT_FLAGS_BITPOS 8
-#define QAT_COMN_EXT_FLAGS_MASK 0x1
-#define QAT_COMN_EXT_FLAGS_USED 0x1
#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index 70f0effa62..134c309355 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -81,6 +81,15 @@ struct icp_qat_fw_la_bulk_req {
#define ICP_QAT_FW_LA_PARTIAL_END 2
#define QAT_LA_PARTIAL_BITPOS 0
#define QAT_LA_PARTIAL_MASK 0x3
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS 0
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS 1
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK 0x1
+#define QAT_LA_USE_WCP_SLICE 1
+#define QAT_LA_USE_WCP_SLICE_BITPOS 2
+#define QAT_LA_USE_WCP_SLICE_MASK 0x1
+#define QAT_LA_USE_WAT_SLICE_BITPOS 3
+#define QAT_LA_USE_WAT_SLICE 1
+#define QAT_LA_USE_WAT_SLICE_MASK 0x1
#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
cmp_auth, ret_auth, update_state, \
ciph_iv, ciphcfg, partial) \
@@ -188,6 +197,21 @@ struct icp_qat_fw_la_bulk_req {
QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
QAT_LA_PARTIAL_MASK)
+#define ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK)
+
+#define ICP_QAT_FW_USE_WCP_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WCP_SLICE_BITPOS, \
+ QAT_LA_USE_WCP_SLICE_MASK)
+
+#define ICP_QAT_FW_USE_WAT_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WAT_SLICE_BITPOS, \
+ QAT_LA_USE_WAT_SLICE_MASK)
+
#define QAT_FW_LA_MODE2 1
#define QAT_FW_LA_NO_MODE2 0
#define QAT_FW_LA_MODE2_MASK 0x1
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 33756d512d..4651fb90bb 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -21,7 +21,8 @@ enum icp_qat_slice_mask {
ICP_ACCEL_MASK_CRYPTO1_SLICE = 0x100,
ICP_ACCEL_MASK_CRYPTO2_SLICE = 0x200,
ICP_ACCEL_MASK_SM3_SLICE = 0x400,
- ICP_ACCEL_MASK_SM4_SLICE = 0x800
+ ICP_ACCEL_MASK_SM4_SLICE = 0x800,
+ ICP_ACCEL_MASK_ZUC_256_SLICE = 0x2000,
};
enum icp_qat_hw_ae_id {
@@ -71,7 +72,16 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
ICP_QAT_HW_AUTH_ALGO_SHA3_384 = 18,
ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
- ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+ ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 = 26,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128 = 27,
+ ICP_QAT_HW_AUTH_ALGO_DELIMITER = 28
};
enum icp_qat_hw_auth_mode {
@@ -167,6 +177,9 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -191,6 +204,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_ZUC_256_STATE2_SZ 56
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
@@ -228,7 +242,8 @@ enum icp_qat_hw_cipher_algo {
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
ICP_QAT_HW_CIPHER_ALGO_SM4 = 10,
ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = 11,
- ICP_QAT_HW_CIPHER_DELIMITER = 12
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256 = 12,
+ ICP_QAT_HW_CIPHER_DELIMITER = 13
};
enum icp_qat_hw_cipher_mode {
@@ -308,6 +323,7 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_KASUMI_BLK_SZ 8
#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_256_BLK_SZ 8
#define ICP_QAT_HW_NULL_KEY_SZ 256
#define ICP_QAT_HW_DES_KEY_SZ 8
#define ICP_QAT_HW_3DES_KEY_SZ 24
@@ -343,6 +359,8 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_SPC_CTR_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_ICV_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_AAD_MAX_LOG 14
+#define ICP_QAT_HW_ZUC_256_KEY_SZ 32
+#define ICP_QAT_HW_ZUC_256_IV_SZ 24
#define ICP_QAT_HW_CIPHER_MAX_KEY_SZ ICP_QAT_HW_AES_256_F8_KEY_SZ
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
index df47767749..62874039a9 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -182,10 +182,8 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
session->fw_req.cd_ctrl.content_desc_ctrl_lw;
/* Set the Use Extended Protocol Flags bit in LW 1 */
- QAT_FIELD_SET(header->comn_req_flags,
- QAT_COMN_EXT_FLAGS_USED,
- QAT_COMN_EXT_FLAGS_BITPOS,
- QAT_COMN_EXT_FLAGS_MASK);
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags, QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
/* Set Hash Flags in LW 28 */
cd_ctrl->hash_flags |= hash_flag;
@@ -199,6 +197,7 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
header->serv_specif_flags, 0);
break;
case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3:
+ case ICP_QAT_HW_CIPHER_ALGO_ZUC_256:
ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
ICP_QAT_FW_LA_NO_PROTO);
ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index bc53e2e0f1..907c3ce3e2 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -204,6 +204,7 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
uint32_t legacy_size = sizeof(qat_sym_crypto_legacy_caps_gen3);
capa_num = size/sizeof(struct rte_cryptodev_capabilities);
legacy_capa_num = legacy_size/sizeof(struct rte_cryptodev_capabilities);
+ struct rte_cryptodev_capabilities *cap;
if (unlikely(qat_legacy_capa))
size = size + legacy_size;
@@ -255,6 +256,15 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
RTE_CRYPTO_AUTH_SM3_HMAC))) {
continue;
}
+
+ if (slice_map & ICP_ACCEL_MASK_ZUC_256_SLICE && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ continue;
+ }
+
if (internals->qat_dev->has_wireless_slice && (
check_auth_capa(&capabilities[iter],
RTE_CRYPTO_AUTH_KASUMI_F9) ||
@@ -268,6 +278,27 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
+
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3))) {
+ cap = addr + curr_capa;
+ cap->sym.auth.key_size.max = 32;
+ cap->sym.auth.key_size.increment = 16;
+ cap->sym.auth.iv_size.max = 25;
+ cap->sym.auth.iv_size.increment = 1;
+ cap->sym.auth.digest_size.max = 16;
+ cap->sym.auth.digest_size.increment = 4;
+ }
+ if (internals->qat_dev->has_wireless_slice && (
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ cap = addr + curr_capa;
+ cap->sym.cipher.key_size.max = 32;
+ cap->sym.cipher.key_size.increment = 16;
+ cap->sym.cipher.iv_size.max = 25;
+ cap->sym.cipher.iv_size.increment = 1;
+ }
curr_capa++;
}
internals->qat_dev_capabilities = internals->capa_mz->addr;
@@ -480,11 +511,14 @@ qat_sym_build_op_auth_gen3(void *in_op, struct qat_sym_session *ctx,
}
static int
-qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
+qat_sym_crypto_set_session_gen3(void *cdev, void *session)
{
struct qat_sym_session *ctx = session;
enum rte_proc_type_t proc_type = rte_eal_process_type();
int ret;
+ struct qat_cryptodev_private *internals;
+
+ internals = ((struct rte_cryptodev *)cdev)->data->dev_private;
if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID)
return -EINVAL;
@@ -517,6 +551,22 @@ qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
ctx->qat_cipher_alg ==
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) {
qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ ((ctx->aes_cmac ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+ (ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
+ ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256))) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) &&
+ ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx,
+ 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
}
ret = 0;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 911400e53b..ff7ba55c01 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -117,7 +117,10 @@ qat_auth_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 ||
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 ||
- ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3) {
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) {
if (unlikely((op->sym->auth.data.offset % BYTE_LENGTH != 0) ||
(op->sym->auth.data.length % BYTE_LENGTH != 0)))
return -EINVAL;
@@ -132,7 +135,8 @@ qat_cipher_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI ||
- ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
if (unlikely((op->sym->cipher.data.length % BYTE_LENGTH != 0) ||
((op->sym->cipher.data.offset %
BYTE_LENGTH) != 0)))
@@ -589,6 +593,26 @@ qat_sym_convert_op_to_vec_aead(struct rte_crypto_op *op,
return 0;
}
+static inline void
+zuc256_modify_iv(uint8_t *iv)
+{
+ uint8_t iv_tmp[8];
+
+ iv_tmp[0] = iv[16];
+ /* pack the last 8 bytes of IV to 6 bytes.
+ * discard the 2 MSB bits of each byte
+ */
+ iv_tmp[1] = (((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3));
+ iv_tmp[2] = (((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf));
+ iv_tmp[3] = (((iv[19] & 0x3) << 6) | (iv[20] & 0x3f));
+
+ iv_tmp[4] = (((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3));
+ iv_tmp[5] = (((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf));
+ iv_tmp[6] = (((iv[23] & 0x3) << 6) | (iv[24] & 0x3f));
+
+ memcpy(iv + 16, iv_tmp, 8);
+}
+
static __rte_always_inline void
qat_set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len,
@@ -665,6 +689,9 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
@@ -747,6 +774,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 208b7e0ba6..bdd1647ea2 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -248,6 +248,9 @@ qat_sym_build_op_cipher_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(cipher_iv.va);
+
enqueue_one_cipher_job_gen1(ctx, req, &cipher_iv, ofs, total_len, op_cookie);
qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv,
@@ -270,6 +273,8 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
struct rte_crypto_va_iova_ptr digest;
union rte_crypto_sym_ofs ofs;
int32_t total_len;
+ struct rte_cryptodev *cdev;
+ struct qat_cryptodev_private *internals;
in_sgl.vec = in_vec;
out_sgl.vec = out_vec;
@@ -284,6 +289,13 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ cdev = rte_cryptodev_pmd_get_dev(ctx->dev_id);
+ internals = cdev->data->dev_private;
+
+ if (internals->qat_dev->has_wireless_slice && !ctx->is_gmac)
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ req->comn_hdr.serv_specif_flags, 0);
+
total_len = qat_sym_build_req_set_data(req, in_op, cookie,
in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num);
if (unlikely(total_len < 0)) {
@@ -291,6 +303,9 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv.va);
+
enqueue_one_auth_job_gen1(ctx, req, &digest, &auth_iv, ofs,
total_len);
@@ -381,6 +396,11 @@ qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(cipher_iv.va);
+ zuc256_modify_iv(auth_iv.va);
+ }
+
enqueue_one_chain_job_gen1(ctx, req, in_sgl.vec, in_sgl.num,
out_sgl.vec, out_sgl.num, &cipher_iv, &digest, &auth_iv,
ofs, total_len, cookie);
@@ -507,6 +527,9 @@ qat_sym_dp_enqueue_single_cipher_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(iv->va);
+
enqueue_one_cipher_job_gen1(ctx, req, iv, ofs, (uint32_t)data_len, cookie);
qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, iv,
@@ -563,6 +586,10 @@ qat_sym_dp_enqueue_cipher_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(vec->iv[i].va);
+
enqueue_one_cipher_job_gen1(ctx, req, &vec->iv[i], ofs,
(uint32_t)data_len, cookie);
tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
@@ -613,6 +640,9 @@ qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv->va);
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -678,6 +708,9 @@ qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(vec->auth_iv[i].va);
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -733,6 +766,11 @@ qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(cipher_iv->va);
+ zuc256_modify_iv(auth_iv->va);
+ }
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -801,6 +839,11 @@ qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(vec->iv[i].va);
+ zuc256_modify_iv(vec->auth_iv[i].va);
+ }
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..ebdad0bd67 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -379,7 +379,9 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
struct rte_crypto_cipher_xform *cipher_xform = NULL;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
- int ret;
+ int ret, is_wireless = 0;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
/* Get cipher xform from crypto xform chain */
cipher_xform = qat_get_cipher_xform(xform);
@@ -416,6 +418,8 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_NULL:
session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
@@ -533,6 +537,10 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (cipher_xform->key.length == ICP_QAT_HW_ZUC_256_KEY_SZ)
+ session->is_zuc256 = 1;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_AES_XTS:
if ((cipher_xform->key.length/2) == ICP_QAT_HW_AES_192_KEY_SZ) {
@@ -587,6 +595,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
+ if (is_wireless) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+ /* Force usage of Wireless Cipher slice */
+ ICP_QAT_FW_USE_WCP_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WCP_SLICE);
+ session->is_wireless = 1;
+ }
+
return 0;
error_out:
@@ -820,9 +839,16 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
struct qat_cryptodev_private *internals = dev->data->dev_private;
const uint8_t *key_data = auth_xform->key.data;
- uint8_t key_length = auth_xform->key.length;
+ uint16_t key_length = auth_xform->key.length;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+ struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl =
+ (struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *)
+ session->fw_req.cd_ctrl.content_desc_ctrl_lw;
+ uint8_t hash_flag = 0;
+ int is_wireless = 0;
session->aes_cmac = 0;
session->auth_key_length = auth_xform->key.length;
@@ -898,6 +924,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_AES_CMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ }
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
if (qat_sym_validate_aes_key(auth_xform->key.length,
@@ -918,6 +948,11 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
break;
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS;
+ }
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_MD5;
@@ -934,7 +969,35 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
rte_cryptodev_get_auth_algo_string(auth_xform->algo));
return -ENOTSUP;
}
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ if (key_length == ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ)
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ else if (key_length == ICP_QAT_HW_ZUC_256_KEY_SZ) {
+ switch (auth_xform->digest_length) {
+ case 4:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32;
+ break;
+ case 8:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64;
+ break;
+ case 16:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid digest length: %d",
+ auth_xform->digest_length);
+ return -ENOTSUP;
+ }
+ session->is_zuc256 = 1;
+ } else {
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
+ }
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS;
+ } else
+ session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
break;
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
@@ -1002,6 +1065,21 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
return -EINVAL;
}
+ if (is_wireless) {
+ if (!session->aes_cmac) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+
+ /* Set Hash Flags in LW 28 */
+ cd_ctrl->hash_flags |= hash_flag;
+ }
+ /* Force usage of Wireless Auth slice */
+ ICP_QAT_FW_USE_WAT_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WAT_SLICE);
+ }
+
return 0;
}
@@ -1204,6 +1282,15 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
return QAT_HW_ROUND_UP(ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
@@ -1286,6 +1373,10 @@ static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
return ICP_QAT_HW_AES_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_MD5:
return QAT_MD5_CBLOCK;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return ICP_QAT_HW_ZUC_256_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum block size in this case */
return QAT_SHA512_CBLOCK;
@@ -2040,7 +2131,8 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2
|| cdesc->qat_cipher_alg ==
- ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3
+ || cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
cdesc->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
} else if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT)
@@ -2075,6 +2167,17 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
cipher_cd_ctrl->cipher_state_sz =
ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ } else if (cdesc->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ if (cdesc->cipher_iv.length != 23 && cdesc->cipher_iv.length != 25) {
+ QAT_LOG(ERR, "Invalid IV length for ZUC256, must be 23 or 25.");
+ return -EINVAL;
+ }
+ total_key_size = ICP_QAT_HW_ZUC_256_KEY_SZ +
+ ICP_QAT_HW_ZUC_256_IV_SZ;
+ cipher_cd_ctrl->cipher_state_sz =
+ ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
} else {
total_key_size = cipherkeylen;
cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
@@ -2246,6 +2349,9 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
@@ -2519,7 +2625,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cdesc->aad_len = aad_length;
break;
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
+ if (!cdesc->is_wireless)
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2);
state2_size = ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ;
@@ -2540,10 +2647,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
auth_param->hash_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
- hash->auth_config.config =
- ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
- cdesc->qat_hash_alg, digestsize);
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ if (!cdesc->is_wireless) {
+ hash->auth_config.config =
+ ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
+ cdesc->qat_hash_alg, digestsize);
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ }
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3);
state2_size = ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ;
@@ -2554,6 +2663,18 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cd_extra_size += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ;
auth_param->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ state1_size = qat_hash_get_state1_size(cdesc->qat_hash_alg);
+ state2_size = ICP_QAT_HW_ZUC_256_STATE2_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size
+ + ICP_QAT_HW_ZUC_256_IV_SZ);
+
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ cd_extra_size += ICP_QAT_HW_ZUC_256_IV_SZ;
+ auth_param->hash_state_sz = ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_MD5:
#ifdef RTE_QAT_OPENSSL
@@ -2740,6 +2861,9 @@ int qat_sym_validate_zuc_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
case ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ:
*alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3;
break;
+ case ICP_QAT_HW_ZUC_256_KEY_SZ:
+ *alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_256;
+ break;
default:
return -EINVAL;
}
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9209e2e8df..2e25c90342 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -140,6 +140,8 @@ struct qat_sym_session {
uint8_t is_auth;
uint8_t is_cnt_zero;
/* Some generations need different setup of counter */
+ uint8_t is_zuc256;
+ uint8_t is_wireless;
uint32_t slice_types;
enum qat_sym_proto_flag qat_proto_flag;
qat_sym_build_request_t build_request[2];
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
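The ZUC-256 IV packing introduced by this patch can be exercised standalone. The sketch below copies the bit-twiddling from the patch's zuc256_modify_iv(): bytes 17..24 of the 25-byte IV each carry only 6 significant bits, so they are compressed into the 6 bytes following iv[16]. This is an illustrative copy, not the driver code itself; unlike the patch, it zero-initialises the scratch buffer so the trailing byte is deterministic.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative re-implementation of the patch's zuc256_modify_iv().
 * Assumes a 25-byte IV; bytes 17..24 hold 6-bit values that are packed
 * into 6 bytes after iv[16]. iv_tmp is zeroed here (an addition over the
 * patch) so iv[23] ends up 0 rather than indeterminate. */
static void zuc256_pack_iv(uint8_t *iv)
{
	uint8_t iv_tmp[8] = {0};

	iv_tmp[0] = iv[16];
	iv_tmp[1] = ((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3);
	iv_tmp[2] = ((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf);
	iv_tmp[3] = ((iv[19] & 0x3) << 6) | (iv[20] & 0x3f);
	iv_tmp[4] = ((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3);
	iv_tmp[5] = ((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf);
	iv_tmp[6] = ((iv[23] & 0x3) << 6) | (iv[24] & 0x3f);

	memcpy(iv + 16, iv_tmp, 8);
}
```

With all of iv[17..24] set to the maximum 6-bit value 0x3f, each packed byte comes out as 0xff, which makes the compression easy to verify by eye.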
* [PATCH v2 3/4] common/qat: add new gen3 CMAC macros
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
2024-02-23 15:12 ` [PATCH v2 1/4] common/qat: add new gen3 device Ciara Power
2024-02-23 15:12 ` [PATCH v2 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
@ 2024-02-23 15:12 ` Ciara Power
2024-02-23 15:12 ` [PATCH v2 4/4] common/qat: add gen5 device Ciara Power
2024-02-26 13:32 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ji, Kai
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-23 15:12 UTC (permalink / raw)
To: dev; +Cc: gakhil, kai.ji, arkadiuszx.kusztal, Ciara Power
The new QAT GEN3 device uses new macros for CMAC values, rather than
using XCBC_MAC ones.
The wireless slice handles CMAC in the new gen3 device, and no key
precomputes are required by SW.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
drivers/common/qat/qat_adf/icp_qat_hw.h | 4 +++-
drivers/crypto/qat/qat_sym_session.c | 28 +++++++++++++++++++++----
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 4651fb90bb..b99dde2176 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -75,7 +75,7 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
- ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC = 22,
ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
@@ -180,6 +180,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CMAC_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -208,6 +209,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+#define ICP_QAT_HW_AES_128_CMAC_STATE2_SZ 16
struct icp_qat_hw_auth_sha512 {
struct icp_qat_hw_auth_setup inner_setup;
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index ebdad0bd67..b1649b8d18 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -922,11 +922,20 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
break;
case RTE_CRYPTO_AUTH_AES_CMAC:
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
- if (internals->qat_dev->has_wireless_slice) {
- is_wireless = 1;
- session->is_wireless = 1;
+ if (!internals->qat_dev->has_wireless_slice) {
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+ break;
+ }
+ is_wireless = 1;
+ session->is_wireless = 1;
+ switch (key_length) {
+ case ICP_QAT_HW_AES_128_KEY_SZ:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
}
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
@@ -1309,6 +1318,9 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_NULL:
return QAT_HW_ROUND_UP(ICP_QAT_HW_NULL_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_CMAC_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum state1 size in this case */
return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
@@ -1345,6 +1357,7 @@ static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_MD5:
return ICP_QAT_HW_MD5_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
return ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum digest size in this case */
@@ -2353,6 +2366,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3
@@ -2593,6 +2607,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
return -EFAULT;
}
break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ state1_size = ICP_QAT_HW_AES_CMAC_STATE1_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size);
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ state2_size = ICP_QAT_HW_AES_128_CMAC_STATE2_SZ;
+ break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM;
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
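The content-descriptor layout this patch sets up for the new AES-128-CMAC case can be sketched in isolation: a zeroed 16-byte state1 region followed by the raw AES key as state2, with no software key precompute since the wireless slice derives the CMAC subkeys in hardware. The macro names in comments mirror the patch; the helper itself is hypothetical and illustrative only.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CMAC_STATE1_SZ 16 /* mirrors ICP_QAT_HW_AES_CMAC_STATE1_SZ */
#define CMAC_STATE2_SZ 16 /* mirrors ICP_QAT_HW_AES_128_CMAC_STATE2_SZ */

/* Hypothetical helper modelling the AES_128_CMAC branch of
 * qat_sym_cd_auth_set(): zero state1, copy the raw key as state2,
 * and report how many content-descriptor bytes were consumed. */
static size_t cmac_fill_cd(uint8_t *cd_cur_ptr, const uint8_t *key,
		size_t keylen)
{
	memset(cd_cur_ptr, 0, CMAC_STATE1_SZ);
	memcpy(cd_cur_ptr + CMAC_STATE1_SZ, key, keylen);
	return CMAC_STATE1_SZ + CMAC_STATE2_SZ;
}
```

Contrast this with the XCBC_MAC path used on devices without a wireless slice, where three derived keys must be precomputed in software before being written into the descriptor.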
* [PATCH v2 4/4] common/qat: add gen5 device
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
` (2 preceding siblings ...)
2024-02-23 15:12 ` [PATCH v2 3/4] common/qat: add new gen3 CMAC macros Ciara Power
@ 2024-02-23 15:12 ` Ciara Power
2024-02-26 13:32 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ji, Kai
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-23 15:12 UTC (permalink / raw)
To: dev
Cc: gakhil, kai.ji, arkadiuszx.kusztal, Ciara Power, Fan Zhang,
Ashish Gupta, Anatoly Burakov
Add new gen5 QAT device ID.
This device has a wireless slice, so a flag is set to indicate
that wireless is enabled on this device.
Aside from the wireless slices and some extra capabilities for
wireless algorithms, the device is functionally the same as gen4 and can
reuse most functions and macros.
Symmetric, asymmetric and compression services are enabled.
Signed-off-by: Ciara Power <ciara.power@intel.com>
---
v2:
- Fixed setting extended protocol flag bit position.
- Added slice map check for ZUC256 wireless slice.
- Fixed IV modification for ZUC256 in raw datapath.
- Added increment size for ZUC256 capabilities.
- Added release note.
---
doc/guides/cryptodevs/qat.rst | 4 +
doc/guides/rel_notes/release_24_03.rst | 6 +-
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 6 +
drivers/crypto/qat/qat_sym_session.c | 13 +-
15 files changed, 524 insertions(+), 30 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 51190e12d6..28945bb5f3 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -27,6 +27,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 300xx``
+* ``Intel QuickAssist Technology 420xx``
Features
@@ -179,6 +180,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 401xxx``
* ``Intel QuickAssist Technology 300xx``
+* ``Intel QuickAssist Technology 420xx``
The QAT ASYM PMD has support for:
@@ -472,6 +474,8 @@ to see the full table)
+-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
| Yes | No | No | 4 | 402xx | IDZ/ N/A | qat_4xxx | 4xxx | 4944 | 2 | 4945 | 16 |
+-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
+ | Yes | Yes | Yes | 5 | 420xx | linux/6.8+ | qat_420xx | 420xx | 4946 | 2 | 4947 | 16 |
+ +-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
* Note: Symmetric mixed crypto algorithms feature on Gen 2 works only with IDZ driver version 4.9.0+
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 0dee1ff104..439d354cd8 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -133,8 +133,10 @@ New Features
* **Updated Intel QuickAssist Technology driver.**
- * Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
- * Enabled ZUC256 cipher and auth algorithm for wireless slice enabled GEN3 device.
+ * Enabled support for new QAT GEN3 (578a) and QAT GEN5 (4946)
+ devices in QAT crypto driver.
+ * Enabled ZUC256 cipher and auth algorithm for wireless slice
+ enabled GEN3 and GEN5 devices.
* **Updated Marvell cnxk crypto driver.**
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 1ce262f715..2525e1e695 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,6 +10,7 @@
#include "adf_transport_access_macros_gen4vf.h"
#include "adf_pf2vf_msg.h"
#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
#include <stdint.h>
@@ -60,7 +61,7 @@ qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
return -1;
}
-static const struct qat_qp_hw_data *
+const struct qat_qp_hw_data *
qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
enum qat_service_type service_type, uint16_t qp_id)
{
@@ -74,7 +75,7 @@ qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
return &dev_extra->qp_gen4_data[ring_pair][0];
}
-static int
+int
qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
enum qat_service_type service)
{
@@ -103,7 +104,7 @@ gen4_pick_service(uint8_t hw_service)
}
}
-static int
+int
qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
{
int i = 0;
@@ -143,7 +144,7 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
return 0;
}
-static void
+void
qat_qp_build_ring_base_gen4(void *io_addr,
struct qat_queue *queue)
{
@@ -155,7 +156,7 @@ qat_qp_build_ring_base_gen4(void *io_addr,
queue->hw_queue_number, queue_base);
}
-static void
+void
qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
void *base_addr, rte_spinlock_t *lock)
{
@@ -172,7 +173,7 @@ qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
rte_spinlock_unlock(lock);
}
-static void
+void
qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
void *base_addr, rte_spinlock_t *lock)
{
@@ -189,7 +190,7 @@ qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
rte_spinlock_unlock(lock);
}
-static void
+void
qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
{
uint32_t q_tx_config, q_resp_config;
@@ -208,14 +209,14 @@ qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
q_resp_config);
}
-static void
+void
qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
{
WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
q->hw_bundle_number, q->hw_queue_number, q->tail);
}
-static void
+void
qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
uint32_t new_head)
{
@@ -223,7 +224,7 @@ qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
q->hw_bundle_number, q->hw_queue_number, new_head);
}
-static void
+void
qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
void *io_addr, struct qat_qp *qp)
{
@@ -246,7 +247,7 @@ static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
};
-static int
+int
qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
{
int ret = 0, i;
@@ -268,13 +269,13 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
return 0;
}
-static const struct rte_mem_resource *
+const struct rte_mem_resource *
qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
{
return &pci_dev->mem_resource[0];
}
-static int
+int
qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
struct rte_pci_device *pci_dev)
{
@@ -282,14 +283,14 @@ qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
return 0;
}
-static int
+int
qat_dev_get_slice_map_gen4(uint32_t *map __rte_unused,
const struct rte_pci_device *pci_dev __rte_unused)
{
return 0;
}
-static int
+int
qat_dev_get_extra_size_gen4(void)
{
return sizeof(struct qat_dev_gen4_extra);
diff --git a/drivers/common/qat/dev/qat_dev_gen5.c b/drivers/common/qat/dev/qat_dev_gen5.c
new file mode 100644
index 0000000000..8a6e30d909
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen5.c
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include <dev_driver.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_pf2vf_dev qat_pf2vf_gen5 = {
+ .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+ .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+ .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+ .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+ .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+ .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen5 = {
+ .qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+ .qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+ .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+ .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+ .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+ .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+ .qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+ .qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+ .qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen5 = {
+ .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+ .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+ .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+ .qat_dev_read_config = qat_dev_read_config_gen4,
+ .qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+ .qat_dev_get_slice_map = qat_dev_get_slice_map_gen4,
+};
+
+RTE_INIT(qat_dev_gen_5_init)
+{
+ qat_qp_hw_spec[QAT_GEN5] = &qat_qp_hw_spec_gen5;
+ qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5;
+ qat_gen_config[QAT_GEN5].dev_gen = QAT_GEN5;
+ qat_gen_config[QAT_GEN5].pf2vf_dev = &qat_pf2vf_gen5;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 7c92f1938c..14c172f22d 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -62,4 +62,58 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
int
qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev);
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev);
+
+int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+ struct rte_pci_device *pci_dev);
+
+int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev);
+
+int
+qat_dev_get_extra_size_gen4(void);
+
+int
+qat_dev_get_slice_map_gen4(uint32_t *map __rte_unused,
+ const struct rte_pci_device *pci_dev __rte_unused);
+
+int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+ enum qat_service_type service);
+
+void
+qat_qp_build_ring_base_gen4(void *io_addr,
+ struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+ void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+ void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+ uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+ void *io_addr, struct qat_qp *qp);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+ enum qat_service_type service_type, uint16_t qp_id);
+
#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 62abcb6fe3..d79085258f 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -82,6 +82,7 @@ sources += files(
'dev/qat_dev_gen2.c',
'dev/qat_dev_gen3.c',
'dev/qat_dev_gen4.c',
+ 'dev/qat_dev_gen5.c',
)
includes += include_directories(
'qat_adf',
@@ -95,6 +96,7 @@ if qat_compress
'dev/qat_comp_pmd_gen2.c',
'dev/qat_comp_pmd_gen3.c',
'dev/qat_comp_pmd_gen4.c',
+ 'dev/qat_comp_pmd_gen5.c',
]
sources += files(join_paths(qat_compress_relpath, f))
endforeach
@@ -108,6 +110,7 @@ if qat_crypto
'dev/qat_crypto_pmd_gen2.c',
'dev/qat_crypto_pmd_gen3.c',
'dev/qat_crypto_pmd_gen4.c',
+ 'dev/qat_crypto_pmd_gen5.c',
]
sources += files(join_paths(qat_crypto_relpath, f))
endforeach
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 9411a79301..dc48a2e1ee 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -21,6 +21,7 @@ enum qat_device_gen {
QAT_GEN2,
QAT_GEN3,
QAT_GEN4,
+ QAT_GEN5,
QAT_N_GENS
};
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 0e7d387d78..0ccc3f85fd 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -65,6 +65,9 @@ static const struct rte_pci_id pci_id_qat_map[] = {
{
RTE_PCI_DEVICE(0x8086, 0x4945),
},
+ {
+ RTE_PCI_DEVICE(0x8086, 0x4947),
+ },
{.device_id = 0},
};
@@ -203,6 +206,8 @@ pick_gen(const struct rte_pci_device *pci_dev)
case 0x4943:
case 0x4945:
return QAT_GEN4;
+ case 0x4947:
+ return QAT_GEN5;
default:
QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
return QAT_N_GENS;
@@ -212,7 +217,8 @@ pick_gen(const struct rte_pci_device *pci_dev)
static int
wireless_slice_support(uint16_t pci_dev_id)
{
- return pci_dev_id == 0x578b;
+ return pci_dev_id == 0x578b ||
+ pci_dev_id == 0x4947;
}
struct qat_pci_device *
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
index 05906f13e0..68d111e07c 100644
--- a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -27,7 +27,7 @@ qat_gen4_comp_capabilities[] = {
.window_size = {.min = 15, .max = 15, .increment = 0} },
RTE_COMP_END_OF_CAPABILITIES_LIST() };
-static int
+int
qat_comp_dev_config_gen4(struct rte_compressdev *dev,
struct rte_compressdev_config *config)
{
@@ -67,13 +67,13 @@ qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
return capa_info;
}
-static uint16_t
+uint16_t
qat_comp_get_ram_bank_flags_gen4(void)
{
return 0;
}
-static int
+int
qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
const struct rte_comp_xform *xform,
enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
@@ -189,7 +189,7 @@ qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
return 0;
}
-static unsigned int
+unsigned int
qat_comp_get_num_im_bufs_required_gen4(void)
{
return QAT_NUM_INTERM_BUFS_GEN4;
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen5.c b/drivers/compress/qat/dev/qat_comp_pmd_gen5.c
new file mode 100644
index 0000000000..210c17cca4
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen5.c
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+static const struct rte_compressdev_capabilities
+qat_gen5_comp_capabilities[] = {
+ {/* COMPRESSION - deflate */
+ .algo = RTE_COMP_ALGO_DEFLATE,
+ .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+ RTE_COMP_FF_CRC32_CHECKSUM |
+ RTE_COMP_FF_ADLER32_CHECKSUM |
+ RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+ RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+ RTE_COMP_FF_HUFFMAN_FIXED |
+ RTE_COMP_FF_HUFFMAN_DYNAMIC |
+ RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+ RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+ .window_size = {.min = 15, .max = 15, .increment = 0} },
+ RTE_COMP_END_OF_CAPABILITIES_LIST() };
+
+static struct rte_compressdev_ops qat_comp_ops_gen5 = {
+
+ /* Device related operations */
+ .dev_configure = qat_comp_dev_config_gen4,
+ .dev_start = qat_comp_dev_start,
+ .dev_stop = qat_comp_dev_stop,
+ .dev_close = qat_comp_dev_close,
+ .dev_infos_get = qat_comp_dev_info_get,
+
+ .stats_get = qat_comp_stats_get,
+ .stats_reset = qat_comp_stats_reset,
+ .queue_pair_setup = qat_comp_qp_setup,
+ .queue_pair_release = qat_comp_qp_release,
+
+ /* Compression related operations */
+ .private_xform_create = qat_comp_private_xform_create,
+ .private_xform_free = qat_comp_private_xform_free,
+ .stream_create = qat_comp_stream_create,
+ .stream_free = qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen5(struct qat_pci_device *qat_dev __rte_unused)
+{
+ struct qat_comp_capabilities_info capa_info = {
+ .data = qat_gen5_comp_capabilities,
+ .size = sizeof(qat_gen5_comp_capabilities)
+ };
+ return capa_info;
+}
+
+RTE_INIT(qat_comp_pmd_gen5_init)
+{
+ qat_comp_gen_dev_ops[QAT_GEN5].compressdev_ops =
+ &qat_comp_ops_gen5;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_capabilities =
+ qat_comp_cap_get_gen5;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_num_im_bufs_required =
+ qat_comp_get_num_im_bufs_required_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_ram_bank_flags =
+ qat_comp_get_ram_bank_flags_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_set_slice_cfg_word =
+ qat_comp_set_slice_cfg_word_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_feature_flags =
+ qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
index 67293092ea..e329fe3e18 100644
--- a/drivers/compress/qat/dev/qat_comp_pmd_gens.h
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -25,6 +25,20 @@ int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
uint64_t qat_comp_get_features_gen1(void);
+unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void);
+
+int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+ const struct rte_comp_xform *xform,
+ enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+uint16_t qat_comp_get_ram_bank_flags_gen4(void);
+
+int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+ struct rte_compressdev_config *config);
+
extern struct rte_compressdev_ops qat_comp_ops_gen1;
#endif /* _QAT_COMP_PMD_GENS_H_ */
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index de72383d4b..9c7f7d98c8 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -233,7 +233,7 @@ qat_sym_build_op_aead_gen4(void *in_op, struct qat_sym_session *ctx,
return 0;
}
-static int
+int
qat_sym_crypto_set_session_gen4(void *cdev, void *session)
{
struct qat_sym_session *ctx = session;
@@ -385,7 +385,7 @@ qat_sym_dp_enqueue_aead_jobs_gen4(void *qp_data, uint8_t *drv_ctx,
return i;
}
-static int
+int
qat_sym_configure_raw_dp_ctx_gen4(void *_raw_dp_ctx, void *_ctx)
{
struct rte_crypto_raw_dp_ctx *raw_dp_ctx = _raw_dp_ctx;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
new file mode 100644
index 0000000000..a235217cfe
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2022 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_legacy_caps_gen5[] = {
+ QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+ CAP_SET(block_size, 64),
+ CAP_RNG(digest_size, 1, 20, 1)),
+ QAT_SYM_AUTH_CAP(SHA224,
+ CAP_SET(block_size, 64),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA224_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA1_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(SM4_ECB,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 0, 0, 0)),
+};
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen5[] = {
+ QAT_SYM_CIPHER_CAP(AES_CBC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(SHA256_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA384_HMAC,
+ CAP_SET(block_size, 128),
+ CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA512_HMAC,
+ CAP_SET(block_size, 128),
+ CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(AES_CMAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(NULL,
+ CAP_SET(block_size, 1),
+ CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(NULL,
+ CAP_SET(block_size, 1),
+ CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA256,
+ CAP_SET(block_size, 64),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA384,
+ CAP_SET(block_size, 128),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA512,
+ CAP_SET(block_size, 128),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(AES_CTR,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AEAD_CAP(AES_GCM,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+ CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+ QAT_SYM_AEAD_CAP(AES_CCM,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+ CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+ QAT_SYM_AUTH_CAP(AES_GMAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+ QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 32, 32, 0),
+ CAP_RNG(digest_size, 16, 16, 0),
+ CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+ QAT_SYM_CIPHER_CAP(SM4_CBC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_CIPHER_CAP(SM4_CTR,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_PLAIN_AUTH_CAP(SM3,
+ CAP_SET(block_size, 64),
+ CAP_RNG(digest_size, 32, 32, 0)),
+ QAT_SYM_AUTH_CAP(SM3_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 25, 1)),
+ QAT_SYM_AUTH_CAP(ZUC_EIA3,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(digest_size, 4, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 25, 1)),
+ QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+check_cipher_capa(const struct rte_cryptodev_capabilities *cap,
+ enum rte_crypto_cipher_algorithm algo)
+{
+ if (cap->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
+ return 0;
+ if (cap->sym.xform_type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return 0;
+ if (cap->sym.cipher.algo != algo)
+ return 0;
+ return 1;
+}
+
+static int
+check_auth_capa(const struct rte_cryptodev_capabilities *cap,
+ enum rte_crypto_auth_algorithm algo)
+{
+ if (cap->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
+ return 0;
+ if (cap->sym.xform_type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return 0;
+ if (cap->sym.auth.algo != algo)
+ return 0;
+ return 1;
+}
+
+static int
+qat_sym_crypto_cap_get_gen5(struct qat_cryptodev_private *internals,
+ const char *capa_memz_name,
+ const uint16_t slice_map)
+{
+ uint32_t legacy_capa_num, capa_num;
+ uint32_t size = sizeof(qat_sym_crypto_caps_gen5);
+ uint32_t legacy_size = sizeof(qat_sym_crypto_legacy_caps_gen5);
+ uint32_t i, iter = 0;
+ uint32_t curr_capa = 0;
+ legacy_capa_num = legacy_size/sizeof(struct rte_cryptodev_capabilities);
+ capa_num = RTE_DIM(qat_sym_crypto_caps_gen5);
+
+ if (unlikely(qat_legacy_capa))
+ size = size + legacy_size;
+
+ internals->capa_mz = rte_memzone_lookup(capa_memz_name);
+ if (internals->capa_mz == NULL) {
+ internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+ size, rte_socket_id(), 0);
+ if (internals->capa_mz == NULL) {
+ QAT_LOG(DEBUG,
+ "Error allocating memzone for capabilities");
+ return -1;
+ }
+ }
+
+ struct rte_cryptodev_capabilities *addr =
+ (struct rte_cryptodev_capabilities *)
+ internals->capa_mz->addr;
+
+ struct rte_cryptodev_capabilities *capabilities;
+
+ if (unlikely(qat_legacy_capa)) {
+ capabilities = qat_sym_crypto_legacy_caps_gen5;
+ memcpy(addr, capabilities, legacy_size);
+ addr += legacy_capa_num;
+ }
+ capabilities = qat_sym_crypto_caps_gen5;
+
+ for (i = 0; i < capa_num; i++, iter++) {
+ if (slice_map & ICP_ACCEL_MASK_ZUC_256_SLICE && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ continue;
+ }
+
+ memcpy(addr + curr_capa, capabilities + iter,
+ sizeof(struct rte_cryptodev_capabilities));
+ curr_capa++;
+ }
+ internals->qat_dev_capabilities = internals->capa_mz->addr;
+
+ return 0;
+}
+
+static int
+qat_sym_crypto_set_session_gen5(void *cdev, void *session)
+{
+ struct qat_sym_session *ctx = session;
+ enum rte_proc_type_t proc_type = rte_eal_process_type();
+ int ret;
+
+ if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID)
+ return -EINVAL;
+
+ ret = qat_sym_crypto_set_session_gen4(cdev, session);
+
+ if (ret == -ENOTSUP) {
+ /* GEN4 returns -ENOTSUP as it cannot handle some mixed
+ * algorithms; these are handled by GEN5.
+ */
+ if ((ctx->aes_cmac ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+ (ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
+ ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256)) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) &&
+ ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx,
+ 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
+ }
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+RTE_INIT(qat_sym_crypto_gen5_init)
+{
+ qat_sym_gen_dev_ops[QAT_GEN5].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+ qat_sym_gen_dev_ops[QAT_GEN5].get_capabilities =
+ qat_sym_crypto_cap_get_gen5;
+ qat_sym_gen_dev_ops[QAT_GEN5].set_session =
+ qat_sym_crypto_set_session_gen5;
+ qat_sym_gen_dev_ops[QAT_GEN5].set_raw_dp_ctx =
+ qat_sym_configure_raw_dp_ctx_gen4;
+ qat_sym_gen_dev_ops[QAT_GEN5].get_feature_flags =
+ qat_sym_crypto_feature_flags_get_gen1;
+ qat_sym_gen_dev_ops[QAT_GEN5].create_security_ctx =
+ qat_sym_create_security_gen1;
+}
+
+RTE_INIT(qat_asym_crypto_gen5_init)
+{
+ qat_asym_gen_dev_ops[QAT_GEN5].cryptodev_ops =
+ &qat_asym_crypto_ops_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].get_capabilities =
+ qat_asym_crypto_cap_get_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].get_feature_flags =
+ qat_asym_crypto_feature_flags_get_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].set_session =
+ qat_asym_crypto_set_session_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index ff7ba55c01..60b0f0551c 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -1048,10 +1048,16 @@ qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
int
qat_sym_crypto_set_session_gen1(void *cryptodev, void *session);
+int
+qat_sym_crypto_set_session_gen4(void *cryptodev, void *session);
+
void
qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
uint8_t hash_flag);
+int
+qat_sym_configure_raw_dp_ctx_gen4(void *_raw_dp_ctx, void *_ctx);
+
int
qat_asym_crypto_cap_get_gen1(struct qat_cryptodev_private *internals,
const char *capa_memz_name, const uint16_t slice_map);
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index b1649b8d18..39e4a833ec 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -407,7 +407,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
@@ -950,7 +950,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
session->auth_iv.length = AES_GCM_J0_LEN;
else
session->is_iv12B = 1;
- if (qat_dev_gen == QAT_GEN4) {
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5) {
session->is_cnt_zero = 1;
session->is_ucs = 1;
}
@@ -1126,7 +1126,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
if (session->cipher_iv.length == 0) {
session->cipher_iv.length = AES_GCM_J0_LEN;
@@ -1146,13 +1146,13 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
}
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
break;
case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
if (aead_xform->key.length != ICP_QAT_HW_CHACHAPOLY_KEY_SZ)
return -EINVAL;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
session->qat_cipher_alg =
ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305;
@@ -2418,7 +2418,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
auth_param->u2.inner_prefix_sz =
qat_hash_get_block_size(cdesc->qat_hash_alg);
auth_param->hash_state_sz = digestsize;
- if (qat_dev_gen == QAT_GEN4) {
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5) {
ICP_QAT_FW_HASH_FLAG_MODE2_SET(
hash_cd_ctrl->hash_flags,
QAT_FW_LA_MODE2);
@@ -2984,6 +2984,7 @@ qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd);
break;
case QAT_GEN4:
+ case QAT_GEN5:
crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE;
crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL;
crc_cfg.hash_cmp_val = 0;
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH v2 0/4] add new QAT gen3 and gen5
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
` (3 preceding siblings ...)
2024-02-23 15:12 ` [PATCH v2 4/4] common/qat: add gen5 device Ciara Power
@ 2024-02-26 13:32 ` Ji, Kai
2024-02-29 9:48 ` Kusztal, ArkadiuszX
4 siblings, 1 reply; 19+ messages in thread
From: Ji, Kai @ 2024-02-26 13:32 UTC (permalink / raw)
To: Power, Ciara, dev; +Cc: gakhil, Kusztal, ArkadiuszX
Series-acked-by: Kai Ji <kai.ji@intel.com>
________________________________
From: Power, Ciara <ciara.power@intel.com>
Sent: 23 February 2024 15:12
To: dev@dpdk.org <dev@dpdk.org>
Cc: gakhil@marvell.com <gakhil@marvell.com>; Ji, Kai <kai.ji@intel.com>; Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; Power, Ciara <ciara.power@intel.com>
Subject: [PATCH v2 0/4] add new QAT gen3 and gen5
This patchset adds support for two new QAT devices.
A new GEN3 device, and a GEN5 device, both of which have
wireless slice support for algorithms such as ZUC-256.
Symmetric, asymmetric and compression are all supported
for these devices.
v2:
- New patch added for gen5 device that reuses gen4 code,
and new gen3 wireless slice changes.
- Removed patch to disable asymmetric and compression.
- Documentation updates added.
- Fixed ZUC-256 IV modification for raw API path.
- Fixed setting extended protocol flag bit position.
- Added check for ZUC-256 wireless slice in slice map.
Ciara Power (4):
common/qat: add new gen3 device
common/qat: add zuc256 wireless slice for gen3
common/qat: add new gen3 CMAC macros
common/qat: add gen5 device
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 6 +
doc/guides/rel_notes/release_24_03.rst | 7 +
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++
drivers/common/qat/qat_adf/icp_qat_hw.h | 26 +-
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 19 ++
drivers/common/qat/qat_device.h | 2 +
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 63 ++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 40 ++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 +++
drivers/crypto/qat/qat_sym_session.c | 177 ++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
24 files changed, 889 insertions(+), 51 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 0/4] add new QAT gen3 and gen5
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
` (4 preceding siblings ...)
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
@ 2024-02-26 17:08 ` Ciara Power
2024-02-26 17:08 ` [PATCH v3 1/4] common/qat: add new gen3 device Ciara Power
` (4 more replies)
5 siblings, 5 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-26 17:08 UTC (permalink / raw)
To: dev; +Cc: arkadiuszx.kusztal, gakhil, Ciara Power
This patchset adds support for two new QAT devices.
A new GEN3 device, and a GEN5 device, both of which have
wireless slice support for algorithms such as ZUC-256.
Symmetric, asymmetric and compression are all supported
for these devices.
v3:
- Modified year in licence tag of new gen5 files.
v2:
- New patch added for gen5 device that reuses gen4 code,
and new gen3 wireless slice changes.
- Removed patch to disable asymmetric and compression.
- Documentation updates added.
- Fixed ZUC-256 IV modification for raw API path.
- Fixed setting extended protocol flag bit position.
- Added check for ZUC-256 wireless slice in slice map.
Ciara Power (4):
common/qat: add new gen3 device
common/qat: add zuc256 wireless slice for gen3
common/qat: add new gen3 CMAC macros
common/qat: add gen5 device
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 6 +
doc/guides/rel_notes/release_24_03.rst | 7 +
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++
drivers/common/qat/qat_adf/icp_qat_hw.h | 26 +-
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 19 ++
drivers/common/qat/qat_device.h | 2 +
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 63 ++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 40 ++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 +++
drivers/crypto/qat/qat_sym_session.c | 177 ++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
24 files changed, 889 insertions(+), 51 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 1/4] common/qat: add new gen3 device
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
@ 2024-02-26 17:08 ` Ciara Power
2024-02-26 17:08 ` [PATCH v3 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
` (3 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-26 17:08 UTC (permalink / raw)
To: dev; +Cc: arkadiuszx.kusztal, gakhil, Ciara Power, Kai Ji
Add new gen3 QAT device ID.
This device has a wireless slice, but other gen3 devices do not, so a
flag must be set to indicate a wireless-enabled device.
Capabilities for the device are slightly different from the base gen3
capabilities; some are removed from the list for this device.
Symmetric, asymmetric and compression services are enabled.
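The capability trimming described above follows the per-device filtering
pattern used in the PMD's capability-get path. A minimal standalone sketch
of that pattern (hypothetical types and algorithm names, not the actual
DPDK structures):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the PMD's capability entries. */
enum algo { ALGO_AES_CBC, ALGO_KASUMI_F8, ALGO_DES_CBC, ALGO_SNOW3G_UEA2 };
struct capa { enum algo algo; };

/* Copy capabilities into 'out', dropping the ones a wireless-enabled
 * device does not advertise (KASUMI, DES); returns the new count. */
static size_t
filter_capas(const struct capa *in, size_t n, int has_wireless_slice,
	     struct capa *out)
{
	size_t m = 0;

	for (size_t i = 0; i < n; i++) {
		if (has_wireless_slice &&
		    (in[i].algo == ALGO_KASUMI_F8 ||
		     in[i].algo == ALGO_DES_CBC))
			continue; /* skipped on wireless-enabled device */
		out[m++] = in[i];
	}
	return m;
}
```

The real driver does the equivalent copy into a capability memzone, keyed
off the `has_wireless_slice` flag set at device allocation.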
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
v2: Added documentation updates.
---
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 2 ++
doc/guides/rel_notes/release_24_03.rst | 4 ++++
drivers/common/qat/qat_device.c | 13 +++++++++++++
drivers/common/qat/qat_device.h | 2 ++
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 11 +++++++++++
6 files changed, 33 insertions(+)
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 475c4a9f9f..338b1bf623 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -10,6 +10,7 @@ support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C62x``
* ``Intel QuickAssist Technology C3xxx``
* ``Intel QuickAssist Technology DH895x``
+* ``Intel QuickAssist Technology 300xx``
Features
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index dc6b95165d..51190e12d6 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -26,6 +26,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology D15xx``
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
+* ``Intel QuickAssist Technology 300xx``
Features
@@ -177,6 +178,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 401xxx``
+* ``Intel QuickAssist Technology 300xx``
The QAT ASYM PMD has support for:
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 879bb4944c..55517eabd8 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -131,6 +131,10 @@ New Features
* Added support for comparing result between packet fields or value.
* Added support for accumulating value of field into another one.
+* **Updated Intel QuickAssist Technology driver.**
+
+ * Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
+
* **Updated Marvell cnxk crypto driver.**
* Added support for Rx inject in crypto_cn10k.
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index f55dc3c6f0..0e7d387d78 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -53,6 +53,9 @@ static const struct rte_pci_id pci_id_qat_map[] = {
{
RTE_PCI_DEVICE(0x8086, 0x18a1),
},
+ {
+ RTE_PCI_DEVICE(0x8086, 0x578b),
+ },
{
RTE_PCI_DEVICE(0x8086, 0x4941),
},
@@ -194,6 +197,7 @@ pick_gen(const struct rte_pci_device *pci_dev)
case 0x18ef:
return QAT_GEN2;
case 0x18a1:
+ case 0x578b:
return QAT_GEN3;
case 0x4941:
case 0x4943:
@@ -205,6 +209,12 @@ pick_gen(const struct rte_pci_device *pci_dev)
}
}
+static int
+wireless_slice_support(uint16_t pci_dev_id)
+{
+ return pci_dev_id == 0x578b;
+}
+
struct qat_pci_device *
qat_pci_device_allocate(struct rte_pci_device *pci_dev,
struct qat_dev_cmd_param *qat_dev_cmd_param)
@@ -282,6 +292,9 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
qat_dev->qat_dev_id = qat_dev_id;
qat_dev->qat_dev_gen = qat_dev_gen;
+ if (wireless_slice_support(pci_dev->id.device_id))
+ qat_dev->has_wireless_slice = 1;
+
ops_hw = qat_dev_hw_spec[qat_dev->qat_dev_gen];
NOT_NULL(ops_hw->qat_dev_get_misc_bar, goto error,
"QAT internal error! qat_dev_get_misc_bar function not set");
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index aa7988bb74..43e4752812 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -135,6 +135,8 @@ struct qat_pci_device {
/**< Per generation specific information */
uint32_t slice_map;
/**< Map of the crypto and compression slices */
+ uint16_t has_wireless_slice;
+ /**< Wireless Slices supported */
};
struct qat_gen_hw_data {
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index 02bcdb06b1..bc53e2e0f1 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -255,6 +255,17 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
RTE_CRYPTO_AUTH_SM3_HMAC))) {
continue;
}
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_KASUMI_F9) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_KASUMI_F8) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_CBC) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_DES_DOCSISBPI)))
+ continue;
+
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
curr_capa++;
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 2/4] common/qat: add zuc256 wireless slice for gen3
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
2024-02-26 17:08 ` [PATCH v3 1/4] common/qat: add new gen3 device Ciara Power
@ 2024-02-26 17:08 ` Ciara Power
2024-02-26 17:08 ` [PATCH v3 3/4] common/qat: add new gen3 CMAC macros Ciara Power
` (2 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-26 17:08 UTC (permalink / raw)
To: dev; +Cc: arkadiuszx.kusztal, gakhil, Ciara Power, Kai Ji
The new gen3 device handles wireless algorithms on its wireless slices.
Based on the device's wireless slice support, set the required flags for
these algorithms so they are moved to the wireless slice.
One of the algorithms supported on the wireless slices is ZUC-256;
support is added for it, along with the corresponding capability for
the device.
The device supports a 24-byte IV for ZUC-256, with iv[20]
being ignored in the register.
A 25-byte IV is compressed into 23 bytes.
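For context on the compression step above: a ZUC-256 IV is 184 bits —
seventeen full bytes plus eight 6-bit words — which fits in 23 bytes once
the 6-bit words are packed. The sketch below shows one such packing of a
byte-per-word 25-byte representation into 23 bytes; it is illustrative
only, and the PMD's actual bit ordering may differ:

```c
#include <stdint.h>
#include <string.h>

/* Pack a 25-byte ZUC-256 IV (iv25[17]..iv25[24] each hold one 6-bit
 * word, assumed < 64) into a 23-byte form: 17 full bytes followed by
 * the 48 packed bits. Illustrative layout, not the PMD's exact code. */
static void
zuc256_pack_iv(const uint8_t iv25[25], uint8_t iv23[23])
{
	const uint8_t *p = &iv25[17];

	memcpy(iv23, iv25, 17);			/* iv[0]..iv[16] unchanged */
	iv23[17] = (p[0] << 2) | (p[1] >> 4);
	iv23[18] = (p[1] << 4) | (p[2] >> 2);
	iv23[19] = (p[2] << 6) | (p[3] & 0x3f);
	iv23[20] = (p[4] << 2) | (p[5] >> 4);
	iv23[21] = (p[5] << 4) | (p[6] >> 2);
	iv23[22] = (p[6] << 6) | (p[7] & 0x3f);
}
```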
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
v2:
- Fixed setting extended protocol flag bit position.
- Added slice map check for ZUC256 wireless slice.
- Fixed IV modification for ZUC256 in raw datapath.
- Added increment size for ZUC256 capabilities.
- Added release note.
---
doc/guides/rel_notes/release_24_03.rst | 1 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++++
drivers/common/qat/qat_adf/icp_qat_hw.h | 24 +++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 52 ++++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 34 ++++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 ++++++
drivers/crypto/qat/qat_sym_session.c | 142 +++++++++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
10 files changed, 312 insertions(+), 23 deletions(-)
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 55517eabd8..0dee1ff104 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -134,6 +134,7 @@ New Features
* **Updated Intel QuickAssist Technology driver.**
* Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
+ * Enabled ZUC256 cipher and auth algorithm for wireless slice enabled GEN3 device.
* **Updated Marvell cnxk crypto driver.**
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h
index 3aa17ae041..dd7c926140 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw.h
@@ -75,7 +75,8 @@ struct icp_qat_fw_comn_req_hdr {
uint8_t service_type;
uint8_t hdr_flags;
uint16_t serv_specif_flags;
- uint16_t comn_req_flags;
+ uint8_t comn_req_flags;
+ uint8_t ext_flags;
};
struct icp_qat_fw_comn_req_rqpars {
@@ -176,9 +177,6 @@ struct icp_qat_fw_comn_resp {
#define QAT_COMN_PTR_TYPE_SGL 0x1
#define QAT_COMN_CD_FLD_TYPE_64BIT_ADR 0x0
#define QAT_COMN_CD_FLD_TYPE_16BYTE_DATA 0x1
-#define QAT_COMN_EXT_FLAGS_BITPOS 8
-#define QAT_COMN_EXT_FLAGS_MASK 0x1
-#define QAT_COMN_EXT_FLAGS_USED 0x1
#define ICP_QAT_FW_COMN_FLAGS_BUILD(cdt, ptr) \
((((cdt) & QAT_COMN_CD_FLD_TYPE_MASK) << QAT_COMN_CD_FLD_TYPE_BITPOS) \
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index 70f0effa62..134c309355 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -81,6 +81,15 @@ struct icp_qat_fw_la_bulk_req {
#define ICP_QAT_FW_LA_PARTIAL_END 2
#define QAT_LA_PARTIAL_BITPOS 0
#define QAT_LA_PARTIAL_MASK 0x3
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS 0
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS 1
+#define QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK 0x1
+#define QAT_LA_USE_WCP_SLICE 1
+#define QAT_LA_USE_WCP_SLICE_BITPOS 2
+#define QAT_LA_USE_WCP_SLICE_MASK 0x1
+#define QAT_LA_USE_WAT_SLICE_BITPOS 3
+#define QAT_LA_USE_WAT_SLICE 1
+#define QAT_LA_USE_WAT_SLICE_MASK 0x1
#define ICP_QAT_FW_LA_FLAGS_BUILD(zuc_proto, gcm_iv_len, auth_rslt, proto, \
cmp_auth, ret_auth, update_state, \
ciph_iv, ciphcfg, partial) \
@@ -188,6 +197,21 @@ struct icp_qat_fw_la_bulk_req {
QAT_FIELD_SET(flags, val, QAT_LA_PARTIAL_BITPOS, \
QAT_LA_PARTIAL_MASK)
+#define ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_BITPOS, \
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS_MASK)
+
+#define ICP_QAT_FW_USE_WCP_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WCP_SLICE_BITPOS, \
+ QAT_LA_USE_WCP_SLICE_MASK)
+
+#define ICP_QAT_FW_USE_WAT_SLICE_SET(flags, val) \
+ QAT_FIELD_SET(flags, val, \
+ QAT_LA_USE_WAT_SLICE_BITPOS, \
+ QAT_LA_USE_WAT_SLICE_MASK)
+
#define QAT_FW_LA_MODE2 1
#define QAT_FW_LA_NO_MODE2 0
#define QAT_FW_LA_MODE2_MASK 0x1
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 33756d512d..4651fb90bb 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -21,7 +21,8 @@ enum icp_qat_slice_mask {
ICP_ACCEL_MASK_CRYPTO1_SLICE = 0x100,
ICP_ACCEL_MASK_CRYPTO2_SLICE = 0x200,
ICP_ACCEL_MASK_SM3_SLICE = 0x400,
- ICP_ACCEL_MASK_SM4_SLICE = 0x800
+ ICP_ACCEL_MASK_SM4_SLICE = 0x800,
+ ICP_ACCEL_MASK_ZUC_256_SLICE = 0x2000,
};
enum icp_qat_hw_ae_id {
@@ -71,7 +72,16 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_SHA3_256 = 17,
ICP_QAT_HW_AUTH_ALGO_SHA3_384 = 18,
ICP_QAT_HW_AUTH_ALGO_SHA3_512 = 19,
- ICP_QAT_HW_AUTH_ALGO_DELIMITER = 20
+ ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
+ ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 = 26,
+ ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128 = 27,
+ ICP_QAT_HW_AUTH_ALGO_DELIMITER = 28
};
enum icp_qat_hw_auth_mode {
@@ -167,6 +177,9 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_128_STATE1_SZ 16
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
+#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -191,6 +204,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_AES_F9_STATE2_SZ ICP_QAT_HW_KASUMI_F9_STATE2_SZ
#define ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ 24
#define ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ 32
+#define ICP_QAT_HW_ZUC_256_STATE2_SZ 56
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
@@ -228,7 +242,8 @@ enum icp_qat_hw_cipher_algo {
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 = 9,
ICP_QAT_HW_CIPHER_ALGO_SM4 = 10,
ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305 = 11,
- ICP_QAT_HW_CIPHER_DELIMITER = 12
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256 = 12,
+ ICP_QAT_HW_CIPHER_DELIMITER = 13
};
enum icp_qat_hw_cipher_mode {
@@ -308,6 +323,7 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_KASUMI_BLK_SZ 8
#define ICP_QAT_HW_SNOW_3G_BLK_SZ 8
#define ICP_QAT_HW_ZUC_3G_BLK_SZ 8
+#define ICP_QAT_HW_ZUC_256_BLK_SZ 8
#define ICP_QAT_HW_NULL_KEY_SZ 256
#define ICP_QAT_HW_DES_KEY_SZ 8
#define ICP_QAT_HW_3DES_KEY_SZ 24
@@ -343,6 +359,8 @@ enum icp_qat_hw_cipher_convert {
#define ICP_QAT_HW_SPC_CTR_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_ICV_SZ 16
#define ICP_QAT_HW_CHACHAPOLY_AAD_MAX_LOG 14
+#define ICP_QAT_HW_ZUC_256_KEY_SZ 32
+#define ICP_QAT_HW_ZUC_256_IV_SZ 24
#define ICP_QAT_HW_CIPHER_MAX_KEY_SZ ICP_QAT_HW_AES_256_F8_KEY_SZ
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
index df47767749..62874039a9 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -182,10 +182,8 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
session->fw_req.cd_ctrl.content_desc_ctrl_lw;
/* Set the Use Extended Protocol Flags bit in LW 1 */
- QAT_FIELD_SET(header->comn_req_flags,
- QAT_COMN_EXT_FLAGS_USED,
- QAT_COMN_EXT_FLAGS_BITPOS,
- QAT_COMN_EXT_FLAGS_MASK);
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags, QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
/* Set Hash Flags in LW 28 */
cd_ctrl->hash_flags |= hash_flag;
@@ -199,6 +197,7 @@ qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
header->serv_specif_flags, 0);
break;
case ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3:
+ case ICP_QAT_HW_CIPHER_ALGO_ZUC_256:
ICP_QAT_FW_LA_PROTO_SET(header->serv_specif_flags,
ICP_QAT_FW_LA_NO_PROTO);
ICP_QAT_FW_LA_ZUC_3G_PROTO_FLAG_SET(
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
index bc53e2e0f1..907c3ce3e2 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c
@@ -204,6 +204,7 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
uint32_t legacy_size = sizeof(qat_sym_crypto_legacy_caps_gen3);
capa_num = size/sizeof(struct rte_cryptodev_capabilities);
legacy_capa_num = legacy_size/sizeof(struct rte_cryptodev_capabilities);
+ struct rte_cryptodev_capabilities *cap;
if (unlikely(qat_legacy_capa))
size = size + legacy_size;
@@ -255,6 +256,15 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
RTE_CRYPTO_AUTH_SM3_HMAC))) {
continue;
}
+
+ if (slice_map & ICP_ACCEL_MASK_ZUC_256_SLICE && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ continue;
+ }
+
if (internals->qat_dev->has_wireless_slice && (
check_auth_capa(&capabilities[iter],
RTE_CRYPTO_AUTH_KASUMI_F9) ||
@@ -268,6 +278,27 @@ qat_sym_crypto_cap_get_gen3(struct qat_cryptodev_private *internals,
memcpy(addr + curr_capa, capabilities + iter,
sizeof(struct rte_cryptodev_capabilities));
+
+ if (internals->qat_dev->has_wireless_slice && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3))) {
+ cap = addr + curr_capa;
+ cap->sym.auth.key_size.max = 32;
+ cap->sym.auth.key_size.increment = 16;
+ cap->sym.auth.iv_size.max = 25;
+ cap->sym.auth.iv_size.increment = 1;
+ cap->sym.auth.digest_size.max = 16;
+ cap->sym.auth.digest_size.increment = 4;
+ }
+ if (internals->qat_dev->has_wireless_slice && (
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ cap = addr + curr_capa;
+ cap->sym.cipher.key_size.max = 32;
+ cap->sym.cipher.key_size.increment = 16;
+ cap->sym.cipher.iv_size.max = 25;
+ cap->sym.cipher.iv_size.increment = 1;
+ }
curr_capa++;
}
internals->qat_dev_capabilities = internals->capa_mz->addr;
@@ -480,11 +511,14 @@ qat_sym_build_op_auth_gen3(void *in_op, struct qat_sym_session *ctx,
}
static int
-qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
+qat_sym_crypto_set_session_gen3(void *cdev, void *session)
{
struct qat_sym_session *ctx = session;
enum rte_proc_type_t proc_type = rte_eal_process_type();
int ret;
+ struct qat_cryptodev_private *internals;
+
+ internals = ((struct rte_cryptodev *)cdev)->data->dev_private;
if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID)
return -EINVAL;
@@ -517,6 +551,22 @@ qat_sym_crypto_set_session_gen3(void *cdev __rte_unused, void *session)
ctx->qat_cipher_alg ==
ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3)) {
qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ ((ctx->aes_cmac ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+ (ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
+ ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256))) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((internals->qat_dev->has_wireless_slice) &&
+ (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) &&
+ ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx,
+ 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
}
ret = 0;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 911400e53b..ff7ba55c01 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -117,7 +117,10 @@ qat_auth_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2 ||
ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9 ||
- ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3) {
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) {
if (unlikely((op->sym->auth.data.offset % BYTE_LENGTH != 0) ||
(op->sym->auth.data.length % BYTE_LENGTH != 0)))
return -EINVAL;
@@ -132,7 +135,8 @@ qat_cipher_is_len_in_bits(struct qat_sym_session *ctx,
{
if (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_KASUMI ||
- ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
if (unlikely((op->sym->cipher.data.length % BYTE_LENGTH != 0) ||
((op->sym->cipher.data.offset %
BYTE_LENGTH) != 0)))
@@ -589,6 +593,26 @@ qat_sym_convert_op_to_vec_aead(struct rte_crypto_op *op,
return 0;
}
+static inline void
+zuc256_modify_iv(uint8_t *iv)
+{
+ uint8_t iv_tmp[8];
+
+ iv_tmp[0] = iv[16];
+ /* pack the last 8 bytes of IV to 6 bytes.
+ * discard the 2 MSB bits of each byte
+ */
+ iv_tmp[1] = (((iv[17] & 0x3f) << 2) | ((iv[18] >> 4) & 0x3));
+ iv_tmp[2] = (((iv[18] & 0xf) << 4) | ((iv[19] >> 2) & 0xf));
+ iv_tmp[3] = (((iv[19] & 0x3) << 6) | (iv[20] & 0x3f));
+
+ iv_tmp[4] = (((iv[21] & 0x3f) << 2) | ((iv[22] >> 4) & 0x3));
+ iv_tmp[5] = (((iv[22] & 0xf) << 4) | ((iv[23] >> 2) & 0xf));
+ iv_tmp[6] = (((iv[23] & 0x3) << 6) | (iv[24] & 0x3f));
+
+ memcpy(iv + 16, iv_tmp, 8);
+}
+
static __rte_always_inline void
qat_set_cipher_iv(struct icp_qat_fw_la_cipher_req_params *cipher_param,
struct rte_crypto_va_iova_ptr *iv_ptr, uint32_t iv_len,
@@ -665,6 +689,9 @@ enqueue_one_auth_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
@@ -747,6 +774,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
case ICP_QAT_HW_AUTH_ALGO_KASUMI_F9:
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
auth_param->u1.aad_adr = auth_iv->iova;
break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 208b7e0ba6..bdd1647ea2 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -248,6 +248,9 @@ qat_sym_build_op_cipher_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(cipher_iv.va);
+
enqueue_one_cipher_job_gen1(ctx, req, &cipher_iv, ofs, total_len, op_cookie);
qat_sym_debug_log_dump(req, ctx, in_sgl.vec, in_sgl.num, &cipher_iv,
@@ -270,6 +273,8 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
struct rte_crypto_va_iova_ptr digest;
union rte_crypto_sym_ofs ofs;
int32_t total_len;
+ struct rte_cryptodev *cdev;
+ struct qat_cryptodev_private *internals;
in_sgl.vec = in_vec;
out_sgl.vec = out_vec;
@@ -284,6 +289,13 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ cdev = rte_cryptodev_pmd_get_dev(ctx->dev_id);
+ internals = cdev->data->dev_private;
+
+ if (internals->qat_dev->has_wireless_slice && !ctx->is_gmac)
+ ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
+ req->comn_hdr.serv_specif_flags, 0);
+
total_len = qat_sym_build_req_set_data(req, in_op, cookie,
in_sgl.vec, in_sgl.num, out_sgl.vec, out_sgl.num);
if (unlikely(total_len < 0)) {
@@ -291,6 +303,9 @@ qat_sym_build_op_auth_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv.va);
+
enqueue_one_auth_job_gen1(ctx, req, &digest, &auth_iv, ofs,
total_len);
@@ -381,6 +396,11 @@ qat_sym_build_op_chain_gen1(void *in_op, struct qat_sym_session *ctx,
return -EINVAL;
}
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(cipher_iv.va);
+ zuc256_modify_iv(auth_iv.va);
+ }
+
enqueue_one_chain_job_gen1(ctx, req, in_sgl.vec, in_sgl.num,
out_sgl.vec, out_sgl.num, &cipher_iv, &digest, &auth_iv,
ofs, total_len, cookie);
@@ -507,6 +527,9 @@ qat_sym_dp_enqueue_single_cipher_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(iv->va);
+
enqueue_one_cipher_job_gen1(ctx, req, iv, ofs, (uint32_t)data_len, cookie);
qat_sym_debug_log_dump(req, ctx, data, n_data_vecs, iv,
@@ -563,6 +586,10 @@ qat_sym_dp_enqueue_cipher_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(vec->iv[i].va);
+
enqueue_one_cipher_job_gen1(ctx, req, &vec->iv[i], ofs,
(uint32_t)data_len, cookie);
tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
@@ -613,6 +640,9 @@ qat_sym_dp_enqueue_single_auth_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(auth_iv->va);
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -678,6 +708,9 @@ qat_sym_dp_enqueue_auth_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+ if (ctx->is_zuc256)
+ zuc256_modify_iv(vec->auth_iv[i].va);
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -733,6 +766,11 @@ qat_sym_dp_enqueue_single_chain_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
return -1;
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(cipher_iv->va);
+ zuc256_modify_iv(auth_iv->va);
+ }
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
@@ -801,6 +839,11 @@ qat_sym_dp_enqueue_chain_jobs_gen1(void *qp_data, uint8_t *drv_ctx,
if (unlikely(data_len < 0))
break;
+ if (ctx->is_zuc256) {
+ zuc256_modify_iv(vec->iv[i].va);
+ zuc256_modify_iv(vec->auth_iv[i].va);
+ }
+
if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
null_digest.iova = cookie->digest_null_phys_addr;
job_digest = &null_digest;
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..ebdad0bd67 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -379,7 +379,9 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
struct rte_crypto_cipher_xform *cipher_xform = NULL;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
- int ret;
+ int ret, is_wireless = 0;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
/* Get cipher xform from crypto xform chain */
cipher_xform = qat_get_cipher_xform(xform);
@@ -416,6 +418,8 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_NULL:
session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_NULL;
@@ -533,6 +537,10 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+ if (cipher_xform->key.length == ICP_QAT_HW_ZUC_256_KEY_SZ)
+ session->is_zuc256 = 1;
+ if (internals->qat_dev->has_wireless_slice)
+ is_wireless = 1;
break;
case RTE_CRYPTO_CIPHER_AES_XTS:
if ((cipher_xform->key.length/2) == ICP_QAT_HW_AES_192_KEY_SZ) {
@@ -587,6 +595,17 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
+ if (is_wireless) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+ /* Force usage of Wireless Cipher slice */
+ ICP_QAT_FW_USE_WCP_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WCP_SLICE);
+ session->is_wireless = 1;
+ }
+
return 0;
error_out:
@@ -820,9 +839,16 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
struct rte_crypto_auth_xform *auth_xform = qat_get_auth_xform(xform);
struct qat_cryptodev_private *internals = dev->data->dev_private;
const uint8_t *key_data = auth_xform->key.data;
- uint8_t key_length = auth_xform->key.length;
+ uint16_t key_length = auth_xform->key.length;
enum qat_device_gen qat_dev_gen =
internals->qat_dev->qat_dev_gen;
+ struct icp_qat_fw_la_bulk_req *req_tmpl = &session->fw_req;
+ struct icp_qat_fw_comn_req_hdr *header = &req_tmpl->comn_hdr;
+ struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *cd_ctrl =
+ (struct icp_qat_fw_cipher_auth_cd_ctrl_hdr *)
+ session->fw_req.cd_ctrl.content_desc_ctrl_lw;
+ uint8_t hash_flag = 0;
+ int is_wireless = 0;
session->aes_cmac = 0;
session->auth_key_length = auth_xform->key.length;
@@ -898,6 +924,10 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_AES_CMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ }
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
if (qat_sym_validate_aes_key(auth_xform->key.length,
@@ -918,6 +948,11 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
break;
case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2;
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_SNOW3G_UIA2_BITPOS;
+ }
break;
case RTE_CRYPTO_AUTH_MD5_HMAC:
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_MD5;
@@ -934,7 +969,35 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
rte_cryptodev_get_auth_algo_string(auth_xform->algo));
return -ENOTSUP;
}
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ if (key_length == ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ)
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3;
+ else if (key_length == ICP_QAT_HW_ZUC_256_KEY_SZ) {
+ switch (auth_xform->digest_length) {
+ case 4:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32;
+ break;
+ case 8:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64;
+ break;
+ case 16:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid digest length: %d",
+ auth_xform->digest_length);
+ return -ENOTSUP;
+ }
+ session->is_zuc256 = 1;
+ } else {
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
+ }
+ if (internals->qat_dev->has_wireless_slice) {
+ is_wireless = 1;
+ session->is_wireless = 1;
+ hash_flag = 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS;
+ } else
+ session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
break;
case RTE_CRYPTO_AUTH_MD5:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
@@ -1002,6 +1065,21 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
return -EINVAL;
}
+ if (is_wireless) {
+ if (!session->aes_cmac) {
+ /* Set the Use Extended Protocol Flags bit in LW 1 */
+ ICP_QAT_FW_USE_EXTENDED_PROTOCOL_FLAGS_SET(
+ header->ext_flags,
+ QAT_LA_USE_EXTENDED_PROTOCOL_FLAGS);
+
+ /* Set Hash Flags in LW 28 */
+ cd_ctrl->hash_flags |= hash_flag;
+ }
+ /* Force usage of Wireless Auth slice */
+ ICP_QAT_FW_USE_WAT_SLICE_SET(header->ext_flags,
+ QAT_LA_USE_WAT_SLICE);
+ }
+
return 0;
}
@@ -1204,6 +1282,15 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_3G_EIA3_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
return QAT_HW_ROUND_UP(ICP_QAT_HW_SNOW_3G_UIA2_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
@@ -1286,6 +1373,10 @@ static int qat_hash_get_block_size(enum icp_qat_hw_auth_algo qat_hash_alg)
return ICP_QAT_HW_AES_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_MD5:
return QAT_MD5_CBLOCK;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ return ICP_QAT_HW_ZUC_256_BLK_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum block size in this case */
return QAT_SHA512_CBLOCK;
@@ -2040,7 +2131,8 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
key_convert = ICP_QAT_HW_CIPHER_NO_CONVERT;
} else if (cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2
|| cdesc->qat_cipher_alg ==
- ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3) {
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3
+ || cdesc->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
key_convert = ICP_QAT_HW_CIPHER_KEY_CONVERT;
cdesc->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
} else if (cdesc->qat_dir == ICP_QAT_HW_CIPHER_ENCRYPT)
@@ -2075,6 +2167,17 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
cipher_cd_ctrl->cipher_state_sz =
ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ } else if (cdesc->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ if (cdesc->cipher_iv.length != 23 && cdesc->cipher_iv.length != 25) {
+ QAT_LOG(ERR, "Invalid IV length for ZUC256, must be 23 or 25.");
+ return -EINVAL;
+ }
+ total_key_size = ICP_QAT_HW_ZUC_256_KEY_SZ +
+ ICP_QAT_HW_ZUC_256_IV_SZ;
+ cipher_cd_ctrl->cipher_state_sz =
+ ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
} else {
total_key_size = cipherkeylen;
cipher_cd_ctrl->cipher_state_sz = ICP_QAT_HW_AES_BLK_SZ >> 3;
@@ -2246,6 +2349,9 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_KASUMI_F9
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
@@ -2519,7 +2625,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cdesc->aad_len = aad_length;
break;
case ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2:
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
+ if (!cdesc->is_wireless)
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_SNOW3G;
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_SNOW_3G_UIA2);
state2_size = ICP_QAT_HW_SNOW_3G_UIA2_STATE2_SZ;
@@ -2540,10 +2647,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
auth_param->hash_state_sz = ICP_QAT_HW_SNOW_3G_UEA2_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3:
- hash->auth_config.config =
- ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
- cdesc->qat_hash_alg, digestsize);
- cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ if (!cdesc->is_wireless) {
+ hash->auth_config.config =
+ ICP_QAT_HW_AUTH_CONFIG_BUILD(ICP_QAT_HW_AUTH_MODE0,
+ cdesc->qat_hash_alg, digestsize);
+ cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_ZUC;
+ }
state1_size = qat_hash_get_state1_size(
ICP_QAT_HW_AUTH_ALGO_ZUC_3G_128_EIA3);
state2_size = ICP_QAT_HW_ZUC_3G_EIA3_STATE2_SZ;
@@ -2554,6 +2663,18 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
cd_extra_size += ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ;
auth_param->hash_state_sz = ICP_QAT_HW_ZUC_3G_EEA3_IV_SZ >> 3;
+ break;
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64:
+ case ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128:
+ state1_size = qat_hash_get_state1_size(cdesc->qat_hash_alg);
+ state2_size = ICP_QAT_HW_ZUC_256_STATE2_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size + state2_size
+ + ICP_QAT_HW_ZUC_256_IV_SZ);
+
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ cd_extra_size += ICP_QAT_HW_ZUC_256_IV_SZ;
+ auth_param->hash_state_sz = ICP_QAT_HW_ZUC_256_IV_SZ >> 3;
break;
case ICP_QAT_HW_AUTH_ALGO_MD5:
#ifdef RTE_QAT_OPENSSL
@@ -2740,6 +2861,9 @@ int qat_sym_validate_zuc_key(int key_len, enum icp_qat_hw_cipher_algo *alg)
case ICP_QAT_HW_ZUC_3G_EEA3_KEY_SZ:
*alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3;
break;
+ case ICP_QAT_HW_ZUC_256_KEY_SZ:
+ *alg = ICP_QAT_HW_CIPHER_ALGO_ZUC_256;
+ break;
default:
return -EINVAL;
}
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 9209e2e8df..2e25c90342 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -140,6 +140,8 @@ struct qat_sym_session {
uint8_t is_auth;
uint8_t is_cnt_zero;
/* Some generations need different setup of counter */
+ uint8_t is_zuc256;
+ uint8_t is_wireless;
uint32_t slice_types;
enum qat_sym_proto_flag qat_proto_flag;
qat_sym_build_request_t build_request[2];
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 3/4] common/qat: add new gen3 CMAC macros
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
2024-02-26 17:08 ` [PATCH v3 1/4] common/qat: add new gen3 device Ciara Power
2024-02-26 17:08 ` [PATCH v3 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
@ 2024-02-26 17:08 ` Ciara Power
2024-02-26 17:08 ` [PATCH v3 4/4] common/qat: add gen5 device Ciara Power
2024-02-29 18:53 ` [EXT] [PATCH v3 0/4] add new QAT gen3 and gen5 Akhil Goyal
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-26 17:08 UTC (permalink / raw)
To: dev; +Cc: arkadiuszx.kusztal, gakhil, Ciara Power, Kai Ji
The new QAT GEN3 device uses new macros for CMAC values, rather than
the XCBC_MAC ones.
The wireless slice handles CMAC on the new gen3 device, so no key
precomputes are required by software.
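For context, the precompute that software performs for AES-CMAC when the hardware does not handle it is the K1/K2 subkey derivation of RFC 4493: double `L = AES-128_K(0^128)` in GF(2^128). A minimal sketch, using the RFC 4493 sample value of `L` (nothing QAT-specific):

```python
def dbl(block: bytes) -> bytes:
    """Double a 16-byte block in GF(2^128): shift left one bit and
    XOR in 0x87 when a carry falls out of the most significant bit."""
    n = int.from_bytes(block, "big") << 1
    if n >> 128:                       # carry out of bit 127
        n = (n ^ 0x87) & ((1 << 128) - 1)
    return n.to_bytes(16, "big")

# L = AES-128_K(0^128) for the RFC 4493 sample key
# 2b7e151628aed2a6abf7158809cf4f3c
L = bytes.fromhex("7df76b0c1ab899b33e42f047b91b546f")
K1 = dbl(L)    # used when the final message block is complete
K2 = dbl(K1)   # used when the final message block needs padding
```

With a wireless slice present, the session setup can hand the raw key to the device instead of deriving these subkeys first.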
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
drivers/common/qat/qat_adf/icp_qat_hw.h | 4 +++-
drivers/crypto/qat/qat_sym_session.c | 28 +++++++++++++++++++++----
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 4651fb90bb..b99dde2176 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -75,7 +75,7 @@ enum icp_qat_hw_auth_algo {
ICP_QAT_HW_AUTH_ALGO_RESERVED = 20,
ICP_QAT_HW_AUTH_ALGO_RESERVED1 = 21,
ICP_QAT_HW_AUTH_ALGO_RESERVED2 = 22,
- ICP_QAT_HW_AUTH_ALGO_RESERVED3 = 22,
+ ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC = 22,
ICP_QAT_HW_AUTH_ALGO_RESERVED4 = 23,
ICP_QAT_HW_AUTH_ALGO_RESERVED5 = 24,
ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 = 25,
@@ -180,6 +180,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_ZUC_256_MAC_32_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_64_STATE1_SZ 8
#define ICP_QAT_HW_ZUC_256_MAC_128_STATE1_SZ 16
+#define ICP_QAT_HW_AES_CMAC_STATE1_SZ 16
#define ICP_QAT_HW_NULL_STATE2_SZ 32
#define ICP_QAT_HW_MD5_STATE2_SZ 16
@@ -208,6 +209,7 @@ struct icp_qat_hw_auth_setup {
#define ICP_QAT_HW_GALOIS_H_SZ 16
#define ICP_QAT_HW_GALOIS_LEN_A_SZ 8
#define ICP_QAT_HW_GALOIS_E_CTR0_SZ 16
+#define ICP_QAT_HW_AES_128_CMAC_STATE2_SZ 16
struct icp_qat_hw_auth_sha512 {
struct icp_qat_hw_auth_setup inner_setup;
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index ebdad0bd67..b1649b8d18 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -922,11 +922,20 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
break;
case RTE_CRYPTO_AUTH_AES_CMAC:
- session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
session->aes_cmac = 1;
- if (internals->qat_dev->has_wireless_slice) {
- is_wireless = 1;
- session->is_wireless = 1;
+ if (!internals->qat_dev->has_wireless_slice) {
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC;
+ break;
+ }
+ is_wireless = 1;
+ session->is_wireless = 1;
+ switch (key_length) {
+ case ICP_QAT_HW_AES_128_KEY_SZ:
+ session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC;
+ break;
+ default:
+ QAT_LOG(ERR, "Invalid key length: %d", key_length);
+ return -ENOTSUP;
}
break;
case RTE_CRYPTO_AUTH_AES_GMAC:
@@ -1309,6 +1318,9 @@ static int qat_hash_get_state1_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_NULL:
return QAT_HW_ROUND_UP(ICP_QAT_HW_NULL_STATE1_SZ,
QAT_HW_DEFAULT_ALIGNMENT);
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ return QAT_HW_ROUND_UP(ICP_QAT_HW_AES_CMAC_STATE1_SZ,
+ QAT_HW_DEFAULT_ALIGNMENT);
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum state1 size in this case */
return QAT_HW_ROUND_UP(ICP_QAT_HW_SHA512_STATE1_SZ,
@@ -1345,6 +1357,7 @@ static int qat_hash_get_digest_size(enum icp_qat_hw_auth_algo qat_hash_alg)
case ICP_QAT_HW_AUTH_ALGO_MD5:
return ICP_QAT_HW_MD5_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC:
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
return ICP_QAT_HW_AES_XCBC_MAC_STATE1_SZ;
case ICP_QAT_HW_AUTH_ALGO_DELIMITER:
/* return maximum digest size in this case */
@@ -2353,6 +2366,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_XCBC_MAC
+ || cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL
|| cdesc->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_SM3
@@ -2593,6 +2607,12 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
return -EFAULT;
}
break;
+ case ICP_QAT_HW_AUTH_ALGO_AES_128_CMAC:
+ state1_size = ICP_QAT_HW_AES_CMAC_STATE1_SZ;
+ memset(cdesc->cd_cur_ptr, 0, state1_size);
+ memcpy(cdesc->cd_cur_ptr + state1_size, authkey, authkeylen);
+ state2_size = ICP_QAT_HW_AES_128_CMAC_STATE2_SZ;
+ break;
case ICP_QAT_HW_AUTH_ALGO_GALOIS_128:
case ICP_QAT_HW_AUTH_ALGO_GALOIS_64:
cdesc->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_GCM;
--
2.25.1
* [PATCH v3 4/4] common/qat: add gen5 device
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
` (2 preceding siblings ...)
2024-02-26 17:08 ` [PATCH v3 3/4] common/qat: add new gen3 CMAC macros Ciara Power
@ 2024-02-26 17:08 ` Ciara Power
2024-02-29 18:53 ` [EXT] [PATCH v3 0/4] add new QAT gen3 and gen5 Akhil Goyal
4 siblings, 0 replies; 19+ messages in thread
From: Ciara Power @ 2024-02-26 17:08 UTC (permalink / raw)
To: dev
Cc: arkadiuszx.kusztal, gakhil, Ciara Power, Kai Ji, Fan Zhang,
Ashish Gupta, Anatoly Burakov
Add new gen5 QAT device ID.
This device has a wireless slice, so a flag must be set to indicate
the wireless-enabled device.
Aside from the wireless slices and some extra capabilities for
wireless algorithms, the device is functionally the same as gen4 and can
reuse most functions and macros.
Symmetric, asymmetric and compression services are enabled.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
v3:
- Fixed copyright tag in new files to 2024.
- Removed v2 change notes from the commit, as this patch was new in v2.
---
doc/guides/cryptodevs/qat.rst | 4 +
doc/guides/rel_notes/release_24_03.rst | 6 +-
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 6 +
drivers/crypto/qat/qat_sym_session.c | 13 +-
15 files changed, 524 insertions(+), 30 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index 51190e12d6..28945bb5f3 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -27,6 +27,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology C4xxx``
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 300xx``
+* ``Intel QuickAssist Technology 420xx``
Features
@@ -179,6 +180,7 @@ poll mode crypto driver support for the following hardware accelerator devices:
* ``Intel QuickAssist Technology 4xxx``
* ``Intel QuickAssist Technology 401xxx``
* ``Intel QuickAssist Technology 300xx``
+* ``Intel QuickAssist Technology 420xx``
The QAT ASYM PMD has support for:
@@ -472,6 +474,8 @@ to see the full table)
+-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
| Yes | No | No | 4 | 402xx | IDZ/ N/A | qat_4xxx | 4xxx | 4944 | 2 | 4945 | 16 |
+-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
+ | Yes | Yes | Yes | 5 | 420xx | linux/6.8+ | qat_420xx | 420xx | 4946 | 2 | 4947 | 16 |
+ +-----+-----+-----+-----+----------+---------------+---------------+------------+--------+------+--------+--------+
* Note: Symmetric mixed crypto algorithms feature on Gen 2 works only with IDZ driver version 4.9.0+
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 0dee1ff104..439d354cd8 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -133,8 +133,10 @@ New Features
* **Updated Intel QuickAssist Technology driver.**
- * Enabled support for new QAT GEN3 (578a) devices in QAT crypto driver.
- * Enabled ZUC256 cipher and auth algorithm for wireless slice enabled GEN3 device.
+ * Enabled support for new QAT GEN3 (578a) and QAT GEN5 (4946)
+ devices in QAT crypto driver.
+ * Enabled ZUC256 cipher and auth algorithm for wireless slice
+ enabled GEN3 and GEN5 devices.
* **Updated Marvell cnxk crypto driver.**
diff --git a/drivers/common/qat/dev/qat_dev_gen4.c b/drivers/common/qat/dev/qat_dev_gen4.c
index 1ce262f715..2525e1e695 100644
--- a/drivers/common/qat/dev/qat_dev_gen4.c
+++ b/drivers/common/qat/dev/qat_dev_gen4.c
@@ -10,6 +10,7 @@
#include "adf_transport_access_macros_gen4vf.h"
#include "adf_pf2vf_msg.h"
#include "qat_pf2vf.h"
+#include "qat_dev_gens.h"
#include <stdint.h>
@@ -60,7 +61,7 @@ qat_select_valid_queue_gen4(struct qat_pci_device *qat_dev, int qp_id,
return -1;
}
-static const struct qat_qp_hw_data *
+const struct qat_qp_hw_data *
qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
enum qat_service_type service_type, uint16_t qp_id)
{
@@ -74,7 +75,7 @@ qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
return &dev_extra->qp_gen4_data[ring_pair][0];
}
-static int
+int
qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
enum qat_service_type service)
{
@@ -103,7 +104,7 @@ gen4_pick_service(uint8_t hw_service)
}
}
-static int
+int
qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
{
int i = 0;
@@ -143,7 +144,7 @@ qat_dev_read_config_gen4(struct qat_pci_device *qat_dev)
return 0;
}
-static void
+void
qat_qp_build_ring_base_gen4(void *io_addr,
struct qat_queue *queue)
{
@@ -155,7 +156,7 @@ qat_qp_build_ring_base_gen4(void *io_addr,
queue->hw_queue_number, queue_base);
}
-static void
+void
qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
void *base_addr, rte_spinlock_t *lock)
{
@@ -172,7 +173,7 @@ qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
rte_spinlock_unlock(lock);
}
-static void
+void
qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
void *base_addr, rte_spinlock_t *lock)
{
@@ -189,7 +190,7 @@ qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
rte_spinlock_unlock(lock);
}
-static void
+void
qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
{
uint32_t q_tx_config, q_resp_config;
@@ -208,14 +209,14 @@ qat_qp_adf_configure_queues_gen4(struct qat_qp *qp)
q_resp_config);
}
-static void
+void
qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q)
{
WRITE_CSR_RING_TAIL_GEN4VF(qp->mmap_bar_addr,
q->hw_bundle_number, q->hw_queue_number, q->tail);
}
-static void
+void
qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
uint32_t new_head)
{
@@ -223,7 +224,7 @@ qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
q->hw_bundle_number, q->hw_queue_number, new_head);
}
-static void
+void
qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
void *io_addr, struct qat_qp *qp)
{
@@ -246,7 +247,7 @@ static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen4 = {
.qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
};
-static int
+int
qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
{
int ret = 0, i;
@@ -268,13 +269,13 @@ qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev)
return 0;
}
-static const struct rte_mem_resource *
+const struct rte_mem_resource *
qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev)
{
return &pci_dev->mem_resource[0];
}
-static int
+int
qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
struct rte_pci_device *pci_dev)
{
@@ -282,14 +283,14 @@ qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
return 0;
}
-static int
+int
qat_dev_get_slice_map_gen4(uint32_t *map __rte_unused,
const struct rte_pci_device *pci_dev __rte_unused)
{
return 0;
}
-static int
+int
qat_dev_get_extra_size_gen4(void)
{
return sizeof(struct qat_dev_gen4_extra);
diff --git a/drivers/common/qat/dev/qat_dev_gen5.c b/drivers/common/qat/dev/qat_dev_gen5.c
new file mode 100644
index 0000000000..b79187b4d0
--- /dev/null
+++ b/drivers/common/qat/dev/qat_dev_gen5.c
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#include <dev_driver.h>
+#include <rte_pci.h>
+
+#include "qat_device.h"
+#include "qat_qp.h"
+#include "adf_pf2vf_msg.h"
+#include "qat_dev_gens.h"
+
+#include <stdint.h>
+
+static struct qat_pf2vf_dev qat_pf2vf_gen5 = {
+ .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET,
+ .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET,
+ .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT,
+ .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK,
+ .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT,
+ .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK,
+};
+
+static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen5 = {
+ .qat_qp_rings_per_service = qat_qp_rings_per_service_gen4,
+ .qat_qp_build_ring_base = qat_qp_build_ring_base_gen4,
+ .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen4,
+ .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen4,
+ .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen4,
+ .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen4,
+ .qat_qp_csr_write_head = qat_qp_csr_write_head_gen4,
+ .qat_qp_csr_setup = qat_qp_csr_setup_gen4,
+ .qat_qp_get_hw_data = qat_qp_get_hw_data_gen4,
+};
+
+static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen5 = {
+ .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen4,
+ .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen4,
+ .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen4,
+ .qat_dev_read_config = qat_dev_read_config_gen4,
+ .qat_dev_get_extra_size = qat_dev_get_extra_size_gen4,
+ .qat_dev_get_slice_map = qat_dev_get_slice_map_gen4,
+};
+
+RTE_INIT(qat_dev_gen_5_init)
+{
+ qat_qp_hw_spec[QAT_GEN5] = &qat_qp_hw_spec_gen5;
+ qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5;
+ qat_gen_config[QAT_GEN5].dev_gen = QAT_GEN5;
+ qat_gen_config[QAT_GEN5].pf2vf_dev = &qat_pf2vf_gen5;
+}
diff --git a/drivers/common/qat/dev/qat_dev_gens.h b/drivers/common/qat/dev/qat_dev_gens.h
index 7c92f1938c..14c172f22d 100644
--- a/drivers/common/qat/dev/qat_dev_gens.h
+++ b/drivers/common/qat/dev/qat_dev_gens.h
@@ -62,4 +62,58 @@ qat_dev_get_misc_bar_gen1(struct rte_mem_resource **mem_resource,
int
qat_dev_read_config_gen1(struct qat_pci_device *qat_dev);
+int
+qat_reset_ring_pairs_gen4(struct qat_pci_device *qat_pci_dev);
+
+const struct rte_mem_resource *
+qat_dev_get_transport_bar_gen4(struct rte_pci_device *pci_dev);
+
+int
+qat_dev_get_misc_bar_gen4(struct rte_mem_resource **mem_resource,
+ struct rte_pci_device *pci_dev);
+
+int
+qat_dev_read_config_gen4(struct qat_pci_device *qat_dev);
+
+int
+qat_dev_get_extra_size_gen4(void);
+
+int
+qat_dev_get_slice_map_gen4(uint32_t *map __rte_unused,
+ const struct rte_pci_device *pci_dev __rte_unused);
+
+int
+qat_qp_rings_per_service_gen4(struct qat_pci_device *qat_dev,
+ enum qat_service_type service);
+
+void
+qat_qp_build_ring_base_gen4(void *io_addr,
+ struct qat_queue *queue);
+
+void
+qat_qp_adf_arb_enable_gen4(const struct qat_queue *txq,
+ void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_arb_disable_gen4(const struct qat_queue *txq,
+ void *base_addr, rte_spinlock_t *lock);
+
+void
+qat_qp_adf_configure_queues_gen4(struct qat_qp *qp);
+
+void
+qat_qp_csr_write_tail_gen4(struct qat_qp *qp, struct qat_queue *q);
+
+void
+qat_qp_csr_write_head_gen4(struct qat_qp *qp, struct qat_queue *q,
+ uint32_t new_head);
+
+void
+qat_qp_csr_setup_gen4(struct qat_pci_device *qat_dev,
+ void *io_addr, struct qat_qp *qp);
+
+const struct qat_qp_hw_data *
+qat_qp_get_hw_data_gen4(struct qat_pci_device *qat_dev,
+ enum qat_service_type service_type, uint16_t qp_id);
+
#endif
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 62abcb6fe3..d79085258f 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -82,6 +82,7 @@ sources += files(
'dev/qat_dev_gen2.c',
'dev/qat_dev_gen3.c',
'dev/qat_dev_gen4.c',
+ 'dev/qat_dev_gen5.c',
)
includes += include_directories(
'qat_adf',
@@ -95,6 +96,7 @@ if qat_compress
'dev/qat_comp_pmd_gen2.c',
'dev/qat_comp_pmd_gen3.c',
'dev/qat_comp_pmd_gen4.c',
+ 'dev/qat_comp_pmd_gen5.c',
]
sources += files(join_paths(qat_compress_relpath, f))
endforeach
@@ -108,6 +110,7 @@ if qat_crypto
'dev/qat_crypto_pmd_gen2.c',
'dev/qat_crypto_pmd_gen3.c',
'dev/qat_crypto_pmd_gen4.c',
+ 'dev/qat_crypto_pmd_gen5.c',
]
sources += files(join_paths(qat_crypto_relpath, f))
endforeach
diff --git a/drivers/common/qat/qat_common.h b/drivers/common/qat/qat_common.h
index 9411a79301..dc48a2e1ee 100644
--- a/drivers/common/qat/qat_common.h
+++ b/drivers/common/qat/qat_common.h
@@ -21,6 +21,7 @@ enum qat_device_gen {
QAT_GEN2,
QAT_GEN3,
QAT_GEN4,
+ QAT_GEN5,
QAT_N_GENS
};
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 0e7d387d78..0ccc3f85fd 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -65,6 +65,9 @@ static const struct rte_pci_id pci_id_qat_map[] = {
{
RTE_PCI_DEVICE(0x8086, 0x4945),
},
+ {
+ RTE_PCI_DEVICE(0x8086, 0x4947),
+ },
{.device_id = 0},
};
@@ -203,6 +206,8 @@ pick_gen(const struct rte_pci_device *pci_dev)
case 0x4943:
case 0x4945:
return QAT_GEN4;
+ case 0x4947:
+ return QAT_GEN5;
default:
QAT_LOG(ERR, "Invalid dev_id, can't determine generation");
return QAT_N_GENS;
@@ -212,7 +217,8 @@ pick_gen(const struct rte_pci_device *pci_dev)
static int
wireless_slice_support(uint16_t pci_dev_id)
{
- return pci_dev_id == 0x578b;
+ return pci_dev_id == 0x578b ||
+ pci_dev_id == 0x4947;
}
struct qat_pci_device *
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
index 05906f13e0..68d111e07c 100644
--- a/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen4.c
@@ -27,7 +27,7 @@ qat_gen4_comp_capabilities[] = {
.window_size = {.min = 15, .max = 15, .increment = 0} },
RTE_COMP_END_OF_CAPABILITIES_LIST() };
-static int
+int
qat_comp_dev_config_gen4(struct rte_compressdev *dev,
struct rte_compressdev_config *config)
{
@@ -67,13 +67,13 @@ qat_comp_cap_get_gen4(struct qat_pci_device *qat_dev __rte_unused)
return capa_info;
}
-static uint16_t
+uint16_t
qat_comp_get_ram_bank_flags_gen4(void)
{
return 0;
}
-static int
+int
qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
const struct rte_comp_xform *xform,
enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word)
@@ -189,7 +189,7 @@ qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
return 0;
}
-static unsigned int
+unsigned int
qat_comp_get_num_im_bufs_required_gen4(void)
{
return QAT_NUM_INTERM_BUFS_GEN4;
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gen5.c b/drivers/compress/qat/dev/qat_comp_pmd_gen5.c
new file mode 100644
index 0000000000..3cfa07e605
--- /dev/null
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gen5.c
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#include "qat_comp.h"
+#include "qat_comp_pmd.h"
+#include "qat_comp_pmd_gens.h"
+#include "icp_qat_hw_gen4_comp.h"
+#include "icp_qat_hw_gen4_comp_defs.h"
+
+static const struct rte_compressdev_capabilities
+qat_gen5_comp_capabilities[] = {
+ {/* COMPRESSION - deflate */
+ .algo = RTE_COMP_ALGO_DEFLATE,
+ .comp_feature_flags = RTE_COMP_FF_MULTI_PKT_CHECKSUM |
+ RTE_COMP_FF_CRC32_CHECKSUM |
+ RTE_COMP_FF_ADLER32_CHECKSUM |
+ RTE_COMP_FF_CRC32_ADLER32_CHECKSUM |
+ RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+ RTE_COMP_FF_HUFFMAN_FIXED |
+ RTE_COMP_FF_HUFFMAN_DYNAMIC |
+ RTE_COMP_FF_OOP_SGL_IN_SGL_OUT |
+ RTE_COMP_FF_OOP_SGL_IN_LB_OUT |
+ RTE_COMP_FF_OOP_LB_IN_SGL_OUT,
+ .window_size = {.min = 15, .max = 15, .increment = 0} },
+ RTE_COMP_END_OF_CAPABILITIES_LIST() };
+
+static struct rte_compressdev_ops qat_comp_ops_gen5 = {
+
+ /* Device related operations */
+ .dev_configure = qat_comp_dev_config_gen4,
+ .dev_start = qat_comp_dev_start,
+ .dev_stop = qat_comp_dev_stop,
+ .dev_close = qat_comp_dev_close,
+ .dev_infos_get = qat_comp_dev_info_get,
+
+ .stats_get = qat_comp_stats_get,
+ .stats_reset = qat_comp_stats_reset,
+ .queue_pair_setup = qat_comp_qp_setup,
+ .queue_pair_release = qat_comp_qp_release,
+
+ /* Compression related operations */
+ .private_xform_create = qat_comp_private_xform_create,
+ .private_xform_free = qat_comp_private_xform_free,
+ .stream_create = qat_comp_stream_create,
+ .stream_free = qat_comp_stream_free
+};
+
+static struct qat_comp_capabilities_info
+qat_comp_cap_get_gen5(struct qat_pci_device *qat_dev __rte_unused)
+{
+ struct qat_comp_capabilities_info capa_info = {
+ .data = qat_gen5_comp_capabilities,
+ .size = sizeof(qat_gen5_comp_capabilities)
+ };
+ return capa_info;
+}
+
+RTE_INIT(qat_comp_pmd_gen5_init)
+{
+ qat_comp_gen_dev_ops[QAT_GEN5].compressdev_ops =
+ &qat_comp_ops_gen5;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_capabilities =
+ qat_comp_cap_get_gen5;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_num_im_bufs_required =
+ qat_comp_get_num_im_bufs_required_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_ram_bank_flags =
+ qat_comp_get_ram_bank_flags_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_set_slice_cfg_word =
+ qat_comp_set_slice_cfg_word_gen4;
+ qat_comp_gen_dev_ops[QAT_GEN5].qat_comp_get_feature_flags =
+ qat_comp_get_features_gen1;
+}
diff --git a/drivers/compress/qat/dev/qat_comp_pmd_gens.h b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
index 67293092ea..e329fe3e18 100644
--- a/drivers/compress/qat/dev/qat_comp_pmd_gens.h
+++ b/drivers/compress/qat/dev/qat_comp_pmd_gens.h
@@ -25,6 +25,20 @@ int qat_comp_set_slice_cfg_word_gen1(struct qat_comp_xform *qat_xform,
uint64_t qat_comp_get_features_gen1(void);
+unsigned int
+qat_comp_get_num_im_bufs_required_gen4(void);
+
+int
+qat_comp_set_slice_cfg_word_gen4(struct qat_comp_xform *qat_xform,
+ const struct rte_comp_xform *xform,
+ enum rte_comp_op_type op_type, uint32_t *comp_slice_cfg_word);
+
+uint16_t qat_comp_get_ram_bank_flags_gen4(void);
+
+int
+qat_comp_dev_config_gen4(struct rte_compressdev *dev,
+ struct rte_compressdev_config *config);
+
extern struct rte_compressdev_ops qat_comp_ops_gen1;
#endif /* _QAT_COMP_PMD_GENS_H_ */
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
index de72383d4b..9c7f7d98c8 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c
@@ -233,7 +233,7 @@ qat_sym_build_op_aead_gen4(void *in_op, struct qat_sym_session *ctx,
return 0;
}
-static int
+int
qat_sym_crypto_set_session_gen4(void *cdev, void *session)
{
struct qat_sym_session *ctx = session;
@@ -385,7 +385,7 @@ qat_sym_dp_enqueue_aead_jobs_gen4(void *qp_data, uint8_t *drv_ctx,
return i;
}
-static int
+int
qat_sym_configure_raw_dp_ctx_gen4(void *_raw_dp_ctx, void *_ctx)
{
struct rte_crypto_raw_dp_ctx *raw_dp_ctx = _raw_dp_ctx;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
new file mode 100644
index 0000000000..1902430480
--- /dev/null
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
@@ -0,0 +1,278 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 Intel Corporation
+ */
+
+#include <rte_cryptodev.h>
+#include <cryptodev_pmd.h>
+#include "qat_sym_session.h"
+#include "qat_sym.h"
+#include "qat_asym.h"
+#include "qat_crypto.h"
+#include "qat_crypto_pmd_gens.h"
+
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_legacy_caps_gen5[] = {
+ QAT_SYM_PLAIN_AUTH_CAP(SHA1,
+ CAP_SET(block_size, 64),
+ CAP_RNG(digest_size, 1, 20, 1)),
+ QAT_SYM_AUTH_CAP(SHA224,
+ CAP_SET(block_size, 64),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 28, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA224_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 28, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA1_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 20, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(SM4_ECB,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 0, 0, 0)),
+};
+
+static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen5[] = {
+ QAT_SYM_CIPHER_CAP(AES_CBC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(SHA256_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 1, 64, 1), CAP_RNG(digest_size, 1, 32, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA384_HMAC,
+ CAP_SET(block_size, 128),
+ CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 48, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA512_HMAC,
+ CAP_SET(block_size, 128),
+ CAP_RNG(key_size, 1, 128, 1), CAP_RNG(digest_size, 1, 64, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(AES_XCBC_MAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 12, 12, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(AES_CMAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(AES_DOCSISBPI,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(NULL,
+ CAP_SET(block_size, 1),
+ CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(digest_size),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(NULL,
+ CAP_SET(block_size, 1),
+ CAP_RNG_ZERO(key_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA256,
+ CAP_SET(block_size, 64),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 32, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA384,
+ CAP_SET(block_size, 128),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 48, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_AUTH_CAP(SHA512,
+ CAP_SET(block_size, 128),
+ CAP_RNG_ZERO(key_size), CAP_RNG(digest_size, 1, 64, 1),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(AES_CTR,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AEAD_CAP(AES_GCM,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+ CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 0, 12, 12)),
+ QAT_SYM_AEAD_CAP(AES_CCM,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 16, 2),
+ CAP_RNG(aad_size, 0, 224, 1), CAP_RNG(iv_size, 7, 13, 1)),
+ QAT_SYM_AUTH_CAP(AES_GMAC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 8), CAP_RNG(digest_size, 8, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 0, 12, 12)),
+ QAT_SYM_AEAD_CAP(CHACHA20_POLY1305,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 32, 32, 0),
+ CAP_RNG(digest_size, 16, 16, 0),
+ CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 0)),
+ QAT_SYM_CIPHER_CAP(SM4_CBC,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_CIPHER_CAP(SM4_CTR,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_PLAIN_AUTH_CAP(SM3,
+ CAP_SET(block_size, 64),
+ CAP_RNG(digest_size, 32, 32, 0)),
+ QAT_SYM_AUTH_CAP(SM3_HMAC,
+ CAP_SET(block_size, 64),
+ CAP_RNG(key_size, 16, 64, 4), CAP_RNG(digest_size, 32, 32, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG_ZERO(iv_size)),
+ QAT_SYM_CIPHER_CAP(ZUC_EEA3,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(iv_size, 16, 25, 1)),
+ QAT_SYM_AUTH_CAP(ZUC_EIA3,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 32, 16), CAP_RNG(digest_size, 4, 16, 4),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 25, 1)),
+ QAT_SYM_CIPHER_CAP(SNOW3G_UEA2,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(iv_size, 16, 16, 0)),
+ QAT_SYM_AUTH_CAP(SNOW3G_UIA2,
+ CAP_SET(block_size, 16),
+ CAP_RNG(key_size, 16, 16, 0), CAP_RNG(digest_size, 4, 4, 0),
+ CAP_RNG_ZERO(aad_size), CAP_RNG(iv_size, 16, 16, 0)),
+ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+static int
+check_cipher_capa(const struct rte_cryptodev_capabilities *cap,
+ enum rte_crypto_cipher_algorithm algo)
+{
+ if (cap->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
+ return 0;
+ if (cap->sym.xform_type != RTE_CRYPTO_SYM_XFORM_CIPHER)
+ return 0;
+ if (cap->sym.cipher.algo != algo)
+ return 0;
+ return 1;
+}
+
+static int
+check_auth_capa(const struct rte_cryptodev_capabilities *cap,
+ enum rte_crypto_auth_algorithm algo)
+{
+ if (cap->op != RTE_CRYPTO_OP_TYPE_SYMMETRIC)
+ return 0;
+ if (cap->sym.xform_type != RTE_CRYPTO_SYM_XFORM_AUTH)
+ return 0;
+ if (cap->sym.auth.algo != algo)
+ return 0;
+ return 1;
+}
+
+static int
+qat_sym_crypto_cap_get_gen5(struct qat_cryptodev_private *internals,
+ const char *capa_memz_name,
+			const uint16_t slice_map)
+{
+ uint32_t legacy_capa_num, capa_num;
+ uint32_t size = sizeof(qat_sym_crypto_caps_gen5);
+ uint32_t legacy_size = sizeof(qat_sym_crypto_legacy_caps_gen5);
+ uint32_t i, iter = 0;
+ uint32_t curr_capa = 0;
+	legacy_capa_num = legacy_size / sizeof(struct rte_cryptodev_capabilities);
+ capa_num = RTE_DIM(qat_sym_crypto_caps_gen5);
+
+ if (unlikely(qat_legacy_capa))
+ size = size + legacy_size;
+
+ internals->capa_mz = rte_memzone_lookup(capa_memz_name);
+ if (internals->capa_mz == NULL) {
+ internals->capa_mz = rte_memzone_reserve(capa_memz_name,
+ size, rte_socket_id(), 0);
+ if (internals->capa_mz == NULL) {
+ QAT_LOG(DEBUG,
+ "Error allocating memzone for capabilities");
+ return -1;
+ }
+ }
+
+ struct rte_cryptodev_capabilities *addr =
+ (struct rte_cryptodev_capabilities *)
+ internals->capa_mz->addr;
+
+ struct rte_cryptodev_capabilities *capabilities;
+
+ if (unlikely(qat_legacy_capa)) {
+ capabilities = qat_sym_crypto_legacy_caps_gen5;
+ memcpy(addr, capabilities, legacy_size);
+ addr += legacy_capa_num;
+ }
+ capabilities = qat_sym_crypto_caps_gen5;
+
+ for (i = 0; i < capa_num; i++, iter++) {
+ if (slice_map & ICP_ACCEL_MASK_ZUC_256_SLICE && (
+ check_auth_capa(&capabilities[iter],
+ RTE_CRYPTO_AUTH_ZUC_EIA3) ||
+ check_cipher_capa(&capabilities[iter],
+ RTE_CRYPTO_CIPHER_ZUC_EEA3))) {
+ continue;
+ }
+
+ memcpy(addr + curr_capa, capabilities + iter,
+ sizeof(struct rte_cryptodev_capabilities));
+ curr_capa++;
+ }
+ internals->qat_dev_capabilities = internals->capa_mz->addr;
+
+ return 0;
+}
+
+static int
+qat_sym_crypto_set_session_gen5(void *cdev, void *session)
+{
+ struct qat_sym_session *ctx = session;
+ enum rte_proc_type_t proc_type = rte_eal_process_type();
+ int ret;
+
+ if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID)
+ return -EINVAL;
+
+ ret = qat_sym_crypto_set_session_gen4(cdev, session);
+
+ if (ret == -ENOTSUP) {
+		/* GEN4 returns -ENOTSUP for mixed algorithm
+		 * combinations it cannot handle; these are addressed by GEN5.
+		 */
+ if ((ctx->aes_cmac ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) &&
+ (ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_SNOW_3G_UEA2 ||
+ ctx->qat_cipher_alg ==
+ ICP_QAT_HW_CIPHER_ALGO_ZUC_3G_128_EEA3 ||
+ ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_ZUC_256)) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx, 0);
+ } else if ((ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_32 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_64 ||
+ ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_ZUC_256_MAC_128) &&
+ ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_ZUC_256) {
+ qat_sym_session_set_ext_hash_flags_gen2(ctx,
+ 1 << ICP_QAT_FW_AUTH_HDR_FLAG_ZUC_EIA3_BITPOS);
+ }
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+RTE_INIT(qat_sym_crypto_gen5_init)
+{
+ qat_sym_gen_dev_ops[QAT_GEN5].cryptodev_ops = &qat_sym_crypto_ops_gen1;
+ qat_sym_gen_dev_ops[QAT_GEN5].get_capabilities =
+ qat_sym_crypto_cap_get_gen5;
+ qat_sym_gen_dev_ops[QAT_GEN5].set_session =
+ qat_sym_crypto_set_session_gen5;
+ qat_sym_gen_dev_ops[QAT_GEN5].set_raw_dp_ctx =
+ qat_sym_configure_raw_dp_ctx_gen4;
+ qat_sym_gen_dev_ops[QAT_GEN5].get_feature_flags =
+ qat_sym_crypto_feature_flags_get_gen1;
+ qat_sym_gen_dev_ops[QAT_GEN5].create_security_ctx =
+ qat_sym_create_security_gen1;
+}
+
+RTE_INIT(qat_asym_crypto_gen5_init)
+{
+ qat_asym_gen_dev_ops[QAT_GEN5].cryptodev_ops =
+ &qat_asym_crypto_ops_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].get_capabilities =
+ qat_asym_crypto_cap_get_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].get_feature_flags =
+ qat_asym_crypto_feature_flags_get_gen1;
+ qat_asym_gen_dev_ops[QAT_GEN5].set_session =
+ qat_asym_crypto_set_session_gen1;
+}
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index ff7ba55c01..60b0f0551c 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -1048,10 +1048,16 @@ qat_sym_crypto_feature_flags_get_gen1(struct qat_pci_device *qat_dev);
int
qat_sym_crypto_set_session_gen1(void *cryptodev, void *session);
+int
+qat_sym_crypto_set_session_gen4(void *cryptodev, void *session);
+
void
qat_sym_session_set_ext_hash_flags_gen2(struct qat_sym_session *session,
uint8_t hash_flag);
+int
+qat_sym_configure_raw_dp_ctx_gen4(void *_raw_dp_ctx, void *_ctx);
+
int
qat_asym_crypto_cap_get_gen1(struct qat_cryptodev_private *internals,
const char *capa_memz_name, const uint16_t slice_map);
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index b1649b8d18..39e4a833ec 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -407,7 +407,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
goto error_out;
}
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
break;
case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
@@ -950,7 +950,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev,
session->auth_iv.length = AES_GCM_J0_LEN;
else
session->is_iv12B = 1;
- if (qat_dev_gen == QAT_GEN4) {
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5) {
session->is_cnt_zero = 1;
session->is_ucs = 1;
}
@@ -1126,7 +1126,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_GALOIS_128;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
if (session->cipher_iv.length == 0) {
session->cipher_iv.length = AES_GCM_J0_LEN;
@@ -1146,13 +1146,13 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
}
session->qat_mode = ICP_QAT_HW_CIPHER_CTR_MODE;
session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_AES_CBC_MAC;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
break;
case RTE_CRYPTO_AEAD_CHACHA20_POLY1305:
if (aead_xform->key.length != ICP_QAT_HW_CHACHAPOLY_KEY_SZ)
return -EINVAL;
- if (qat_dev_gen == QAT_GEN4)
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5)
session->is_ucs = 1;
session->qat_cipher_alg =
ICP_QAT_HW_CIPHER_ALGO_CHACHA20_POLY1305;
@@ -2418,7 +2418,7 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
auth_param->u2.inner_prefix_sz =
qat_hash_get_block_size(cdesc->qat_hash_alg);
auth_param->hash_state_sz = digestsize;
- if (qat_dev_gen == QAT_GEN4) {
+ if (qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5) {
ICP_QAT_FW_HASH_FLAG_MODE2_SET(
hash_cd_ctrl->hash_flags,
QAT_FW_LA_MODE2);
@@ -2984,6 +2984,7 @@ qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd);
break;
case QAT_GEN4:
+ case QAT_GEN5:
crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE;
crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL;
crc_cfg.hash_cmp_val = 0;
--
2.25.1
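Every hunk in qat_sym_session.c above applies the same change: each `qat_dev_gen == QAT_GEN4` test becomes `qat_dev_gen == QAT_GEN4 || qat_dev_gen == QAT_GEN5`. A hypothetical helper predicate would consolidate the repetition so a future generation is added in one place; the enum and names below are illustrative, not the upstream `qat_common.h` definitions:

```c
#include <stdbool.h>

/* Illustrative generations enum; the real one lives in the driver's
 * qat_common.h and has more members. */
enum qat_device_gen {
	QAT_GEN1, QAT_GEN2, QAT_GEN3, QAT_GEN4, QAT_GEN5
};

/* True for generations that use the UCS (unified crypto slice) cipher
 * config word, i.e. the condition repeated at each call site above. */
static inline bool
qat_gen_uses_ucs(enum qat_device_gen gen)
{
	return gen == QAT_GEN4 || gen == QAT_GEN5;
}
```

Call sites would then read `if (qat_gen_uses_ucs(qat_dev_gen)) session->is_ucs = 1;`, keeping the generation list out of the session-setup logic.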
* RE: [PATCH v2 0/4] add new QAT gen3 and gen5
2024-02-26 13:32 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ji, Kai
@ 2024-02-29 9:48 ` Kusztal, ArkadiuszX
0 siblings, 0 replies; 19+ messages in thread
From: Kusztal, ArkadiuszX @ 2024-02-29 9:48 UTC (permalink / raw)
To: Ji, Kai, Power, Ciara, dev; +Cc: gakhil
Series-acked-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Series-acked-by: Kai Ji <kai.ji@intel.com>
From: Power, Ciara <ciara.power@intel.com>
Sent: 23 February 2024 15:12
To: dev@dpdk.org
Cc: gakhil@marvell.com; Ji, Kai <kai.ji@intel.com>; Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; Power, Ciara <ciara.power@intel.com>
Subject: [PATCH v2 0/4] add new QAT gen3 and gen5
This patchset adds support for two new QAT devices: a new
GEN3 device and a GEN5 device, both of which have
wireless slice support for algorithms such as ZUC-256.
Symmetric, asymmetric and compression are all supported
for these devices.
v2:
- New patch added for gen5 device that reuses gen4 code,
and new gen3 wireless slice changes.
- Removed patch to disable asymmetric and compression.
- Documentation updates added.
- Fixed ZUC-256 IV modification for raw API path.
- Fixed setting extended protocol flag bit position.
- Added check for ZUC-256 wireless slice in slice map.
Ciara Power (4):
common/qat: add new gen3 device
common/qat: add zuc256 wireless slice for gen3
common/qat: add new gen3 CMAC macros
common/qat: add gen5 device
doc/guides/compressdevs/qat_comp.rst | 1 +
doc/guides/cryptodevs/qat.rst | 6 +
doc/guides/rel_notes/release_24_03.rst | 7 +
drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
drivers/common/qat/meson.build | 3 +
drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++
drivers/common/qat/qat_adf/icp_qat_hw.h | 26 +-
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 19 ++
drivers/common/qat/qat_device.h | 2 +
drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 63 ++++-
drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 40 ++-
drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 +++
drivers/crypto/qat/qat_sym_session.c | 177 ++++++++++--
drivers/crypto/qat/qat_sym_session.h | 2 +
24 files changed, 889 insertions(+), 51 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
--
2.25.1
* RE: [EXT] [PATCH v3 0/4] add new QAT gen3 and gen5
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
` (3 preceding siblings ...)
2024-02-26 17:08 ` [PATCH v3 4/4] common/qat: add gen5 device Ciara Power
@ 2024-02-29 18:53 ` Akhil Goyal
4 siblings, 0 replies; 19+ messages in thread
From: Akhil Goyal @ 2024-02-29 18:53 UTC (permalink / raw)
To: Ciara Power, dev; +Cc: arkadiuszx.kusztal
> This patchset adds support for two new QAT devices.
> A new GEN3 device, and a GEN5 device, both of which have
> wireless slice support for algorithms such as ZUC-256.
>
> Symmetric, asymmetric and compression are all supported
> for these devices.
>
> v3:
> - Modified year in licence tag of new gen5 files.
> v2:
> - New patch added for gen5 device that reuses gen4 code,
> and new gen3 wireless slice changes.
> - Removed patch to disable asymmetric and compression.
> - Documentation updates added.
> - Fixed ZUC-256 IV modification for raw API path.
> - Fixed setting extended protocol flag bit position.
> - Added check for ZUC-256 wireless slice in slice map.
>
> Ciara Power (4):
> common/qat: add new gen3 device
> common/qat: add zuc256 wireless slice for gen3
> common/qat: add new gen3 CMAC macros
> common/qat: add gen5 device
>
> doc/guides/compressdevs/qat_comp.rst | 1 +
> doc/guides/cryptodevs/qat.rst | 6 +
> doc/guides/rel_notes/release_24_03.rst | 7 +
> drivers/common/qat/dev/qat_dev_gen4.c | 31 ++-
> drivers/common/qat/dev/qat_dev_gen5.c | 51 ++++
> drivers/common/qat/dev/qat_dev_gens.h | 54 ++++
> drivers/common/qat/meson.build | 3 +
> drivers/common/qat/qat_adf/icp_qat_fw.h | 6 +-
> drivers/common/qat/qat_adf/icp_qat_fw_la.h | 24 ++
> drivers/common/qat/qat_adf/icp_qat_hw.h | 26 +-
> drivers/common/qat/qat_common.h | 1 +
> drivers/common/qat/qat_device.c | 19 ++
> drivers/common/qat/qat_device.h | 2 +
> drivers/compress/qat/dev/qat_comp_pmd_gen4.c | 8 +-
> drivers/compress/qat/dev/qat_comp_pmd_gen5.c | 73 +++++
> drivers/compress/qat/dev/qat_comp_pmd_gens.h | 14 +
> drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 7 +-
> drivers/crypto/qat/dev/qat_crypto_pmd_gen3.c | 63 ++++-
> drivers/crypto/qat/dev/qat_crypto_pmd_gen4.c | 4 +-
> drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 278 +++++++++++++++++++
> drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 40 ++-
> drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 43 +++
> drivers/crypto/qat/qat_sym_session.c | 177 ++++++++++--
> drivers/crypto/qat/qat_sym_session.h | 2 +
> 24 files changed, 889 insertions(+), 51 deletions(-)
> create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c
> create mode 100644 drivers/compress/qat/dev/qat_comp_pmd_gen5.c
> create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c
>
Series applied to dpdk-next-crypto
Thanks.
* [PATCH v3 0/4] add new QAT gen3 and gen5
2023-12-20 13:26 [PATCH 1/4] common/qat: add files specific to GEN5 Nishikant Nayak
@ 2024-02-27 9:35 ` Nishikant Nayak
0 siblings, 0 replies; 19+ messages in thread
From: Nishikant Nayak @ 2024-02-27 9:35 UTC (permalink / raw)
To: dev
Cc: ciara.power, kai.ji, arkadiuszx.kusztal, rakesh.s.joshi, Nishikant Nayak
This patchset adds support for two new QAT devices: a new
GEN3 device and a GEN5 device, both of which have
wireless slice support for algorithms such as ZUC-256.
Symmetric, asymmetric and compression are all supported
for these devices.
v3:
- Fixed typos in commit and code comments.
- Replaced use of linux/kernel.h macro with local macro
to fix ARM compilation in CI.
v2:
- New patch added for gen5 device that reuses gen4 code,
and new gen3 wireless slice changes.
- Removed patch to disable asymmetric and compression.
- Documentation updates added.
- Fixed ZUC-256 IV modification for raw API path.
- Fixed setting extended protocol flag bit position.
- Added check for ZUC-256 wireless slice in slice map.
Nishikant Nayak (4):
common/qat: add files specific to GEN LCE
common/qat: update common driver to support GEN LCE
crypto/qat: update headers for GEN LCE support
test/cryptodev: add tests for GCM with AAD
.mailmap | 1 +
app/test/test_cryptodev.c | 48 ++-
app/test/test_cryptodev_aead_test_vectors.h | 62 ++++
drivers/common/qat/dev/qat_dev_gen_lce.c | 306 ++++++++++++++++
drivers/common/qat/meson.build | 2 +
.../qat/qat_adf/adf_transport_access_macros.h | 1 +
.../adf_transport_access_macros_gen_lce.h | 51 +++
.../adf_transport_access_macros_gen_lcevf.h | 48 +++
drivers/common/qat/qat_adf/icp_qat_fw.h | 34 ++
drivers/common/qat/qat_adf/icp_qat_fw_la.h | 59 +++-
drivers/common/qat/qat_common.h | 1 +
drivers/common/qat/qat_device.c | 9 +
.../crypto/qat/dev/qat_crypto_pmd_gen_lce.c | 329 ++++++++++++++++++
drivers/crypto/qat/qat_sym.c | 16 +-
drivers/crypto/qat/qat_sym.h | 66 +++-
drivers/crypto/qat/qat_sym_session.c | 62 +++-
drivers/crypto/qat/qat_sym_session.h | 10 +-
17 files changed, 1089 insertions(+), 16 deletions(-)
create mode 100644 drivers/common/qat/dev/qat_dev_gen_lce.c
create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen_lce.h
create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen_lcevf.h
create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen_lce.c
--
2.25.1
end of thread, other threads:[~2024-02-29 18:53 UTC | newest]
Thread overview: 19+ messages
-- links below jump to the message on this page --
2023-12-19 15:51 [PATCH 0/4] add new QAT gen3 device Ciara Power
2023-12-19 15:51 ` [PATCH 1/4] crypto/qat: add new " Ciara Power
2023-12-19 15:51 ` [PATCH 2/4] crypto/qat: add zuc256 wireless slice for gen3 Ciara Power
2023-12-19 15:51 ` [PATCH 3/4] crypto/qat: add new gen3 CMAC macros Ciara Power
2023-12-19 15:51 ` [PATCH 4/4] crypto/qat: disable asym and compression for new gen3 device Ciara Power
2024-02-23 15:12 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ciara Power
2024-02-23 15:12 ` [PATCH v2 1/4] common/qat: add new gen3 device Ciara Power
2024-02-23 15:12 ` [PATCH v2 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
2024-02-23 15:12 ` [PATCH v2 3/4] common/qat: add new gen3 CMAC macros Ciara Power
2024-02-23 15:12 ` [PATCH v2 4/4] common/qat: add gen5 device Ciara Power
2024-02-26 13:32 ` [PATCH v2 0/4] add new QAT gen3 and gen5 Ji, Kai
2024-02-29 9:48 ` Kusztal, ArkadiuszX
2024-02-26 17:08 ` [PATCH v3 " Ciara Power
2024-02-26 17:08 ` [PATCH v3 1/4] common/qat: add new gen3 device Ciara Power
2024-02-26 17:08 ` [PATCH v3 2/4] common/qat: add zuc256 wireless slice for gen3 Ciara Power
2024-02-26 17:08 ` [PATCH v3 3/4] common/qat: add new gen3 CMAC macros Ciara Power
2024-02-26 17:08 ` [PATCH v3 4/4] common/qat: add gen5 device Ciara Power
2024-02-29 18:53 ` [EXT] [PATCH v3 0/4] add new QAT gen3 and gen5 Akhil Goyal
2023-12-20 13:26 [PATCH 1/4] common/qat: add files specific to GEN5 Nishikant Nayak
2024-02-27 9:35 ` [PATCH v3 0/4] add new QAT gen3 and gen5 Nishikant Nayak