* [PATCH 0/2] crypto/qat: added cipher-crc offload feature
@ 2023-03-08 12:12 Kevin O'Sullivan
  2023-03-08 12:12 ` [PATCH 1/2] crypto/qat: added cipher-crc offload support Kevin O'Sullivan
  ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Kevin O'Sullivan @ 2023-03-08 12:12 UTC (permalink / raw)
  To: dev; +Cc: kai.ji, Kevin O'Sullivan

This patchset adds support to the QAT PMD for combined cipher-crc
processing on the QAT device. The current QAT PMD implementation of
cipher-crc calculates the CRC in software and uses QAT only for
encryption/decryption offload.

Note: the existing code path is retained for QAT versions without
support for combined cipher-crc offload.

- Support has been added to the DPDK QAT PMD to enable the use of the
  cipher-crc offload feature on gen2/gen3/gen4 QAT devices.
- A cipher-crc offload capability check has been added to the queue
  pair setup function to determine whether the feature is supported
  on the QAT device.

Kevin O'Sullivan (2):
  crypto/qat: added cipher-crc offload support
  crypto/qat: added cipher-crc cap check

 drivers/common/qat/qat_adf/icp_qat_fw.h      |   1 -
 drivers/common/qat/qat_adf/icp_qat_fw_la.h   |   3 +-
 drivers/common/qat/qat_adf/icp_qat_hw.h      | 133 +++++++++++++
 drivers/common/qat/qat_device.c              |  12 +-
 drivers/common/qat/qat_device.h              |   3 +-
 drivers/common/qat/qat_qp.c                  | 157 +++++++++++++++
 drivers/common/qat/qat_qp.h                  |   5 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c |   2 +-
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  24 ++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    |   4 +
 drivers/crypto/qat/qat_crypto.c              |  22 ++-
 drivers/crypto/qat/qat_crypto.h              |   1 +
 drivers/crypto/qat/qat_sym.c                 |   4 +
 drivers/crypto/qat/qat_sym.h                 |   7 +-
 drivers/crypto/qat/qat_sym_session.c         | 194 +++++++++++++++++++
 drivers/crypto/qat/qat_sym_session.h         |  21 +-
 16 files changed, 576 insertions(+), 17 deletions(-)

-- 
2.34.1

--------------------------------------------------------------
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for
the sole use of the intended recipient(s). Any review or distribution
by others is strictly prohibited. If you are not the intended
recipient, please contact the sender and delete all copies.
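The cover letter notes that the existing code path calculates the CRC in software before handing the cipher to QAT. For reference, a minimal bitwise sketch of that computation: the Ethernet FCS is CRC-32 with polynomial 0x04C11DB7, init 0xFFFFFFFF, xor-out 0xFFFFFFFF, reflected input/output — the same parameters the series later encodes in the ETH_CRC32_* defines. This is an illustration only, not the PMD's actual software CRC routine (DPDK provides optimized implementations in rte_net_crc).

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise reflected CRC-32 (IEEE 802.3 FCS).
 * 0xEDB88320 is the bit-reversed form of polynomial 0x04C11DB7;
 * init and xor-out are both 0xFFFFFFFF. */
static uint32_t
eth_crc32(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;
	size_t i;
	int bit;

	for (i = 0; i < len; i++) {
		crc ^= buf[i];
		for (bit = 0; bit < 8; bit++)
			/* Shift one bit; XOR the polynomial iff the
			 * bit shifted out was set. */
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
	}
	return crc ^ 0xFFFFFFFFu;
}
```

The standard check value for these parameters is eth_crc32("123456789") == 0xCBF43926, which can be used to sanity-check any replacement implementation.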
* [PATCH 1/2] crypto/qat: added cipher-crc offload support
  2023-03-08 12:12 [PATCH 0/2] crypto/qat: added cipher-crc offload feature Kevin O'Sullivan
@ 2023-03-08 12:12 ` Kevin O'Sullivan
  2023-03-08 12:12 ` [PATCH 2/2] crypto/qat: added cipher-crc cap check Kevin O'Sullivan
  2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
  2 siblings, 0 replies; 17+ messages in thread
From: Kevin O'Sullivan @ 2023-03-08 12:12 UTC (permalink / raw)
  To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle

Functionality has been added to the QAT PMD to use the combined
cipher-crc offload feature on the gen1/gen2/gen3 QAT devices by
setting the CRC content descriptor accordingly.

Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
---
 drivers/common/qat/qat_adf/icp_qat_fw.h    |   1 -
 drivers/common/qat/qat_adf/icp_qat_fw_la.h |   3 +-
 drivers/common/qat/qat_adf/icp_qat_hw.h    | 133 +++++++++++++++++++++
 3 files changed, 135 insertions(+), 2 deletions(-)

diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h
index be10fc9bde..3aa17ae041 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw.h
@@ -4,7 +4,6 @@
 #ifndef _ICP_QAT_FW_H_
 #define _ICP_QAT_FW_H_
 #include <sys/types.h>
-#include "icp_qat_hw.h"

 #define QAT_FIELD_SET(flags, val, bitpos, mask) \
 { (flags) = (((flags) & (~((mask) << (bitpos)))) | \
diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
index c4901eb869..227a6cebc8 100644
--- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h
+++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h
@@ -18,7 +18,8 @@ enum icp_qat_fw_la_cmd_id {
 	ICP_QAT_FW_LA_CMD_MGF1 = 9,
 	ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10,
 	ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11,
-	ICP_QAT_FW_LA_CMD_DELIMITER = 12
+	ICP_QAT_FW_LA_CMD_CIPHER_CRC = 17,
+	ICP_QAT_FW_LA_CMD_DELIMITER = 18
 };

 #define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK
diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h
index 866147cd77..8b864e1630 100644
--- a/drivers/common/qat/qat_adf/icp_qat_hw.h
+++ b/drivers/common/qat/qat_adf/icp_qat_hw.h
@@ -4,6 +4,8 @@
 #ifndef _ICP_QAT_HW_H_
 #define _ICP_QAT_HW_H_

+#include "icp_qat_fw.h"
+
 #define ADF_C4XXXIOV_VFLEGFUSES_OFFSET 0x4C
 #define ADF1_C4XXXIOV_VFLEGFUSES_LEN 4

@@ -260,14 +262,19 @@ enum icp_qat_hw_cipher_convert {
 };

 #define QAT_CIPHER_MODE_BITPOS 4
+#define QAT_CIPHER_MODE_LE_BITPOS 28
 #define QAT_CIPHER_MODE_MASK 0xF
 #define QAT_CIPHER_ALGO_BITPOS 0
+#define QAT_CIPHER_ALGO_LE_BITPOS 24
 #define QAT_CIPHER_ALGO_MASK 0xF
 #define QAT_CIPHER_CONVERT_BITPOS 9
+#define QAT_CIPHER_CONVERT_LE_BITPOS 17
 #define QAT_CIPHER_CONVERT_MASK 0x1
 #define QAT_CIPHER_DIR_BITPOS 8
+#define QAT_CIPHER_DIR_LE_BITPOS 16
 #define QAT_CIPHER_DIR_MASK 0x1
 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10
+#define QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS 18
 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F
 #define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2
 #define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2
@@ -281,7 +288,9 @@ enum icp_qat_hw_cipher_convert {
 #define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8
 #define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF
 #define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F
+#define QAT_CIPHER_AEAD_AAD_SIZE_MASK 0x3FFF
 #define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16
+#define QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS 0
 #define ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(aad_size) \
 	({ \
 	typeof(aad_size) aad_size1 = aad_size; \
@@ -362,6 +371,28 @@ struct icp_qat_hw_cipher_algo_blk {
 	uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ];
 } __rte_cache_aligned;

+struct icp_qat_hw_gen2_crc_cd {
+	uint32_t flags;
+	uint32_t reserved1[5];
+	uint32_t initial_crc;
+	uint32_t reserved2[3];
+};
+
+#define QAT_GEN3_COMP_REFLECT_IN_BITPOS 17
+#define QAT_GEN3_COMP_REFLECT_IN_MASK 0x1
+#define QAT_GEN3_COMP_REFLECT_OUT_BITPOS 18
+#define QAT_GEN3_COMP_REFLECT_OUT_MASK 0x1
+
+struct icp_qat_hw_gen3_crc_cd {
+	uint32_t flags;
+	uint32_t reserved1[3];
+	uint32_t polynomial;
+	uint32_t xor_val;
+	uint32_t reserved2[2];
+	uint32_t initial_crc;
+	uint32_t reserved3;
+};
+
 struct icp_qat_hw_ucs_cipher_config {
 	uint32_t val;
 	uint32_t reserved[3];
@@ -372,6 +403,108 @@ struct icp_qat_hw_cipher_algo_blk20 {
 	uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ];
 } __rte_cache_aligned;

+enum icp_qat_hw_ucs_cipher_reflect_out {
+	ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_DISABLED = 0,
+	ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED = 1,
+};
+
+enum icp_qat_hw_ucs_cipher_reflect_in {
+	ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_DISABLED = 0,
+	ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED = 1,
+};
+
+enum icp_qat_hw_ucs_cipher_crc_encoding {
+	ICP_QAT_HW_CIPHER_UCS_CRC_NOT_REQUIRED = 0,
+	ICP_QAT_HW_CIPHER_UCS_CRC32 = 1,
+	ICP_QAT_HW_CIPHER_UCS_CRC64 = 2,
+};
+
+#define QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS 17
+#define QAT_CIPHER_UCS_REFLECT_OUT_MASK 0x1
+#define QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS 16
+#define QAT_CIPHER_UCS_REFLECT_IN_MASK 0x1
+#define QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS 14
+#define QAT_CIPHER_UCS_CRC_ENCODING_MASK 0x3
+
+struct icp_qat_fw_ucs_slice_cipher_config {
+	enum icp_qat_hw_cipher_mode mode;
+	enum icp_qat_hw_cipher_algo algo;
+	uint16_t hash_cmp_val;
+	enum icp_qat_hw_cipher_dir dir;
+	uint16_t associated_data_len_in_bytes;
+	enum icp_qat_hw_ucs_cipher_reflect_out crc_reflect_out;
+	enum icp_qat_hw_ucs_cipher_reflect_in crc_reflect_in;
+	enum icp_qat_hw_ucs_cipher_crc_encoding crc_encoding;
+};
+
+struct icp_qat_hw_gen4_crc_cd {
+	uint32_t ucs_config[4];
+	uint32_t polynomial;
+	uint32_t reserved1;
+	uint32_t xor_val;
+	uint32_t reserved2;
+	uint32_t initial_crc;
+	uint32_t reserved3;
+};
+
+static inline uint32_t
+ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(
+		struct icp_qat_fw_ucs_slice_cipher_config csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32,
+			csr.mode,
+			QAT_CIPHER_MODE_LE_BITPOS,
+			QAT_CIPHER_MODE_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.algo,
+			QAT_CIPHER_ALGO_LE_BITPOS,
+			QAT_CIPHER_ALGO_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.hash_cmp_val,
+			QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS,
+			QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.dir,
+			QAT_CIPHER_DIR_LE_BITPOS,
+			QAT_CIPHER_DIR_MASK);
+
+	return rte_bswap32(val32);
+}
+
+static inline uint32_t
+ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(
+		struct icp_qat_fw_ucs_slice_cipher_config csr)
+{
+	uint32_t val32 = 0;
+
+	QAT_FIELD_SET(val32,
+			csr.associated_data_len_in_bytes,
+			QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS,
+			QAT_CIPHER_AEAD_AAD_SIZE_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.crc_reflect_out,
+			QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS,
+			QAT_CIPHER_UCS_REFLECT_OUT_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.crc_reflect_in,
+			QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS,
+			QAT_CIPHER_UCS_REFLECT_IN_MASK);
+
+	QAT_FIELD_SET(val32,
+			csr.crc_encoding,
+			QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS,
+			QAT_CIPHER_UCS_CRC_ENCODING_MASK);
+
+	return rte_bswap32(val32);
+}
+
 /* ========================================================================= */
 /*                            COMPRESSION SLICE                              */
 /* ========================================================================= */
-- 
2.34.1
* [PATCH 2/2] crypto/qat: added cipher-crc cap check
  2023-03-08 12:12 [PATCH 0/2] crypto/qat: added cipher-crc offload feature Kevin O'Sullivan
  2023-03-08 12:12 ` [PATCH 1/2] crypto/qat: added cipher-crc offload support Kevin O'Sullivan
@ 2023-03-08 12:12 ` Kevin O'Sullivan
  2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
  2 siblings, 0 replies; 17+ messages in thread
From: Kevin O'Sullivan @ 2023-03-08 12:12 UTC (permalink / raw)
  To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle

A configuration item called qat_sym_cipher_crc_enable has been added.
When set, an LA bulk request with combined cipher-crc is sent to the
QAT device on startup. The response is checked to see whether the
returned data matches the expected ciphertext; if it matches, the
cipher-crc capability bit is set to indicate support.

If cipher-crc offload is supported, the LA bulk request is formatted
accordingly before being enqueued to the device.

Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com>
Signed-off-by: David Coyle <david.coyle@intel.com>
---
 drivers/common/qat/qat_device.c              |  12 +-
 drivers/common/qat/qat_device.h              |   3 +-
 drivers/common/qat/qat_qp.c                  | 157 +++++++++++++++
 drivers/common/qat/qat_qp.h                  |   5 +
 drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c |   2 +-
 drivers/crypto/qat/dev/qat_crypto_pmd_gens.h |  24 ++-
 drivers/crypto/qat/dev/qat_sym_pmd_gen1.c    |   4 +
 drivers/crypto/qat/qat_crypto.c              |  22 ++-
 drivers/crypto/qat/qat_crypto.h              |   1 +
 drivers/crypto/qat/qat_sym.c                 |   4 +
 drivers/crypto/qat/qat_sym.h                 |   7 +-
 drivers/crypto/qat/qat_sym_session.c         | 194 +++++++++++++++++++
 drivers/crypto/qat/qat_sym_session.h         |  21 +-
 13 files changed, 441 insertions(+), 15 deletions(-)

diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index 8bce2ac073..308c59c39f 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param
 		} else {
 			memcpy(value_str, arg2, iter);
 			value = strtol(value_str, NULL, 10);
-			if (value > MAX_QP_THRESHOLD_SIZE) {
+			if (strcmp(param,
+					SYM_CIPHER_CRC_ENABLE_NAME) == 0) {
+				if (value < 0 || value > 1) {
+					QAT_LOG(DEBUG, "The value for"
+					" qat_sym_cipher_crc_enable"
+					" should be set to 0 or 1,"
+					" setting to 0");
+					value = 0;
+				}
+			} else if (value > MAX_QP_THRESHOLD_SIZE) {
 				QAT_LOG(DEBUG, "Exceeded max size of"
 					" threshold, setting to %d",
 					MAX_QP_THRESHOLD_SIZE);
@@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
 		{ SYM_ENQ_THRESHOLD_NAME, 0 },
 		{ ASYM_ENQ_THRESHOLD_NAME, 0 },
 		{ COMP_ENQ_THRESHOLD_NAME, 0 },
+		{ SYM_CIPHER_CRC_ENABLE_NAME, 0 },
 		[QAT_CMD_SLICE_MAP_POS] = { QAT_CMD_SLICE_MAP, 0},
 		{ NULL, 0 },
 	};
diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h
index bc3da04238..4188474dde 100644
--- a/drivers/common/qat/qat_device.h
+++ b/drivers/common/qat/qat_device.h
@@ -21,8 +21,9 @@
 #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold"
 #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold"
 #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold"
+#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable"
 #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable"
-#define QAT_CMD_SLICE_MAP_POS 4
+#define QAT_CMD_SLICE_MAP_POS 5
 #define MAX_QP_THRESHOLD_SIZE 32

 /**
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index 9cbd19a481..441dbe9846 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -11,6 +11,9 @@
 #include <bus_pci_driver.h>
 #include <rte_atomic.h>
 #include <rte_prefetch.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_ether.h>
+#endif

 #include "qat_logs.h"
 #include "qat_device.h"
@@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp)
 	return -EINVAL;
 }

+#ifdef BUILD_QAT_SYM
+/* Sends an LA bulk req message to determine if a QAT device supports
+ * Cipher-CRC offload. This assumes that there are no inflight messages,
+ * i.e. assumes there's space on the qp, one message is sent and only one
+ * response collected. The status bit of the response and returned data
+ * are checked.
+ * Returns:
+ *   1 if status bit indicates success and returned data matches expected
+ *     data (i.e. Cipher-CRC supported)
+ *   0 if status bit indicates error or returned data does not match
+ *     expected data (i.e. Cipher-CRC not supported)
+ *   Negative error code in case of error
+ */
+int
+qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp)
+{
+	struct qat_queue *queue = &(qp->tx_q);
+	uint8_t *base_addr = (uint8_t *)queue->base_addr;
+	struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {0};
+	struct icp_qat_fw_comn_resp response = {0};
+	struct icp_qat_fw_la_cipher_req_params *cipher_param;
+	struct icp_qat_fw_la_auth_req_params *auth_param;
+	struct qat_sym_session *session;
+	phys_addr_t phy_src_addr;
+	uint64_t *src_data_addr;
+	int ret;
+	uint8_t cipher_offset = 18;
+	uint8_t crc_offset = 6;
+	uint8_t ciphertext[34] = {
+		/* Outer protocol header */
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		/* Ethernet frame */
+		0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05,
+		0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C,
+		0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6,
+		/* CRC */
+		0x54, 0x85, 0xF8, 0x32
+	};
+	uint8_t plaintext[34] = {
+		/* Outer protocol header */
+		0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+		/* Ethernet frame */
+		0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05,
+		0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA,
+		0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA,
+		/* CRC */
+		0xFF, 0xFF, 0xFF, 0xFF
+	};
+	uint8_t key[16] = {
+		0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD,
+		0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55
+	};
+	uint8_t iv[16] = {
+		0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11,
+		0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11
+	};
+
+	session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0);
+	if (session == NULL)
+		return -EINVAL;
+
+	/* Verify the session physical address is known */
+	rte_iova_t session_paddr = rte_mem_virt2iova(session);
+	if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) {
+		QAT_LOG(ERR, "Session physical address unknown.");
+		return -EINVAL;
+	}
+
+	/* Prepare the LA bulk request */
+	ret = qat_cipher_crc_cap_msg_sess_prepare(session,
+			session_paddr,
+			key,
+			sizeof(key),
+			qp->qat_dev_gen);
+	if (ret < 0) {
+		rte_free(session);
+		/* Returning 0 here to allow qp setup to continue, but
+		 * indicate that Cipher-CRC offload is not supported on the
+		 * device
+		 */
+		return 0;
+	}
+
+	cipher_crc_cap_msg = session->fw_req;
+
+	src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0);
+	if (src_data_addr == NULL) {
+		rte_free(session);
+		return -EINVAL;
+	}
+
+	rte_memcpy(src_data_addr, plaintext, sizeof(plaintext));
+
+	phy_src_addr = rte_mem_virt2iova(src_data_addr);
+	if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) {
+		QAT_LOG(ERR, "Source physical address unknown.");
+		return -EINVAL;
+	}
+
+	cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr;
+	cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext);
+	cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr;
+	cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext);
+
+	cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars;
+	auth_param = (void *)((uint8_t *)cipher_param +
+			ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+
+	rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv));
+
+	cipher_param->cipher_offset = cipher_offset;
+	cipher_param->cipher_length = sizeof(plaintext) - cipher_offset;
+	auth_param->auth_off = crc_offset;
+	auth_param->auth_len = sizeof(plaintext) -
+				crc_offset -
+				RTE_ETHER_CRC_LEN;
+
+	ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET(
+			cipher_crc_cap_msg.comn_hdr.serv_specif_flags,
+			ICP_QAT_FW_LA_DIGEST_IN_BUFFER);
+
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+	QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", &cipher_crc_cap_msg,
+			sizeof(cipher_crc_cap_msg));
+#endif
+
+	/* Send the cipher_crc_cap_msg request */
+	memcpy(base_addr + queue->tail,
+		&cipher_crc_cap_msg,
+		sizeof(cipher_crc_cap_msg));
+	queue->tail = adf_modulo(queue->tail + queue->msg_size,
+			queue->modulo_mask);
+	txq_write_tail(qp->qat_dev_gen, qp, queue);
+
+	/* Check for response and verify data is same as ciphertext */
+	if (qat_cq_dequeue_response(qp, &response)) {
+#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG
+		QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response,
+				sizeof(response));
+#endif
+
+		if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != 0)
+			ret = 0; /* Cipher-CRC offload not supported */
+		else
+			ret = 1;
+	} else {
+		ret = -EINVAL;
+	}
+
+	rte_free(src_data_addr);
+	rte_free(session);
+	return ret;
+}
+#endif
+
 __rte_weak int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
 	void *op_cookie __rte_unused,
diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h
index 66f00943a5..d19fc387e4 100644
--- a/drivers/common/qat/qat_qp.h
+++ b/drivers/common/qat/qat_qp.h
@@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev,
 int
 qat_cq_get_fw_version(struct qat_qp *qp);

+#ifdef BUILD_QAT_SYM
+int
+qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp);
+#endif
+
 /* Needed for weak function*/
 int
 qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused,
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
index 60ca0fc0d2..1f3e2b1d99 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c
@@ -163,7 +163,7 @@ qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id,
 		QAT_LOG(DEBUG, "unknown QAT firmware version");

 	/* set capabilities based on the fw version */
-	qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID |
+	qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID |
 			((ret >= MIXED_CRYPTO_MIN_FW_VER) ?
 					QAT_SYM_CAP_MIXED_CRYPTO : 0);

 	return 0;
diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
index 524c291340..70942906ea 100644
--- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
+++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h
@@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
 		cipher_ofs = op->sym->cipher.data.offset >> 3;
 		break;
 	case 0:
-		cipher_len = op->sym->cipher.data.length;
-		cipher_ofs = op->sym->cipher.data.offset;
+		if (ctx->bpi_ctx) {
+			cipher_len = qat_bpicipher_preprocess(ctx, op);
+			cipher_ofs = op->sym->cipher.data.offset;
+		} else {
+			cipher_len = op->sym->cipher.data.length;
+			cipher_ofs = op->sym->cipher.data.offset;
+		}
 		break;
 	default:
 		QAT_DP_LOG(ERR,
@@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
 	max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len);

-	/* digest in buffer check. Needed only for wireless algos */
-	if (ret == 1) {
+	/* digest in buffer check. Needed only for wireless algos
+	 * or combined cipher-crc operations
+	 */
+	if (ret == 1 || ctx->bpi_ctx) {
 		/* Handle digest-encrypted cases, i.e.
 		 * auth-gen-then-cipher-encrypt and
 		 * cipher-decrypt-then-auth-verify
@@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op,
 				auth_len;

 		/* Then check if digest-encrypted conditions are met */
-		if ((auth_ofs + auth_len < cipher_ofs + cipher_len) &&
-				(digest->iova == auth_end_iova))
+		if (((auth_ofs + auth_len < cipher_ofs + cipher_len) &&
+				(digest->iova == auth_end_iova)) ||
+				ctx->bpi_ctx)
 			max_len = RTE_MAX(max_len, auth_ofs + auth_len +
 					ctx->digest_length);
 	}
@@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx,
 			auth_param->auth_len;

 		/* Then check if digest-encrypted conditions are met */
-		if ((auth_param->auth_off + auth_param->auth_len <
+		if (((auth_param->auth_off + auth_param->auth_len <
 			cipher_param->cipher_offset + cipher_param->cipher_length) &&
-			(digest->iova == auth_iova_end)) {
+			(digest->iova == auth_iova_end)) || ctx->bpi_ctx) {
 			/* Handle partial digest encryption */
 			if (cipher_param->cipher_offset + cipher_param->cipher_length <
 				auth_param->auth_off + auth_param->auth_len +
diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
index 91d5cfa71d..590eaa0057 100644
--- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
+++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c
@@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session)
 	} else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) {
 		/* do_auth = 0; do_cipher = 1; */
 		build_request = qat_sym_build_op_cipher_gen1;
+	} else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
+		/* do_auth = 1; do_cipher = 1; */
+		build_request = qat_sym_build_op_chain_gen1;
+		handle_mixed = 1;
 	}

 	if (build_request)
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 84c26a8062..861679373b 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 			qat_asym_init_op_cookie(qp->op_cookies[i]);
 	}

-	return ret;
+	if (qat_private->cipher_crc_offload_enable) {
+		ret = qat_cq_get_fw_cipher_crc_cap(qp);
+		if (ret < 0) {
+			qat_cryptodev_qp_release(dev, qp_id);
+			return ret;
+		}
+
+		if (ret != 0)
+			QAT_LOG(DEBUG, "Cipher CRC supported on QAT device");
+		else
+			QAT_LOG(DEBUG, "Cipher CRC not supported on QAT device");
+
+		/* Only send the cipher crc offload capability message once */
+		qat_private->cipher_crc_offload_enable = 0;
+		/* Set cipher crc offload indicator */
+		if (ret)
+			qat_private->internal_capabilities |=
+					QAT_SYM_CAP_CIPHER_CRC;
+	}
+
+	return 0;
 }
diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h
index 6fe1326c51..e20f16236e 100644
--- a/drivers/crypto/qat/qat_crypto.h
+++ b/drivers/crypto/qat/qat_crypto.h
@@ -36,6 +36,7 @@ struct qat_cryptodev_private {
 	/* Shared memzone for storing capabilities */
 	uint16_t min_enq_burst_threshold;
 	uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */
+	bool cipher_crc_offload_enable;
 	enum qat_service_type service_type;
 };
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 08e92191a3..345c845325 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev,
 		if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME))
 			internals->min_enq_burst_threshold =
 					qat_dev_cmd_param[i].val;
+		if (!strcmp(qat_dev_cmd_param[i].name,
+				SYM_CIPHER_CRC_ENABLE_NAME))
+			internals->cipher_crc_offload_enable =
+					qat_dev_cmd_param[i].val;
 		if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB))
 			qat_ipsec_mb_lib = qat_dev_cmd_param[i].val;
 		if (!strcmp(qat_dev_cmd_param[i].name, QAT_CMD_SLICE_MAP))
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 9a4251e08b..3d841d0eba 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -32,6 +32,7 @@

 /* Internal capabilities */
 #define QAT_SYM_CAP_MIXED_CRYPTO	(1 << 0)
+#define QAT_SYM_CAP_CIPHER_CRC	(1 << 1)
 #define QAT_SYM_CAP_VALID		(1 << 31)

 /**
@@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops)
 			if (ctx == NULL || ctx->bpi_ctx == NULL)
 				continue;

-			qat_crc_generate(ctx, op);
+			if (ctx->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_CRC)
+				qat_crc_generate(ctx, op);
 		}
 	}
 }
@@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie,
 		if (sess->bpi_ctx) {
 			qat_bpicipher_postprocess(sess, rx_op);
 #ifdef RTE_LIB_SECURITY
-			if (is_docsis_sec)
+			if (is_docsis_sec && sess->qat_cmd !=
+					ICP_QAT_FW_LA_CMD_CIPHER_CRC)
 				qat_crc_verify(sess, rx_op);
 #endif
 		}
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 466482d225..c0217654c1 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -27,6 +27,7 @@
 #include <rte_crypto_sym.h>
 #ifdef RTE_LIB_SECURITY
 #include <rte_security_driver.h>
+#include <rte_ether.h>
 #endif

 #include "qat_logs.h"
@@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void)

 extern int qat_ipsec_mb_lib;

+#define ETH_CRC32_POLYNOMIAL	0x04c11db7
+#define ETH_CRC32_INIT_VAL	0xffffffff
+#define ETH_CRC32_XOR_OUT	0xffffffff
+#define ETH_CRC32_POLYNOMIAL_BE	RTE_BE32(ETH_CRC32_POLYNOMIAL)
+#define ETH_CRC32_INIT_VAL_BE	RTE_BE32(ETH_CRC32_INIT_VAL)
+#define ETH_CRC32_XOR_OUT_BE	RTE_BE32(ETH_CRC32_XOR_OUT)
+
 /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */
 static const uint8_t sha1InitialState[] = {
 	0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba,
@@ -115,6 +123,10 @@ qat_sym_cd_cipher_set(struct qat_sym_session *cd,
 						const uint8_t *enckey,
 						uint32_t enckeylen);

+static int
+qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
+		enum qat_device_gen qat_dev_gen);
+
 static int
 qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 						const uint8_t *authkey,
@@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
 						uint32_t aad_length,
 						uint32_t digestsize,
 						unsigned int operation);
+
 static void
 qat_sym_session_init_common_hdr(struct qat_sym_session *session);

@@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 	case ICP_QAT_FW_LA_CMD_MGF1:
 	case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP:
 	case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP:
+	case ICP_QAT_FW_LA_CMD_CIPHER_CRC:
 	case ICP_QAT_FW_LA_CMD_DELIMITER:
 		QAT_LOG(ERR, "Unsupported Service %u",
 				session->qat_cmd);
@@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev,
 			(void *)session);
 }

+int
+qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session,
+					rte_iova_t session_paddr,
+					const uint8_t *cipherkey,
+					uint32_t cipherkeylen,
+					enum qat_device_gen qat_dev_gen)
+{
+	int ret;
+
+	/* Set content descriptor physical address */
+	session->cd_paddr = session_paddr +
+			offsetof(struct qat_sym_session, cd);
+
+	/* Set up some pre-requisite variables */
+	session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE;
+	session->is_ucs = 0;
+	session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC;
+	session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE;
+	session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128;
+	session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+	session->is_auth = 1;
+	session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
+	session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+	session->auth_op = ICP_QAT_HW_AUTH_GENERATE;
+	session->digest_length = RTE_ETHER_CRC_LEN;
+
+	ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen);
+	if (ret < 0)
+		return -EINVAL;
+
+	ret = qat_sym_cd_crc_set(session, qat_dev_gen);
+	if (ret < 0)
+		return -EINVAL;
+
+	qat_sym_session_finalize(session);
+
+	return 0;
+}
+
 static int
 qat_sym_session_handle_single_pass(struct qat_sym_session *session,
 		const struct rte_crypto_aead_xform *aead_xform)
@@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
 				ICP_QAT_FW_SLICE_DRAM_WR);
 		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
+		cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
 	} else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
 		QAT_LOG(ERR, "Invalid param, must be a cipher command.");
 		return -EFAULT;
@@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct rte_security_session_conf *conf)
 	return -EINVAL;
 }

+static int
+qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
+		enum qat_device_gen qat_dev_gen)
+{
+	struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2;
+	struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3;
+	struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *crc_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+	struct icp_qat_fw_ucs_slice_cipher_config crc_cfg;
+	uint16_t crc_cfg_offset, cd_size;
+
+	crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd);
+
+	switch (qat_dev_gen) {
+	case QAT_GEN2:
+		crc_cd_gen2 =
+			(struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen2->flags = 0;
+		crc_cd_gen2->initial_crc = 0;
+		memset(&crc_cd_gen2->reserved1,
+			0,
+			sizeof(crc_cd_gen2->reserved1));
+		memset(&crc_cd_gen2->reserved2,
+			0,
+			sizeof(crc_cd_gen2->reserved2));
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd);
+		break;
+	case QAT_GEN3:
+		crc_cd_gen3 =
+			(struct icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen3->flags = ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1);
+		crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL;
+		crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL;
+		crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT;
+		memset(&crc_cd_gen3->reserved1,
+			0,
+			sizeof(crc_cd_gen3->reserved1));
+		memset(&crc_cd_gen3->reserved2,
+			0,
+			sizeof(crc_cd_gen3->reserved2));
+		crc_cd_gen3->reserved3 = 0;
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd);
+		break;
+	case QAT_GEN4:
+		crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+		crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL;
+		crc_cfg.hash_cmp_val = 0;
+		crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		crc_cfg.associated_data_len_in_bytes = 0;
+		crc_cfg.crc_reflect_out =
+			ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED;
+		crc_cfg.crc_reflect_in =
+			ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED;
+		crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32;
+
+		crc_cd_gen4 =
+			(struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen4->ucs_config[0] =
+			ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg);
+		crc_cd_gen4->ucs_config[1] =
+			ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg);
+		crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE;
+		crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE;
+		crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE;
+		crc_cd_gen4->reserved1 = 0;
+		crc_cd_gen4->reserved2 = 0;
+		crc_cd_gen4->reserved3 = 0;
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3;
+	crc_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	crc_cd_ctrl->inner_res_sz = cdesc->digest_length;
+	crc_cd_ctrl->final_sz = cdesc->digest_length;
+	crc_cd_ctrl->inner_state1_sz = 0;
+	crc_cd_ctrl->inner_state2_sz = 0;
+	crc_cd_ctrl->inner_state2_offset = 0;
+	crc_cd_ctrl->outer_prefix_sz = 0;
+	crc_cd_ctrl->outer_config_offset = 0;
+	crc_cd_ctrl->outer_state1_sz = 0;
+	crc_cd_ctrl->outer_res_sz = 0;
+	crc_cd_ctrl->outer_prefix_offset = 0;
+
+	crc_param->auth_res_sz = cdesc->digest_length;
+	crc_param->u2.aad_sz = 0;
+	crc_param->hash_state_sz = 0;
+
+	cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd;
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
+
+	return 0;
+}
+
+static int
+qat_sym_session_configure_crc(struct rte_cryptodev *dev,
+		const struct rte_crypto_sym_xform *cipher_xform,
+		struct qat_sym_session *session)
+{
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
+	int ret;
+
+	session->is_auth = 1;
+	session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
+	session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+	session->auth_op = cipher_xform->cipher.op ==
+				RTE_CRYPTO_CIPHER_OP_ENCRYPT ?
+				ICP_QAT_HW_AUTH_GENERATE :
+				ICP_QAT_HW_AUTH_VERIFY;
+	session->digest_length = RTE_ETHER_CRC_LEN;
+
+	ret = qat_sym_cd_crc_set(session, qat_dev_gen);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 static int
 qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 		struct rte_security_session_conf *conf, void *session_private,
@@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) {
 		QAT_LOG(ERR, "Unsupported xform chain requested");
 		return -ENOTSUP;
+	} else if (internals->internal_capabilities
+			& QAT_SYM_CAP_CIPHER_CRC) {
+		qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC;
 	}
 	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;

 	ret = qat_sym_session_configure_cipher(dev, xform, session);
 	if (ret < 0)
 		return ret;
+
+	if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
+		ret = qat_sym_session_configure_crc(dev, xform, session);
+		if (ret < 0)
+			return ret;
+	}

 	qat_sym_session_finalize(session);

 	return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev,
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 6322d7e3bc..9b5d11ac88 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -46,6 +46,12 @@
 					ICP_QAT_HW_CIPHER_KEY_CONVERT, \
 					ICP_QAT_HW_CIPHER_DECRYPT)

+#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \
+	(((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \
+		QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \
((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \ + QAT_GEN3_COMP_REFLECT_OUT_BITPOS)) + #define QAT_AES_CMAC_CONST_RB 0x87 #define QAT_CRYPTO_SLICE_SPC 1 @@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, /* Common content descriptor */ struct qat_sym_cd { struct icp_qat_hw_cipher_algo_blk cipher; - struct icp_qat_hw_auth_algo_blk hash; + union { + struct icp_qat_hw_auth_algo_blk hash; + struct icp_qat_hw_gen2_crc_cd crc_gen2; + struct icp_qat_hw_gen3_crc_cd crc_gen3; + struct icp_qat_hw_gen4_crc_cd crc_gen4; + }; } __rte_packed __rte_cache_aligned; struct qat_sym_session { @@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev, unsigned int qat_sym_session_get_private_size(struct rte_cryptodev *dev); +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen); + void qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag proto_flags); + int qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg); int -- 2.34.1 -------------------------------------------------------------- Intel Research and Development Ireland Limited Registered in Ireland Registered Office: Collinstown Industrial Park, Leixlip, County Kildare Registered Number: 308263 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature 2023-03-08 12:12 [PATCH 0/2] crypto/qat: added cipher-crc offload feature Kevin O'Sullivan 2023-03-08 12:12 ` [PATCH 1/2] crypto/qat: added cipher-crc offload support Kevin O'Sullivan 2023-03-08 12:12 ` [PATCH 2/2] crypto/qat: added cipher-crc cap check Kevin O'Sullivan @ 2023-03-09 14:33 ` Kevin O'Sullivan 2023-03-09 14:33 ` [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan ` (2 more replies) 2 siblings, 3 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-09 14:33 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan This patchset adds support to the QAT PMD for combined cipher-crc processing for DOCSIS on the QAT device. The current QAT PMD implementation of cipher-crc calculates CRC in software and uses QAT for encryption/decryption offload. Note: The previous code-path is still retained for QAT firmware versions without support for combined cipher-crc offload. - Support has been added to DPDK QAT PMD to enable the use of the cipher-crc offload feature on gen2/gen3/gen4 QAT devices. - A cipher-crc offload capability check has been added to the queue pair setup function to determine if the feature is supported on the QAT device. 
v2: fixed centos compilation error for missing braces around initializer Kevin O'Sullivan (2): crypto/qat: added cipher-crc offload support crypto/qat: added cipher-crc cap check drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++ drivers/common/qat/qat_device.c | 12 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 157 +++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 194 +++++++++++++++++++ drivers/crypto/qat/qat_sym_session.h | 21 +- 16 files changed, 576 insertions(+), 17 deletions(-) -- 2.34.1 -------------------------------------------------------------- Intel Research and Development Ireland Limited Registered in Ireland Registered Office: Collinstown Industrial Park, Leixlip, County Kildare Registered Number: 308263 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface 2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan @ 2023-03-09 14:33 ` Kevin O'Sullivan 2023-03-09 14:33 ` [PATCH v2 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2 siblings, 0 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-09 14:33 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT firmware interface header files for the combined cipher-crc offload feature for DOCSIS on gen2/gen3/ gen4 QAT devices. The main change is that new structures have been added for the crc content descriptor for the various generations. Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> --- drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++++++++++ 3 files changed, 135 insertions(+), 2 deletions(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h index be10fc9bde..3aa17ae041 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw.h @@ -4,7 +4,6 @@ #ifndef _ICP_QAT_FW_H_ #define _ICP_QAT_FW_H_ #include <sys/types.h> -#include "icp_qat_hw.h" #define QAT_FIELD_SET(flags, val, bitpos, mask) \ { (flags) = (((flags) & (~((mask) << (bitpos)))) | \ diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h index c4901eb869..227a6cebc8 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h @@ -18,7 +18,8 @@ enum icp_qat_fw_la_cmd_id { ICP_QAT_FW_LA_CMD_MGF1 = 9, ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10, ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11, - 
ICP_QAT_FW_LA_CMD_DELIMITER = 12 + ICP_QAT_FW_LA_CMD_CIPHER_CRC = 17, + ICP_QAT_FW_LA_CMD_DELIMITER = 18 }; #define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index 866147cd77..8b864e1630 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -4,6 +4,8 @@ #ifndef _ICP_QAT_HW_H_ #define _ICP_QAT_HW_H_ +#include "icp_qat_fw.h" + #define ADF_C4XXXIOV_VFLEGFUSES_OFFSET 0x4C #define ADF1_C4XXXIOV_VFLEGFUSES_LEN 4 @@ -260,14 +262,19 @@ enum icp_qat_hw_cipher_convert { }; #define QAT_CIPHER_MODE_BITPOS 4 +#define QAT_CIPHER_MODE_LE_BITPOS 28 #define QAT_CIPHER_MODE_MASK 0xF #define QAT_CIPHER_ALGO_BITPOS 0 +#define QAT_CIPHER_ALGO_LE_BITPOS 24 #define QAT_CIPHER_ALGO_MASK 0xF #define QAT_CIPHER_CONVERT_BITPOS 9 +#define QAT_CIPHER_CONVERT_LE_BITPOS 17 #define QAT_CIPHER_CONVERT_MASK 0x1 #define QAT_CIPHER_DIR_BITPOS 8 +#define QAT_CIPHER_DIR_LE_BITPOS 16 #define QAT_CIPHER_DIR_MASK 0x1 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS 18 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F #define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 #define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 @@ -281,7 +288,9 @@ enum icp_qat_hw_cipher_convert { #define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 #define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF #define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +#define QAT_CIPHER_AEAD_AAD_SIZE_MASK 0x3FFF #define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +#define QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS 0 #define ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(aad_size) \ ({ \ typeof(aad_size) aad_size1 = aad_size; \ @@ -362,6 +371,28 @@ struct icp_qat_hw_cipher_algo_blk { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +struct icp_qat_hw_gen2_crc_cd { + uint32_t flags; + uint32_t reserved1[5]; + uint32_t initial_crc; + uint32_t reserved2[3]; +}; + +#define 
QAT_GEN3_COMP_REFLECT_IN_BITPOS 17 +#define QAT_GEN3_COMP_REFLECT_IN_MASK 0x1 +#define QAT_GEN3_COMP_REFLECT_OUT_BITPOS 18 +#define QAT_GEN3_COMP_REFLECT_OUT_MASK 0x1 + +struct icp_qat_hw_gen3_crc_cd { + uint32_t flags; + uint32_t reserved1[3]; + uint32_t polynomial; + uint32_t xor_val; + uint32_t reserved2[2]; + uint32_t initial_crc; + uint32_t reserved3; +}; + struct icp_qat_hw_ucs_cipher_config { uint32_t val; uint32_t reserved[3]; @@ -372,6 +403,108 @@ struct icp_qat_hw_cipher_algo_blk20 { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +enum icp_qat_hw_ucs_cipher_reflect_out { + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_reflect_in { + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_crc_encoding { + ICP_QAT_HW_CIPHER_UCS_CRC_NOT_REQUIRED = 0, + ICP_QAT_HW_CIPHER_UCS_CRC32 = 1, + ICP_QAT_HW_CIPHER_UCS_CRC64 = 2, +}; + +#define QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS 17 +#define QAT_CIPHER_UCS_REFLECT_OUT_MASK 0x1 +#define QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS 16 +#define QAT_CIPHER_UCS_REFLECT_IN_MASK 0x1 +#define QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS 14 +#define QAT_CIPHER_UCS_CRC_ENCODING_MASK 0x3 + +struct icp_qat_fw_ucs_slice_cipher_config { + enum icp_qat_hw_cipher_mode mode; + enum icp_qat_hw_cipher_algo algo; + uint16_t hash_cmp_val; + enum icp_qat_hw_cipher_dir dir; + uint16_t associated_data_len_in_bytes; + enum icp_qat_hw_ucs_cipher_reflect_out crc_reflect_out; + enum icp_qat_hw_ucs_cipher_reflect_in crc_reflect_in; + enum icp_qat_hw_ucs_cipher_crc_encoding crc_encoding; +}; + +struct icp_qat_hw_gen4_crc_cd { + uint32_t ucs_config[4]; + uint32_t polynomial; + uint32_t reserved1; + uint32_t xor_val; + uint32_t reserved2; + uint32_t initial_crc; + uint32_t reserved3; +}; + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER( + struct 
icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.mode, + QAT_CIPHER_MODE_LE_BITPOS, + QAT_CIPHER_MODE_MASK); + + QAT_FIELD_SET(val32, + csr.algo, + QAT_CIPHER_ALGO_LE_BITPOS, + QAT_CIPHER_ALGO_MASK); + + QAT_FIELD_SET(val32, + csr.hash_cmp_val, + QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS, + QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); + + QAT_FIELD_SET(val32, + csr.dir, + QAT_CIPHER_DIR_LE_BITPOS, + QAT_CIPHER_DIR_MASK); + + return rte_bswap32(val32); +} + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER( + struct icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.associated_data_len_in_bytes, + QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS, + QAT_CIPHER_AEAD_AAD_SIZE_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_out, + QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_OUT_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_in, + QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_IN_MASK); + + QAT_FIELD_SET(val32, + csr.crc_encoding, + QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS, + QAT_CIPHER_UCS_CRC_ENCODING_MASK); + + return rte_bswap32(val32); +} + /* ========================================================================= */ /* COMPRESSION SLICE */ /* ========================================================================= */ -- 2.34.1 -------------------------------------------------------------- Intel Research and Development Ireland Limited Registered in Ireland Registered Office: Collinstown Industrial Park, Leixlip, County Kildare Registered Number: 308263 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v2 2/2] crypto/qat: add cipher-crc offload support 2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2023-03-09 14:33 ` [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan @ 2023-03-09 14:33 ` Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2 siblings, 0 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-09 14:33 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT symmetric crypto PMD for combined cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 QAT devices. A new parameter called qat_sym_cipher_crc_enable has been added to the PMD, which can be set on process start as follows: -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 When enabled, a capability check for the combined cipher-crc offload feature is triggered to the QAT firmware during queue pair initialization. If supported by the firmware, any subsequent runtime DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the QAT device by setting up the content descriptor and request accordingly. If the combined DOCSIS cipher-crc feature is not supported by the firmware, the CRC continues to be calculated within the PMD, with just the cipher portion of the request being offloaded to the QAT device. 
Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> --- v2: fixed centos compilation error for missing braces around initializer --- drivers/common/qat/qat_device.c | 12 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 157 +++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 194 +++++++++++++++++++ drivers/crypto/qat/qat_sym_session.h | 21 +- 13 files changed, 441 insertions(+), 15 deletions(-) diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 8bce2ac073..308c59c39f 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param } else { memcpy(value_str, arg2, iter); value = strtol(value_str, NULL, 10); - if (value > MAX_QP_THRESHOLD_SIZE) { + if (strcmp(param, + SYM_CIPHER_CRC_ENABLE_NAME) == 0) { + if (value < 0 || value > 1) { + QAT_LOG(DEBUG, "The value for" + " qat_sym_cipher_crc_enable" + " should be set to 0 or 1," + " setting to 0"); + value = 0; + } + } else if (value > MAX_QP_THRESHOLD_SIZE) { QAT_LOG(DEBUG, "Exceeded max size of" " threshold, setting to %d", MAX_QP_THRESHOLD_SIZE); @@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, { SYM_ENQ_THRESHOLD_NAME, 0 }, { ASYM_ENQ_THRESHOLD_NAME, 0 }, { COMP_ENQ_THRESHOLD_NAME, 0 }, + { SYM_CIPHER_CRC_ENABLE_NAME, 0 }, [QAT_CMD_SLICE_MAP_POS] = { QAT_CMD_SLICE_MAP, 0}, { NULL, 0 }, }; diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index bc3da04238..4188474dde 100644 --- 
a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -21,8 +21,9 @@ #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold" #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold" #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold" +#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable" #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable" -#define QAT_CMD_SLICE_MAP_POS 4 +#define QAT_CMD_SLICE_MAP_POS 5 #define MAX_QP_THRESHOLD_SIZE 32 /** diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 9cbd19a481..1ce89c265f 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -11,6 +11,9 @@ #include <bus_pci_driver.h> #include <rte_atomic.h> #include <rte_prefetch.h> +#ifdef RTE_LIB_SECURITY +#include <rte_ether.h> +#endif #include "qat_logs.h" #include "qat_device.h" @@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp) return -EINVAL; } +#ifdef BUILD_QAT_SYM +/* Sends an LA bulk req message to determine if a QAT device supports Cipher-CRC + * offload. This assumes that there are no inflight messages, i.e. assumes + * there's space on the qp, one message is sent and only one response + * collected. The status bit of the response and returned data are checked. + * Returns: + * 1 if status bit indicates success and returned data matches expected + * data (i.e. Cipher-CRC supported) + * 0 if status bit indicates error or returned data does not match expected + * data (i.e. 
Cipher-CRC not supported) + * Negative error code in case of error + */ +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp) +{ + struct qat_queue *queue = &(qp->tx_q); + uint8_t *base_addr = (uint8_t *)queue->base_addr; + struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}}; + struct icp_qat_fw_comn_resp response = {{0}}; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct qat_sym_session *session; + phys_addr_t phy_src_addr; + uint64_t *src_data_addr; + int ret; + uint8_t cipher_offset = 18; + uint8_t crc_offset = 6; + uint8_t ciphertext[34] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C, + 0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6, + /* CRC */ + 0x54, 0x85, 0xF8, 0x32 + }; + uint8_t plaintext[34] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA, + 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, + /* CRC */ + 0xFF, 0xFF, 0xFF, 0xFF + }; + uint8_t key[16] = { + 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, + 0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 + }; + uint8_t iv[16] = { + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 + }; + + session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0); + if (session == NULL) + return -EINVAL; + + /* Verify the session physical address is known */ + rte_iova_t session_paddr = rte_mem_virt2iova(session); + if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Session physical address unknown."); + return -EINVAL; + } + + /* Prepare the LA bulk request */ + ret = qat_cipher_crc_cap_msg_sess_prepare(session, + session_paddr, + key, + sizeof(key), + qp->qat_dev_gen); + if (ret < 0) { + 
rte_free(session); + /* Returning 0 here to allow qp setup to continue, but + * indicate that Cipher-CRC offload is not supported on the + * device + */ + return 0; + } + + cipher_crc_cap_msg = session->fw_req; + + src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0); + if (src_data_addr == NULL) { + rte_free(session); + return -EINVAL; + } + + rte_memcpy(src_data_addr, plaintext, sizeof(plaintext)); + + phy_src_addr = rte_mem_virt2iova(src_data_addr); + if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Source physical address unknown."); + return -EINVAL; + } + + cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr; + cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext); + cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr; + cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext); + + cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv)); + + cipher_param->cipher_offset = cipher_offset; + cipher_param->cipher_length = sizeof(plaintext) - cipher_offset; + auth_param->auth_off = crc_offset; + auth_param->auth_len = sizeof(plaintext) - + crc_offset - + RTE_ETHER_CRC_LEN; + + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + cipher_crc_cap_msg.comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); +#endif + + /* Send the cipher_crc_cap_msg request */ + memcpy(base_addr + queue->tail, + &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); + queue->tail = adf_modulo(queue->tail + queue->msg_size, + queue->modulo_mask); + txq_write_tail(qp->qat_dev_gen, qp, queue); + + /* Check for response and verify data is same as ciphertext */ + if (qat_cq_dequeue_response(qp, &response)) { +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + 
QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response, + sizeof(response)); +#endif + + if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != 0) + ret = 0; /* Cipher-CRC offload not supported */ + else + ret = 1; + } else { + ret = -EINVAL; + } + + rte_free(src_data_addr); + rte_free(session); + return ret; +} +#endif + __rte_weak int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 66f00943a5..d19fc387e4 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev, int qat_cq_get_fw_version(struct qat_qp *qp); +#ifdef BUILD_QAT_SYM +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp); +#endif + /* Needed for weak function*/ int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index 60ca0fc0d2..1f3e2b1d99 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -163,7 +163,7 @@ qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, QAT_LOG(DEBUG, "unknown QAT firmware version"); /* set capabilities based on the fw version */ - qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | + qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID | ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? 
QAT_SYM_CAP_MIXED_CRYPTO : 0); return 0; diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 524c291340..70942906ea 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, cipher_ofs = op->sym->cipher.data.offset >> 3; break; case 0: - cipher_len = op->sym->cipher.data.length; - cipher_ofs = op->sym->cipher.data.offset; + if (ctx->bpi_ctx) { + cipher_len = qat_bpicipher_preprocess(ctx, op); + cipher_ofs = op->sym->cipher.data.offset; + } else { + cipher_len = op->sym->cipher.data.length; + cipher_ofs = op->sym->cipher.data.offset; + } break; default: QAT_DP_LOG(ERR, @@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len); - /* digest in buffer check. Needed only for wireless algos */ - if (ret == 1) { + /* digest in buffer check. Needed only for wireless algos + * or combined cipher-crc operations + */ + if (ret == 1 || ctx->bpi_ctx) { /* Handle digest-encrypted cases, i.e. 
* auth-gen-then-cipher-encrypt and * cipher-decrypt-then-auth-verify @@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_ofs + auth_len < cipher_ofs + cipher_len) && - (digest->iova == auth_end_iova)) + if (((auth_ofs + auth_len < cipher_ofs + cipher_len) && + (digest->iova == auth_end_iova)) || + ctx->bpi_ctx) max_len = RTE_MAX(max_len, auth_ofs + auth_len + ctx->digest_length); } @@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx, auth_param->auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_param->auth_off + auth_param->auth_len < + if (((auth_param->auth_off + auth_param->auth_len < cipher_param->cipher_offset + cipher_param->cipher_length) && - (digest->iova == auth_iova_end)) { + (digest->iova == auth_iova_end)) || ctx->bpi_ctx) { /* Handle partial digest encryption */ if (cipher_param->cipher_offset + cipher_param->cipher_length < auth_param->auth_off + auth_param->auth_len + diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 91d5cfa71d..590eaa0057 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { /* do_auth = 0; do_cipher = 1; */ build_request = qat_sym_build_op_cipher_gen1; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + /* do_auth = 1; do_cipher = 1; */ + build_request = qat_sym_build_op_chain_gen1; + handle_mixed = 1; } if (build_request) diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c index 84c26a8062..861679373b 100644 --- a/drivers/crypto/qat/qat_crypto.c +++ b/drivers/crypto/qat/qat_crypto.c @@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, 
qat_asym_init_op_cookie(qp->op_cookies[i]); } - return ret; + if (qat_private->cipher_crc_offload_enable) { + ret = qat_cq_get_fw_cipher_crc_cap(qp); + if (ret < 0) { + qat_cryptodev_qp_release(dev, qp_id); + return ret; + } + + if (ret != 0) + QAT_LOG(DEBUG, "Cipher CRC supported on QAT device"); + else + QAT_LOG(DEBUG, "Cipher CRC not supported on QAT device"); + + /* Only send the cipher crc offload capability message once */ + qat_private->cipher_crc_offload_enable = 0; + /* Set cipher crc offload indicator */ + if (ret) + qat_private->internal_capabilities |= + QAT_SYM_CAP_CIPHER_CRC; + } + + return 0; } diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 6fe1326c51..e20f16236e 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -36,6 +36,7 @@ struct qat_cryptodev_private { /* Shared memzone for storing capabilities */ uint16_t min_enq_burst_threshold; uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ + bool cipher_crc_offload_enable; enum qat_service_type service_type; }; diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 08e92191a3..345c845325 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME)) internals->min_enq_burst_threshold = qat_dev_cmd_param[i].val; + if (!strcmp(qat_dev_cmd_param[i].name, + SYM_CIPHER_CRC_ENABLE_NAME)) + internals->cipher_crc_offload_enable = + qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB)) qat_ipsec_mb_lib = qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_CMD_SLICE_MAP)) diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 9a4251e08b..3d841d0eba 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -32,6 +32,7 @@ /* Internal capabilities */ #define 
QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) +#define QAT_SYM_CAP_CIPHER_CRC (1 << 1) #define QAT_SYM_CAP_VALID (1 << 31) /** @@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops) if (ctx == NULL || ctx->bpi_ctx == NULL) continue; - qat_crc_generate(ctx, op); + if (ctx->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_CRC) + qat_crc_generate(ctx, op); } } } @@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie, if (sess->bpi_ctx) { qat_bpicipher_postprocess(sess, rx_op); #ifdef RTE_LIB_SECURITY - if (is_docsis_sec) + if (is_docsis_sec && sess->qat_cmd != + ICP_QAT_FW_LA_CMD_CIPHER_CRC) qat_crc_verify(sess, rx_op); #endif } diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 466482d225..c0217654c1 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -27,6 +27,7 @@ #include <rte_crypto_sym.h> #ifdef RTE_LIB_SECURITY #include <rte_security_driver.h> +#include <rte_ether.h> #endif #include "qat_logs.h" @@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void) extern int qat_ipsec_mb_lib; +#define ETH_CRC32_POLYNOMIAL 0x04c11db7 +#define ETH_CRC32_INIT_VAL 0xffffffff +#define ETH_CRC32_XOR_OUT 0xffffffff +#define ETH_CRC32_POLYNOMIAL_BE RTE_BE32(ETH_CRC32_POLYNOMIAL) +#define ETH_CRC32_INIT_VAL_BE RTE_BE32(ETH_CRC32_INIT_VAL) +#define ETH_CRC32_XOR_OUT_BE RTE_BE32(ETH_CRC32_XOR_OUT) + /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ static const uint8_t sha1InitialState[] = { 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba, @@ -115,6 +123,10 @@ qat_sym_cd_cipher_set(struct qat_sym_session *cd, const uint8_t *enckey, uint32_t enckeylen); +static int +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, + enum qat_device_gen qat_dev_gen); + static int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, const uint8_t *authkey, @@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc, uint32_t aad_length, 
uint32_t digestsize, unsigned int operation); + static void qat_sym_session_init_common_hdr(struct qat_sym_session *session); @@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, case ICP_QAT_FW_LA_CMD_MGF1: case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP: case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP: + case ICP_QAT_FW_LA_CMD_CIPHER_CRC: case ICP_QAT_FW_LA_CMD_DELIMITER: QAT_LOG(ERR, "Unsupported Service %u", session->qat_cmd); @@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, (void *)session); } +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen) +{ + int ret; + + /* Set content descriptor physical address */ + session->cd_paddr = session_paddr + + offsetof(struct qat_sym_session, cd); + + /* Set up some pre-requisite variables */ + session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; + session->is_ucs = 0; + session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC; + session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE; + session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128; + session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; + session->is_auth = 1; + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; + session->auth_op = ICP_QAT_HW_AUTH_GENERATE; + session->digest_length = RTE_ETHER_CRC_LEN; + + ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen); + if (ret < 0) + return -EINVAL; + + ret = qat_sym_cd_crc_set(session, qat_dev_gen); + if (ret < 0) + return -EINVAL; + + qat_sym_session_finalize(session); + + return 0; +} + static int qat_sym_session_handle_single_pass(struct qat_sym_session *session, const struct rte_crypto_aead_xform *aead_xform) @@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; + } else if 
(cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; + cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; } else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) { QAT_LOG(ERR, "Invalid param, must be a cipher command."); return -EFAULT; @@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct rte_security_session_conf *conf) return -EINVAL; } +static int +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, + enum qat_device_gen qat_dev_gen) +{ + struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2; + struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3; + struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4; + struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req; + struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + void *ptr = &req_tmpl->cd_ctrl; + struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr; + struct icp_qat_fw_la_auth_req_params *crc_param = + (struct icp_qat_fw_la_auth_req_params *) + ((char *)&req_tmpl->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + struct icp_qat_fw_ucs_slice_cipher_config crc_cfg; + uint16_t crc_cfg_offset, cd_size; + + crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd); + + switch (qat_dev_gen) { + case QAT_GEN2: + crc_cd_gen2 = + (struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen2->flags = 0; + crc_cd_gen2->initial_crc = 0; + memset(&crc_cd_gen2->reserved1, + 0, + sizeof(crc_cd_gen2->reserved1)); + memset(&crc_cd_gen2->reserved2, + 0, + sizeof(crc_cd_gen2->reserved2)); + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd); + break; + case QAT_GEN3: + crc_cd_gen3 = + (struct icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen3->flags = ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1); + crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL; + crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL; + crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT; + memset(&crc_cd_gen3->reserved1, + 0, + sizeof(crc_cd_gen3->reserved1)); + memset(&crc_cd_gen3->reserved2, + 0, + 
sizeof(crc_cd_gen3->reserved2)); + crc_cd_gen3->reserved3 = 0; + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd); + break; + case QAT_GEN4: + crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE; + crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL; + crc_cfg.hash_cmp_val = 0; + crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT; + crc_cfg.associated_data_len_in_bytes = 0; + crc_cfg.crc_reflect_out = + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED; + crc_cfg.crc_reflect_in = + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED; + crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32; + + crc_cd_gen4 = + (struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen4->ucs_config[0] = + ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg); + crc_cd_gen4->ucs_config[1] = + ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg); + crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE; + crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE; + crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE; + crc_cd_gen4->reserved1 = 0; + crc_cd_gen4->reserved2 = 0; + crc_cd_gen4->reserved3 = 0; + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd); + break; + default: + return -EINVAL; + } + + crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3; + crc_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED; + crc_cd_ctrl->inner_res_sz = cdesc->digest_length; + crc_cd_ctrl->final_sz = cdesc->digest_length; + crc_cd_ctrl->inner_state1_sz = 0; + crc_cd_ctrl->inner_state2_sz = 0; + crc_cd_ctrl->inner_state2_offset = 0; + crc_cd_ctrl->outer_prefix_sz = 0; + crc_cd_ctrl->outer_config_offset = 0; + crc_cd_ctrl->outer_state1_sz = 0; + crc_cd_ctrl->outer_res_sz = 0; + crc_cd_ctrl->outer_prefix_offset = 0; + + crc_param->auth_res_sz = cdesc->digest_length; + crc_param->u2.aad_sz = 0; + crc_param->hash_state_sz = 0; + + cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd; + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; + cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3; + + return 0; +} + +static int 
+qat_sym_session_configure_crc(struct rte_cryptodev *dev, + const struct rte_crypto_sym_xform *cipher_xform, + struct qat_sym_session *session) +{ + struct qat_cryptodev_private *internals = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; + int ret; + + session->is_auth = 1; + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; + session->auth_op = cipher_xform->cipher.op == + RTE_CRYPTO_CIPHER_OP_ENCRYPT ? + ICP_QAT_HW_AUTH_GENERATE : + ICP_QAT_HW_AUTH_VERIFY; + session->digest_length = RTE_ETHER_CRC_LEN; + + ret = qat_sym_cd_crc_set(session, qat_dev_gen); + if (ret < 0) + return ret; + + return 0; +} + static int qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, struct rte_security_session_conf *conf, void *session_private, @@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) { QAT_LOG(ERR, "Unsupported xform chain requested"); return -ENOTSUP; + } else if (internals->internal_capabilities + & QAT_SYM_CAP_CIPHER_CRC) { + qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC; } session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id; ret = qat_sym_session_configure_cipher(dev, xform, session); if (ret < 0) return ret; + + if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + ret = qat_sym_session_configure_crc(dev, xform, session); + if (ret < 0) + return ret; + } qat_sym_session_finalize(session); return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev, diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h index 6322d7e3bc..9b5d11ac88 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -46,6 +46,12 @@ ICP_QAT_HW_CIPHER_KEY_CONVERT, \ ICP_QAT_HW_CIPHER_DECRYPT) +#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \ + (((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \ + QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \ + 
((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \ + QAT_GEN3_COMP_REFLECT_OUT_BITPOS)) + #define QAT_AES_CMAC_CONST_RB 0x87 #define QAT_CRYPTO_SLICE_SPC 1 @@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, /* Common content descriptor */ struct qat_sym_cd { struct icp_qat_hw_cipher_algo_blk cipher; - struct icp_qat_hw_auth_algo_blk hash; + union { + struct icp_qat_hw_auth_algo_blk hash; + struct icp_qat_hw_gen2_crc_cd crc_gen2; + struct icp_qat_hw_gen3_crc_cd crc_gen3; + struct icp_qat_hw_gen4_crc_cd crc_gen4; + }; } __rte_packed __rte_cache_aligned; struct qat_sym_session { @@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev, unsigned int qat_sym_session_get_private_size(struct rte_cryptodev *dev); +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen); + void qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag proto_flags); + int qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg); int -- 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
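The ETH_CRC32_* constants introduced in this patch (polynomial 0x04C11DB7, initial value and XOR-out 0xFFFFFFFF, with reflect-in/reflect-out enabled via ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1)) describe the standard Ethernet CRC-32. A minimal software reference implementation, useful for cross-checking the offloaded digest, is sketched below; the function name is illustrative and not part of the patch. In the reflected bit order the polynomial 0x04C11DB7 becomes 0xEDB88320.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reflected (LSB-first) CRC-32 matching the patch's parameters:
 * poly 0x04C11DB7 (reflected: 0xEDB88320), init 0xFFFFFFFF,
 * xor-out 0xFFFFFFFF, reflect-in and reflect-out enabled. */
static uint32_t eth_crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;            /* ETH_CRC32_INIT_VAL */

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            /* Conditionally XOR the reflected polynomial per bit. */
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;              /* ETH_CRC32_XOR_OUT */
}
```

With these parameters the well-known check input "123456789" yields 0xCBF43926, which can be used to validate any alternative implementation against the hardware configuration.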
* [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature 2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2023-03-09 14:33 ` [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan 2023-03-09 14:33 ` [PATCH v2 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan @ 2023-03-13 14:26 ` Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan ` (2 more replies) 2 siblings, 3 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-13 14:26 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan This patchset adds support to the QAT PMD for combined cipher-crc processing for DOCSIS on the QAT device. The current QAT PMD implementation of cipher-crc calculates CRC in software and uses QAT for encryption/decryption offload. Note: The previous code-path is still retained for QAT firmware versions without support for combined cipher-crc offload. - Support has been added to DPDK QAT PMD to enable the use of the cipher-crc offload feature on gen2/gen3/gen4 QAT devices. - A cipher-crc offload capability check has been added to the queue pair setup function to determine if the feature is supported on the QAT device. 
v3: updated the file qat.rst with details of new configuration Kevin O'Sullivan (2): crypto/qat: added cipher-crc offload support crypto/qat: added cipher-crc cap check doc/guides/cryptodevs/qat.rst | 23 +++ drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++ drivers/common/qat/qat_device.c | 12 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 157 +++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 196 ++++++++++++++++++- drivers/crypto/qat/qat_sym_session.h | 21 +- 17 files changed, 600 insertions(+), 18 deletions(-) -- 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
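The capability check described in this cover letter records its result as a bit in the PMD's internal capability word, which later code paths consult per session. A minimal sketch of that gating pattern is shown below; the flag values are taken from qat_sym.h in this series, but the function name is illustrative, not the driver's API.

```c
#include <assert.h>
#include <stdint.h>

/* Internal capability flags as defined by this series in qat_sym.h. */
#define QAT_SYM_CAP_MIXED_CRYPTO (1u << 0)
#define QAT_SYM_CAP_CIPHER_CRC   (1u << 1)
#define QAT_SYM_CAP_VALID        (1u << 31)

/* Illustrative: OR the cipher-CRC bit into the capability word only when
 * the firmware probe reported support (probe_ret > 0), mirroring how queue
 * pair setup updates internal_capabilities in this patchset. */
static uint32_t
record_cipher_crc_cap(uint32_t internal_capabilities, int probe_ret)
{
    if (probe_ret > 0)
        internal_capabilities |= QAT_SYM_CAP_CIPHER_CRC;
    return internal_capabilities | QAT_SYM_CAP_VALID;
}
```

Note that patch 2/2 changes the gen2 setup path from `=` to `|=` when writing internal_capabilities, precisely so that a bit recorded earlier (such as the cipher-CRC capability) is not clobbered.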
* [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface 2023-03-13 14:26 ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan @ 2023-03-13 14:26 ` Kevin O'Sullivan 2023-03-16 12:24 ` Ji, Kai 2023-03-13 14:26 ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan 2023-04-18 13:39 ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2 siblings, 1 reply; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-13 14:26 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT firmware interface header files for the combined cipher-crc offload feature for DOCSIS on gen2/gen3/ gen4 QAT devices. The main change is that new structures have been added for the crc content descriptor for the various generations. Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> --- drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++++++++++ 3 files changed, 135 insertions(+), 2 deletions(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h index be10fc9bde..3aa17ae041 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw.h @@ -4,7 +4,6 @@ #ifndef _ICP_QAT_FW_H_ #define _ICP_QAT_FW_H_ #include <sys/types.h> -#include "icp_qat_hw.h" #define QAT_FIELD_SET(flags, val, bitpos, mask) \ { (flags) = (((flags) & (~((mask) << (bitpos)))) | \ diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h index c4901eb869..227a6cebc8 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h @@ -18,7 +18,8 @@ enum icp_qat_fw_la_cmd_id { ICP_QAT_FW_LA_CMD_MGF1 = 9, ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10, ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP 
= 11, - ICP_QAT_FW_LA_CMD_DELIMITER = 12 + ICP_QAT_FW_LA_CMD_CIPHER_CRC = 17, + ICP_QAT_FW_LA_CMD_DELIMITER = 18 }; #define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index 866147cd77..8b864e1630 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -4,6 +4,8 @@ #ifndef _ICP_QAT_HW_H_ #define _ICP_QAT_HW_H_ +#include "icp_qat_fw.h" + #define ADF_C4XXXIOV_VFLEGFUSES_OFFSET 0x4C #define ADF1_C4XXXIOV_VFLEGFUSES_LEN 4 @@ -260,14 +262,19 @@ enum icp_qat_hw_cipher_convert { }; #define QAT_CIPHER_MODE_BITPOS 4 +#define QAT_CIPHER_MODE_LE_BITPOS 28 #define QAT_CIPHER_MODE_MASK 0xF #define QAT_CIPHER_ALGO_BITPOS 0 +#define QAT_CIPHER_ALGO_LE_BITPOS 24 #define QAT_CIPHER_ALGO_MASK 0xF #define QAT_CIPHER_CONVERT_BITPOS 9 +#define QAT_CIPHER_CONVERT_LE_BITPOS 17 #define QAT_CIPHER_CONVERT_MASK 0x1 #define QAT_CIPHER_DIR_BITPOS 8 +#define QAT_CIPHER_DIR_LE_BITPOS 16 #define QAT_CIPHER_DIR_MASK 0x1 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS 18 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F #define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 #define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 @@ -281,7 +288,9 @@ enum icp_qat_hw_cipher_convert { #define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 #define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF #define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +#define QAT_CIPHER_AEAD_AAD_SIZE_MASK 0x3FFF #define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +#define QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS 0 #define ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(aad_size) \ ({ \ typeof(aad_size) aad_size1 = aad_size; \ @@ -362,6 +371,28 @@ struct icp_qat_hw_cipher_algo_blk { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +struct icp_qat_hw_gen2_crc_cd { + uint32_t flags; + uint32_t reserved1[5]; + uint32_t initial_crc; + uint32_t reserved2[3]; +}; + +#define 
QAT_GEN3_COMP_REFLECT_IN_BITPOS 17 +#define QAT_GEN3_COMP_REFLECT_IN_MASK 0x1 +#define QAT_GEN3_COMP_REFLECT_OUT_BITPOS 18 +#define QAT_GEN3_COMP_REFLECT_OUT_MASK 0x1 + +struct icp_qat_hw_gen3_crc_cd { + uint32_t flags; + uint32_t reserved1[3]; + uint32_t polynomial; + uint32_t xor_val; + uint32_t reserved2[2]; + uint32_t initial_crc; + uint32_t reserved3; +}; + struct icp_qat_hw_ucs_cipher_config { uint32_t val; uint32_t reserved[3]; @@ -372,6 +403,108 @@ struct icp_qat_hw_cipher_algo_blk20 { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +enum icp_qat_hw_ucs_cipher_reflect_out { + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_reflect_in { + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_crc_encoding { + ICP_QAT_HW_CIPHER_UCS_CRC_NOT_REQUIRED = 0, + ICP_QAT_HW_CIPHER_UCS_CRC32 = 1, + ICP_QAT_HW_CIPHER_UCS_CRC64 = 2, +}; + +#define QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS 17 +#define QAT_CIPHER_UCS_REFLECT_OUT_MASK 0x1 +#define QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS 16 +#define QAT_CIPHER_UCS_REFLECT_IN_MASK 0x1 +#define QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS 14 +#define QAT_CIPHER_UCS_CRC_ENCODING_MASK 0x3 + +struct icp_qat_fw_ucs_slice_cipher_config { + enum icp_qat_hw_cipher_mode mode; + enum icp_qat_hw_cipher_algo algo; + uint16_t hash_cmp_val; + enum icp_qat_hw_cipher_dir dir; + uint16_t associated_data_len_in_bytes; + enum icp_qat_hw_ucs_cipher_reflect_out crc_reflect_out; + enum icp_qat_hw_ucs_cipher_reflect_in crc_reflect_in; + enum icp_qat_hw_ucs_cipher_crc_encoding crc_encoding; +}; + +struct icp_qat_hw_gen4_crc_cd { + uint32_t ucs_config[4]; + uint32_t polynomial; + uint32_t reserved1; + uint32_t xor_val; + uint32_t reserved2; + uint32_t initial_crc; + uint32_t reserved3; +}; + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER( + struct 
icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.mode, + QAT_CIPHER_MODE_LE_BITPOS, + QAT_CIPHER_MODE_MASK); + + QAT_FIELD_SET(val32, + csr.algo, + QAT_CIPHER_ALGO_LE_BITPOS, + QAT_CIPHER_ALGO_MASK); + + QAT_FIELD_SET(val32, + csr.hash_cmp_val, + QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS, + QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); + + QAT_FIELD_SET(val32, + csr.dir, + QAT_CIPHER_DIR_LE_BITPOS, + QAT_CIPHER_DIR_MASK); + + return rte_bswap32(val32); +} + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER( + struct icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.associated_data_len_in_bytes, + QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS, + QAT_CIPHER_AEAD_AAD_SIZE_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_out, + QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_OUT_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_in, + QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_IN_MASK); + + QAT_FIELD_SET(val32, + csr.crc_encoding, + QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS, + QAT_CIPHER_UCS_CRC_ENCODING_MASK); + + return rte_bswap32(val32); +} + /* ========================================================================= */ /* COMPRESSION SLICE */ /* ========================================================================= */ -- 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
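The ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER/UPPER helpers above compose the UCS slice config word field-by-field with QAT_FIELD_SET and then byte-swap it into the layout the firmware expects. The bit packing can be exercised standalone as below; the macro and bit-position constants are copied from this series, the helper function and the hard-coded field values are illustrative, and rte_bswap32 is omitted so the raw field layout stays visible.

```c
#include <assert.h>
#include <stdint.h>

/* QAT_FIELD_SET as defined in icp_qat_fw.h: clear the field at its bit
 * position, then OR in the masked value. */
#define QAT_FIELD_SET(flags, val, bitpos, mask) \
{ (flags) = (((flags) & (~((mask) << (bitpos)))) | \
    (((val) & (mask)) << (bitpos))); }

/* Bit positions/masks from this patch for the upper UCS config word. */
#define QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS 17
#define QAT_CIPHER_UCS_REFLECT_OUT_MASK 0x1
#define QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS 16
#define QAT_CIPHER_UCS_REFLECT_IN_MASK 0x1
#define QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS 14
#define QAT_CIPHER_UCS_CRC_ENCODING_MASK 0x3

/* Illustrative: pack reflect-out = enabled (1), reflect-in = enabled (1),
 * crc_encoding = CRC32 (1), as the GEN4 cipher-CRC path configures. */
static uint32_t build_upper_crc_bits(void)
{
    uint32_t val32 = 0;

    QAT_FIELD_SET(val32, 1u, QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS,
        QAT_CIPHER_UCS_REFLECT_OUT_MASK);
    QAT_FIELD_SET(val32, 1u, QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS,
        QAT_CIPHER_UCS_REFLECT_IN_MASK);
    QAT_FIELD_SET(val32, 1u, QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS,
        QAT_CIPHER_UCS_CRC_ENCODING_MASK);

    /* (1 << 17) | (1 << 16) | (1 << 14) */
    return val32;
}
```

Because QAT_FIELD_SET first clears the target field, it is safe to call repeatedly on the same word, which is what lets the builders above accumulate all fields into a single uint32_t before the final byte swap.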
* RE: [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface 2023-03-13 14:26 ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan @ 2023-03-16 12:24 ` Ji, Kai 0 siblings, 0 replies; 17+ messages in thread From: Ji, Kai @ 2023-03-16 12:24 UTC (permalink / raw) To: O'Sullivan, Kevin, dev; +Cc: Coyle, David Acked-by: Kai Ji <kai.ji@intel.com> > -----Original Message----- > From: O'Sullivan, Kevin <kevin.osullivan@intel.com> > Sent: Monday, March 13, 2023 2:26 PM > To: dev@dpdk.org > Cc: Ji, Kai <kai.ji@intel.com>; O'Sullivan, Kevin > <kevin.osullivan@intel.com>; Coyle, David <david.coyle@intel.com> > Subject: [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw > interface > > This patch adds support to the QAT firmware interface header files for the > combined cipher-crc offload feature for DOCSIS on gen2/gen3/ > gen4 QAT devices. The main change is that new structures have been added > for the crc content descriptor for the various generations. > > Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> > Signed-off-by: David Coyle <david.coyle@intel.com> > --- > 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v3 2/2] crypto/qat: add cipher-crc offload support 2023-03-13 14:26 ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan @ 2023-03-13 14:26 ` Kevin O'Sullivan 2023-03-16 12:25 ` Ji, Kai 2023-03-16 19:15 ` [EXT] " Akhil Goyal 2023-04-18 13:39 ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2 siblings, 2 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-03-13 14:26 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT symmetric crypto PMD for combined cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 QAT devices. A new parameter called qat_sym_cipher_crc_enable has been added to the PMD, which can be set on process start as follows: -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 When enabled, a capability check for the combined cipher-crc offload feature is triggered to the QAT firmware during queue pair initialization. If supported by the firmware, any subsequent runtime DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the QAT device by setting up the content descriptor and request accordingly. If the combined DOCSIS cipher-crc feature is not supported by the firmware, the CRC continues to be calculated within the PMD, with just the cipher portion of the request being offloaded to the QAT device. 
Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> --- v3: updated the file qat.rst with details of new configuration --- doc/guides/cryptodevs/qat.rst | 23 +++ drivers/common/qat/qat_device.c | 12 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 157 +++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 196 ++++++++++++++++++- drivers/crypto/qat/qat_sym_session.h | 21 +- 14 files changed, 465 insertions(+), 16 deletions(-) diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst index ef754106a8..32e0d8a562 100644 --- a/doc/guides/cryptodevs/qat.rst +++ b/doc/guides/cryptodevs/qat.rst @@ -294,6 +294,29 @@ by comma. When the same parameter is used more than once first occurrence of the is used. Maximum threshold that can be set is 32. + +Running QAT PMD with Cipher-CRC offload feature +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Support has been added to the QAT symmetric crypto PMD for combined Cipher-CRC offload, +primarily for the Crypto-CRC DOCSIS security protocol, on GEN2/GEN3/GEN4 QAT devices. + +The following parameter enables a Cipher-CRC offload capability check to determine +if the feature is supported on the QAT device. + +- qat_sym_cipher_crc_enable + +When enabled, a capability check for the combined Cipher-CRC offload feature is triggered +to the QAT firmware during queue pair initialization. If supported by the firmware, +any subsequent runtime Crypto-CRC DOCSIS security protocol requests handled by the QAT PMD +are offloaded to the QAT device by setting up the content descriptor and request accordingly. 
+If not supported, the CRC is calculated by the QAT PMD using the NET CRC API. + +To use this feature the user must set the parameter on process start as a device additional parameter:: + + -a 03:01.1,qat_sym_cipher_crc_enable=1 + + Running QAT PMD with Intel IPSEC MB library for symmetric precomputes function ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 8bce2ac073..308c59c39f 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param } else { memcpy(value_str, arg2, iter); value = strtol(value_str, NULL, 10); - if (value > MAX_QP_THRESHOLD_SIZE) { + if (strcmp(param, + SYM_CIPHER_CRC_ENABLE_NAME) == 0) { + if (value < 0 || value > 1) { + QAT_LOG(DEBUG, "The value for" + " qat_sym_cipher_crc_enable" + " should be set to 0 or 1," + " setting to 0"); + value = 0; + } + } else if (value > MAX_QP_THRESHOLD_SIZE) { QAT_LOG(DEBUG, "Exceeded max size of" " threshold, setting to %d", MAX_QP_THRESHOLD_SIZE); @@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, { SYM_ENQ_THRESHOLD_NAME, 0 }, { ASYM_ENQ_THRESHOLD_NAME, 0 }, { COMP_ENQ_THRESHOLD_NAME, 0 }, + { SYM_CIPHER_CRC_ENABLE_NAME, 0 }, [QAT_CMD_SLICE_MAP_POS] = { QAT_CMD_SLICE_MAP, 0}, { NULL, 0 }, }; diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index bc3da04238..4188474dde 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -21,8 +21,9 @@ #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold" #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold" #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold" +#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable" #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable" -#define QAT_CMD_SLICE_MAP_POS 4 +#define QAT_CMD_SLICE_MAP_POS 5 #define 
MAX_QP_THRESHOLD_SIZE 32 /** diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 9cbd19a481..1ce89c265f 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -11,6 +11,9 @@ #include <bus_pci_driver.h> #include <rte_atomic.h> #include <rte_prefetch.h> +#ifdef RTE_LIB_SECURITY +#include <rte_ether.h> +#endif #include "qat_logs.h" #include "qat_device.h" @@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp) return -EINVAL; } +#ifdef BUILD_QAT_SYM +/* Sends an LA bulk req message to determine if a QAT device supports Cipher-CRC + * offload. This assumes that there are no inflight messages, i.e. assumes + * there's space on the qp, one message is sent and only one response + * collected. The status bit of the response and returned data are checked. + * Returns: + * 1 if status bit indicates success and returned data matches expected + * data (i.e. Cipher-CRC supported) + * 0 if status bit indicates error or returned data does not match expected + * data (i.e. 
Cipher-CRC not supported) + * Negative error code in case of error + */ +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp) +{ + struct qat_queue *queue = &(qp->tx_q); + uint8_t *base_addr = (uint8_t *)queue->base_addr; + struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}}; + struct icp_qat_fw_comn_resp response = {{0}}; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct qat_sym_session *session; + phys_addr_t phy_src_addr; + uint64_t *src_data_addr; + int ret; + uint8_t cipher_offset = 18; + uint8_t crc_offset = 6; + uint8_t ciphertext[34] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C, + 0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6, + /* CRC */ + 0x54, 0x85, 0xF8, 0x32 + }; + uint8_t plaintext[34] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA, + 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, + /* CRC */ + 0xFF, 0xFF, 0xFF, 0xFF + }; + uint8_t key[16] = { + 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, + 0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 + }; + uint8_t iv[16] = { + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 + }; + + session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0); + if (session == NULL) + return -EINVAL; + + /* Verify the session physical address is known */ + rte_iova_t session_paddr = rte_mem_virt2iova(session); + if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Session physical address unknown."); + return -EINVAL; + } + + /* Prepare the LA bulk request */ + ret = qat_cipher_crc_cap_msg_sess_prepare(session, + session_paddr, + key, + sizeof(key), + qp->qat_dev_gen); + if (ret < 0) { + 
rte_free(session); + /* Returning 0 here to allow qp setup to continue, but + * indicate that Cipher-CRC offload is not supported on the + * device + */ + return 0; + } + + cipher_crc_cap_msg = session->fw_req; + + src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0); + if (src_data_addr == NULL) { + rte_free(session); + return -EINVAL; + } + + rte_memcpy(src_data_addr, plaintext, sizeof(plaintext)); + + phy_src_addr = rte_mem_virt2iova(src_data_addr); + if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Source physical address unknown."); + return -EINVAL; + } + + cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr; + cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext); + cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr; + cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext); + + cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv)); + + cipher_param->cipher_offset = cipher_offset; + cipher_param->cipher_length = sizeof(plaintext) - cipher_offset; + auth_param->auth_off = crc_offset; + auth_param->auth_len = sizeof(plaintext) - + crc_offset - + RTE_ETHER_CRC_LEN; + + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + cipher_crc_cap_msg.comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); +#endif + + /* Send the cipher_crc_cap_msg request */ + memcpy(base_addr + queue->tail, + &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); + queue->tail = adf_modulo(queue->tail + queue->msg_size, + queue->modulo_mask); + txq_write_tail(qp->qat_dev_gen, qp, queue); + + /* Check for response and verify data is same as ciphertext */ + if (qat_cq_dequeue_response(qp, &response)) { +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + 
QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response, + sizeof(response)); +#endif + + if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != 0) + ret = 0; /* Cipher-CRC offload not supported */ + else + ret = 1; + } else { + ret = -EINVAL; + } + + rte_free(src_data_addr); + rte_free(session); + return ret; +} +#endif + __rte_weak int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 66f00943a5..d19fc387e4 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev, int qat_cq_get_fw_version(struct qat_qp *qp); +#ifdef BUILD_QAT_SYM +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp); +#endif + /* Needed for weak function*/ int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index 60ca0fc0d2..1f3e2b1d99 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -163,7 +163,7 @@ qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, QAT_LOG(DEBUG, "unknown QAT firmware version"); /* set capabilities based on the fw version */ - qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | + qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID | ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? 
QAT_SYM_CAP_MIXED_CRYPTO : 0); return 0; diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 524c291340..70942906ea 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, cipher_ofs = op->sym->cipher.data.offset >> 3; break; case 0: - cipher_len = op->sym->cipher.data.length; - cipher_ofs = op->sym->cipher.data.offset; + if (ctx->bpi_ctx) { + cipher_len = qat_bpicipher_preprocess(ctx, op); + cipher_ofs = op->sym->cipher.data.offset; + } else { + cipher_len = op->sym->cipher.data.length; + cipher_ofs = op->sym->cipher.data.offset; + } break; default: QAT_DP_LOG(ERR, @@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len); - /* digest in buffer check. Needed only for wireless algos */ - if (ret == 1) { + /* digest in buffer check. Needed only for wireless algos + * or combined cipher-crc operations + */ + if (ret == 1 || ctx->bpi_ctx) { /* Handle digest-encrypted cases, i.e. 
* auth-gen-then-cipher-encrypt and * cipher-decrypt-then-auth-verify @@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_ofs + auth_len < cipher_ofs + cipher_len) && - (digest->iova == auth_end_iova)) + if (((auth_ofs + auth_len < cipher_ofs + cipher_len) && + (digest->iova == auth_end_iova)) || + ctx->bpi_ctx) max_len = RTE_MAX(max_len, auth_ofs + auth_len + ctx->digest_length); } @@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx, auth_param->auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_param->auth_off + auth_param->auth_len < + if (((auth_param->auth_off + auth_param->auth_len < cipher_param->cipher_offset + cipher_param->cipher_length) && - (digest->iova == auth_iova_end)) { + (digest->iova == auth_iova_end)) || ctx->bpi_ctx) { /* Handle partial digest encryption */ if (cipher_param->cipher_offset + cipher_param->cipher_length < auth_param->auth_off + auth_param->auth_len + diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 91d5cfa71d..590eaa0057 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { /* do_auth = 0; do_cipher = 1; */ build_request = qat_sym_build_op_cipher_gen1; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + /* do_auth = 1; do_cipher = 1; */ + build_request = qat_sym_build_op_chain_gen1; + handle_mixed = 1; } if (build_request) diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c index 84c26a8062..861679373b 100644 --- a/drivers/crypto/qat/qat_crypto.c +++ b/drivers/crypto/qat/qat_crypto.c @@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, 
qat_asym_init_op_cookie(qp->op_cookies[i]); } - return ret; + if (qat_private->cipher_crc_offload_enable) { + ret = qat_cq_get_fw_cipher_crc_cap(qp); + if (ret < 0) { + qat_cryptodev_qp_release(dev, qp_id); + return ret; + } + + if (ret != 0) + QAT_LOG(DEBUG, "Cipher CRC supported on QAT device"); + else + QAT_LOG(DEBUG, "Cipher CRC not supported on QAT device"); + + /* Only send the cipher crc offload capability message once */ + qat_private->cipher_crc_offload_enable = 0; + /* Set cipher crc offload indicator */ + if (ret) + qat_private->internal_capabilities |= + QAT_SYM_CAP_CIPHER_CRC; + } + + return 0; } diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 6fe1326c51..e20f16236e 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -36,6 +36,7 @@ struct qat_cryptodev_private { /* Shared memzone for storing capabilities */ uint16_t min_enq_burst_threshold; uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ + bool cipher_crc_offload_enable; enum qat_service_type service_type; }; diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 08e92191a3..345c845325 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME)) internals->min_enq_burst_threshold = qat_dev_cmd_param[i].val; + if (!strcmp(qat_dev_cmd_param[i].name, + SYM_CIPHER_CRC_ENABLE_NAME)) + internals->cipher_crc_offload_enable = + qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB)) qat_ipsec_mb_lib = qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_CMD_SLICE_MAP)) diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 9a4251e08b..3d841d0eba 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -32,6 +32,7 @@ /* Internal capabilities */ #define 
QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) +#define QAT_SYM_CAP_CIPHER_CRC (1 << 1) #define QAT_SYM_CAP_VALID (1 << 31) /** @@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops) if (ctx == NULL || ctx->bpi_ctx == NULL) continue; - qat_crc_generate(ctx, op); + if (ctx->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_CRC) + qat_crc_generate(ctx, op); } } } @@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie, if (sess->bpi_ctx) { qat_bpicipher_postprocess(sess, rx_op); #ifdef RTE_LIB_SECURITY - if (is_docsis_sec) + if (is_docsis_sec && sess->qat_cmd != + ICP_QAT_FW_LA_CMD_CIPHER_CRC) qat_crc_verify(sess, rx_op); #endif } diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 6ad6c7ee3a..c0217654c1 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -27,6 +27,7 @@ #include <rte_crypto_sym.h> #ifdef RTE_LIB_SECURITY #include <rte_security_driver.h> +#include <rte_ether.h> #endif #include "qat_logs.h" @@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void) extern int qat_ipsec_mb_lib; +#define ETH_CRC32_POLYNOMIAL 0x04c11db7 +#define ETH_CRC32_INIT_VAL 0xffffffff +#define ETH_CRC32_XOR_OUT 0xffffffff +#define ETH_CRC32_POLYNOMIAL_BE RTE_BE32(ETH_CRC32_POLYNOMIAL) +#define ETH_CRC32_INIT_VAL_BE RTE_BE32(ETH_CRC32_INIT_VAL) +#define ETH_CRC32_XOR_OUT_BE RTE_BE32(ETH_CRC32_XOR_OUT) + /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ static const uint8_t sha1InitialState[] = { 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba, @@ -115,6 +123,10 @@ qat_sym_cd_cipher_set(struct qat_sym_session *cd, const uint8_t *enckey, uint32_t enckeylen); +static int +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, + enum qat_device_gen qat_dev_gen); + static int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, const uint8_t *authkey, @@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc, uint32_t aad_length, 
uint32_t digestsize, unsigned int operation); + static void qat_sym_session_init_common_hdr(struct qat_sym_session *session); @@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, case ICP_QAT_FW_LA_CMD_MGF1: case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP: case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP: + case ICP_QAT_FW_LA_CMD_CIPHER_CRC: case ICP_QAT_FW_LA_CMD_DELIMITER: QAT_LOG(ERR, "Unsupported Service %u", session->qat_cmd); @@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, (void *)session); } +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen) +{ + int ret; + + /* Set content descriptor physical address */ + session->cd_paddr = session_paddr + + offsetof(struct qat_sym_session, cd); + + /* Set up some pre-requisite variables */ + session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; + session->is_ucs = 0; + session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC; + session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE; + session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128; + session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; + session->is_auth = 1; + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; + session->auth_op = ICP_QAT_HW_AUTH_GENERATE; + session->digest_length = RTE_ETHER_CRC_LEN; + + ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen); + if (ret < 0) + return -EINVAL; + + ret = qat_sym_cd_crc_set(session, qat_dev_gen); + if (ret < 0) + return -EINVAL; + + qat_sym_session_finalize(session); + + return 0; +} + static int qat_sym_session_handle_single_pass(struct qat_sym_session *session, const struct rte_crypto_aead_xform *aead_xform) @@ -697,7 +750,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev *dev, switch (auth_xform->algo) { case RTE_CRYPTO_AUTH_SM3: session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3; - session->auth_mode = 
ICP_QAT_HW_AUTH_MODE0; + session->auth_mode = ICP_QAT_HW_AUTH_MODE2; break; case RTE_CRYPTO_AUTH_SHA1: session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1; @@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc, ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, ICP_QAT_FW_SLICE_DRAM_WR); cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; + } else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; + cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; } else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) { QAT_LOG(ERR, "Invalid param, must be a cipher command."); return -EFAULT; @@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct rte_security_session_conf *conf) return -EINVAL; } +static int +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, + enum qat_device_gen qat_dev_gen) +{ + struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2; + struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3; + struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4; + struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req; + struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars; + void *ptr = &req_tmpl->cd_ctrl; + struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr; + struct icp_qat_fw_la_auth_req_params *crc_param = + (struct icp_qat_fw_la_auth_req_params *) + ((char *)&req_tmpl->serv_specif_rqpars + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + struct icp_qat_fw_ucs_slice_cipher_config crc_cfg; + uint16_t crc_cfg_offset, cd_size; + + crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd); + + switch (qat_dev_gen) { + case QAT_GEN2: + crc_cd_gen2 = + (struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen2->flags = 0; + crc_cd_gen2->initial_crc = 0; + memset(&crc_cd_gen2->reserved1, + 0, + sizeof(crc_cd_gen2->reserved1)); + memset(&crc_cd_gen2->reserved2, + 0, + sizeof(crc_cd_gen2->reserved2)); + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd); + break; + case QAT_GEN3: + crc_cd_gen3 = + (struct 
icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen3->flags = ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1); + crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL; + crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL; + crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT; + memset(&crc_cd_gen3->reserved1, + 0, + sizeof(crc_cd_gen3->reserved1)); + memset(&crc_cd_gen3->reserved2, + 0, + sizeof(crc_cd_gen3->reserved2)); + crc_cd_gen3->reserved3 = 0; + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd); + break; + case QAT_GEN4: + crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE; + crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL; + crc_cfg.hash_cmp_val = 0; + crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT; + crc_cfg.associated_data_len_in_bytes = 0; + crc_cfg.crc_reflect_out = + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED; + crc_cfg.crc_reflect_in = + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED; + crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32; + + crc_cd_gen4 = + (struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr; + crc_cd_gen4->ucs_config[0] = + ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg); + crc_cd_gen4->ucs_config[1] = + ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg); + crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE; + crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE; + crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE; + crc_cd_gen4->reserved1 = 0; + crc_cd_gen4->reserved2 = 0; + crc_cd_gen4->reserved3 = 0; + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd); + break; + default: + return -EINVAL; + } + + crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3; + crc_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED; + crc_cd_ctrl->inner_res_sz = cdesc->digest_length; + crc_cd_ctrl->final_sz = cdesc->digest_length; + crc_cd_ctrl->inner_state1_sz = 0; + crc_cd_ctrl->inner_state2_sz = 0; + crc_cd_ctrl->inner_state2_offset = 0; + crc_cd_ctrl->outer_prefix_sz = 0; + crc_cd_ctrl->outer_config_offset = 0; + crc_cd_ctrl->outer_state1_sz = 0; + crc_cd_ctrl->outer_res_sz = 0; + 
crc_cd_ctrl->outer_prefix_offset = 0; + + crc_param->auth_res_sz = cdesc->digest_length; + crc_param->u2.aad_sz = 0; + crc_param->hash_state_sz = 0; + + cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd; + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; + cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3; + + return 0; +} + +static int +qat_sym_session_configure_crc(struct rte_cryptodev *dev, + const struct rte_crypto_sym_xform *cipher_xform, + struct qat_sym_session *session) +{ + struct qat_cryptodev_private *internals = dev->data->dev_private; + enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen; + int ret; + + session->is_auth = 1; + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; + session->auth_op = cipher_xform->cipher.op == + RTE_CRYPTO_CIPHER_OP_ENCRYPT ? + ICP_QAT_HW_AUTH_GENERATE : + ICP_QAT_HW_AUTH_VERIFY; + session->digest_length = RTE_ETHER_CRC_LEN; + + ret = qat_sym_cd_crc_set(session, qat_dev_gen); + if (ret < 0) + return ret; + + return 0; +} + static int qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, struct rte_security_session_conf *conf, void *session_private, @@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) { QAT_LOG(ERR, "Unsupported xform chain requested"); return -ENOTSUP; + } else if (internals->internal_capabilities + & QAT_SYM_CAP_CIPHER_CRC) { + qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC; } session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id; ret = qat_sym_session_configure_cipher(dev, xform, session); if (ret < 0) return ret; + + if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + ret = qat_sym_session_configure_crc(dev, xform, session); + if (ret < 0) + return ret; + } qat_sym_session_finalize(session); return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev, diff --git a/drivers/crypto/qat/qat_sym_session.h 
b/drivers/crypto/qat/qat_sym_session.h index 6322d7e3bc..9b5d11ac88 100644 --- a/drivers/crypto/qat/qat_sym_session.h +++ b/drivers/crypto/qat/qat_sym_session.h @@ -46,6 +46,12 @@ ICP_QAT_HW_CIPHER_KEY_CONVERT, \ ICP_QAT_HW_CIPHER_DECRYPT) +#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \ + (((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \ + QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \ + ((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \ + QAT_GEN3_COMP_REFLECT_OUT_BITPOS)) + #define QAT_AES_CMAC_CONST_RB 0x87 #define QAT_CRYPTO_SLICE_SPC 1 @@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx, /* Common content descriptor */ struct qat_sym_cd { struct icp_qat_hw_cipher_algo_blk cipher; - struct icp_qat_hw_auth_algo_blk hash; + union { + struct icp_qat_hw_auth_algo_blk hash; + struct icp_qat_hw_gen2_crc_cd crc_gen2; + struct icp_qat_hw_gen3_crc_cd crc_gen3; + struct icp_qat_hw_gen4_crc_cd crc_gen4; + }; } __rte_packed __rte_cache_aligned; struct qat_sym_session { @@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev, unsigned int qat_sym_session_get_private_size(struct rte_cryptodev *dev); +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen); + void qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, struct icp_qat_fw_comn_req_hdr *header, enum qat_sym_proto_flag proto_flags); + int qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg); int -- 2.34.1
^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [PATCH v3 2/2] crypto/qat: add cipher-crc offload support 2023-03-13 14:26 ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan @ 2023-03-16 12:25 ` Ji, Kai 2023-03-16 19:15 ` [EXT] " Akhil Goyal 1 sibling, 0 replies; 17+ messages in thread From: Ji, Kai @ 2023-03-16 12:25 UTC (permalink / raw) To: O'Sullivan, Kevin, dev; +Cc: Coyle, David Acked-by: Kai Ji <kai.ji@intel.com> > -----Original Message----- > From: O'Sullivan, Kevin <kevin.osullivan@intel.com> > Sent: Monday, March 13, 2023 2:26 PM > To: dev@dpdk.org > Cc: Ji, Kai <kai.ji@intel.com>; O'Sullivan, Kevin > <kevin.osullivan@intel.com>; Coyle, David <david.coyle@intel.com> > Subject: [PATCH v3 2/2] crypto/qat: add cipher-crc offload support > > This patch adds support to the QAT symmetric crypto PMD for combined > cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 QAT > devices. > > A new parameter called qat_sym_cipher_crc_enable has been added to the PMD, > which can be set on process start as follows: > > -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 > > When enabled, a capability check for the combined cipher-crc offload > feature is triggered to the QAT firmware during queue pair initialization. > If supported by the firmware, any subsequent runtime DOCSIS cipher-crc > requests handled by the QAT PMD are offloaded to the QAT device by setting > up the content descriptor and request accordingly. > > If the combined DOCSIS cipher-crc feature is not supported by the firmware, > the CRC continues to be calculated within the PMD, with just the cipher > portion of the request being offloaded to the QAT device. > > Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> > Signed-off-by: David Coyle <david.coyle@intel.com> > --- > 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support 2023-03-13 14:26 ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan 2023-03-16 12:25 ` Ji, Kai @ 2023-03-16 19:15 ` Akhil Goyal 2023-03-20 16:28 ` O'Sullivan, Kevin 1 sibling, 1 reply; 17+ messages in thread From: Akhil Goyal @ 2023-03-16 19:15 UTC (permalink / raw) To: Kevin O'Sullivan, dev; +Cc: kai.ji, David Coyle > Subject: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support > Update title as crypto/qat: support cipher-crc offload > This patch adds support to the QAT symmetric crypto PMD for combined > cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 > QAT devices. > > A new parameter called qat_sym_cipher_crc_enable has been > added to the PMD, which can be set on process start as follows: A new devarg called .... > > -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 > > When enabled, a capability check for the combined cipher-crc offload > feature is triggered to the QAT firmware during queue pair > initialization. If supported by the firmware, any subsequent runtime > DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the > QAT device by setting up the content descriptor and request > accordingly. > > If the combined DOCSIS cipher-crc feature is not supported by the > firmware, the CRC continues to be calculated within the PMD, with just > the cipher portion of the request being offloaded to the QAT device. 
> > Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> > Signed-off-by: David Coyle <david.coyle@intel.com> > --- > v3: updated the file qat.rst with details of new configuration > --- > doc/guides/cryptodevs/qat.rst | 23 +++ > drivers/common/qat/qat_device.c | 12 +- > drivers/common/qat/qat_device.h | 3 +- > drivers/common/qat/qat_qp.c | 157 +++++++++++++++ > drivers/common/qat/qat_qp.h | 5 + > drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- > drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- > drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + > drivers/crypto/qat/qat_crypto.c | 22 ++- > drivers/crypto/qat/qat_crypto.h | 1 + > drivers/crypto/qat/qat_sym.c | 4 + > drivers/crypto/qat/qat_sym.h | 7 +- > drivers/crypto/qat/qat_sym_session.c | 196 ++++++++++++++++++- > drivers/crypto/qat/qat_sym_session.h | 21 +- > 14 files changed, 465 insertions(+), 16 deletions(-) > > diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst > index ef754106a8..32e0d8a562 100644 > --- a/doc/guides/cryptodevs/qat.rst > +++ b/doc/guides/cryptodevs/qat.rst > @@ -294,6 +294,29 @@ by comma. When the same parameter is used more > than once first occurrence of the > is used. > Maximum threshold that can be set is 32. > > + > +Running QAT PMD with Cipher-CRC offload feature > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > + > +Support has been added to the QAT symmetric crypto PMD for combined > Cipher-CRC offload, > +primarily for the Crypto-CRC DOCSIS security protocol, on GEN2/GEN3/GEN4 > QAT devices. > + > +The following parameter enables a Cipher-CRC offload capability check to > determine > +if the feature is supported on the QAT device. > + > +- qat_sym_cipher_crc_enable Use the word devarg to make it uniform across DPDK. > + > +When enabled, a capability check for the combined Cipher-CRC offload feature > is triggered > +to the QAT firmware during queue pair initialization. 
If supported by the > firmware, > +any subsequent runtime Crypto-CRC DOCSIS security protocol requests handled > by the QAT PMD > +are offloaded to the QAT device by setting up the content descriptor and > request accordingly. > +If not supported, the CRC is calculated by the QAT PMD using the NET CRC API. > + > +To use this feature the user must set the parameter on process start as a device > additional parameter:: > + > + -a 03:01.1,qat_sym_cipher_crc_enable=1 > + > + > Running QAT PMD with Intel IPSEC MB library for symmetric precomputes > function > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > ~~~~~~~~~~~~~ > > diff --git a/drivers/common/qat/qat_device.c > b/drivers/common/qat/qat_device.c > index 8bce2ac073..308c59c39f 100644 > --- a/drivers/common/qat/qat_device.c > +++ b/drivers/common/qat/qat_device.c > @@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct > qat_dev_cmd_param > } else { > memcpy(value_str, arg2, iter); > value = strtol(value_str, NULL, 10); > - if (value > MAX_QP_THRESHOLD_SIZE) { > + if (strcmp(param, > + SYM_CIPHER_CRC_ENABLE_NAME) == > 0) { > + if (value < 0 || value > 1) { > + QAT_LOG(DEBUG, "The value > for" > + " qat_sym_cipher_crc_enable" > + " should be set to 0 or 1," > + " setting to 0"); Do not split printable strings across multiple lines even if it cross max limit. Fix this across the patch. 
Moreover max limit is also increased from 80 -> 100 > + value = 0; > + } > + } else if (value > MAX_QP_THRESHOLD_SIZE) { > QAT_LOG(DEBUG, "Exceeded max size > of" > " threshold, setting to %d", > MAX_QP_THRESHOLD_SIZE); > @@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv > __rte_unused, > { SYM_ENQ_THRESHOLD_NAME, 0 }, > { ASYM_ENQ_THRESHOLD_NAME, 0 }, > { COMP_ENQ_THRESHOLD_NAME, 0 }, > + { SYM_CIPHER_CRC_ENABLE_NAME, 0 }, > [QAT_CMD_SLICE_MAP_POS] = { > QAT_CMD_SLICE_MAP, 0}, > { NULL, 0 }, > }; > diff --git a/drivers/common/qat/qat_device.h > b/drivers/common/qat/qat_device.h > index bc3da04238..4188474dde 100644 > --- a/drivers/common/qat/qat_device.h > +++ b/drivers/common/qat/qat_device.h > @@ -21,8 +21,9 @@ > #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold" > #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold" > #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold" > +#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable" > #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable" > -#define QAT_CMD_SLICE_MAP_POS 4 > +#define QAT_CMD_SLICE_MAP_POS 5 > #define MAX_QP_THRESHOLD_SIZE 32 > > /** > diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c > index 9cbd19a481..1ce89c265f 100644 > --- a/drivers/common/qat/qat_qp.c > +++ b/drivers/common/qat/qat_qp.c > @@ -11,6 +11,9 @@ > #include <bus_pci_driver.h> > #include <rte_atomic.h> > #include <rte_prefetch.h> > +#ifdef RTE_LIB_SECURITY > +#include <rte_ether.h> > +#endif > > #include "qat_logs.h" > #include "qat_device.h" > @@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp) > return -EINVAL; > } > > +#ifdef BUILD_QAT_SYM Where is this defined? Even no documentation about when to enable/disable it. > +/* Sends an LA bulk req message to determine if a QAT device supports Cipher- > CRC > + * offload. This assumes that there are no inflight messages, i.e. 
assumes > + * there's space on the qp, one message is sent and only one response > + * collected. The status bit of the response and returned data are checked. > + * Returns: > + * 1 if status bit indicates success and returned data matches expected > + * data (i.e. Cipher-CRC supported) > + * 0 if status bit indicates error or returned data does not match expected > + * data (i.e. Cipher-CRC not supported) > + * Negative error code in case of error > + */ > +int > +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp) > +{ > + struct qat_queue *queue = &(qp->tx_q); > + uint8_t *base_addr = (uint8_t *)queue->base_addr; > + struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}}; > + struct icp_qat_fw_comn_resp response = {{0}}; > + struct icp_qat_fw_la_cipher_req_params *cipher_param; > + struct icp_qat_fw_la_auth_req_params *auth_param; > + struct qat_sym_session *session; > + phys_addr_t phy_src_addr; > + uint64_t *src_data_addr; > + int ret; > + uint8_t cipher_offset = 18; > + uint8_t crc_offset = 6; > + uint8_t ciphertext[34] = { > + /* Outer protocol header */ > + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, > + /* Ethernet frame */ > + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, > + 0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C, > + 0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6, > + /* CRC */ > + 0x54, 0x85, 0xF8, 0x32 > + }; > + uint8_t plaintext[34] = { > + /* Outer protocol header */ > + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, > + /* Ethernet frame */ > + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, > + 0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA, > + 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, > + /* CRC */ > + 0xFF, 0xFF, 0xFF, 0xFF > + }; > + uint8_t key[16] = { > + 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, > + 0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 > + }; > + uint8_t iv[16] = { > + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, > + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 > + }; Is it not better to define them as macros? 
> + > + session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0); > + if (session == NULL) > + return -EINVAL; > + > + /* Verify the session physical address is known */ > + rte_iova_t session_paddr = rte_mem_virt2iova(session); > + if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) { > + QAT_LOG(ERR, "Session physical address unknown."); > + return -EINVAL; > + } > + > + /* Prepare the LA bulk request */ > + ret = qat_cipher_crc_cap_msg_sess_prepare(session, > + session_paddr, > + key, > + sizeof(key), > + qp->qat_dev_gen); > + if (ret < 0) { > + rte_free(session); > + /* Returning 0 here to allow qp setup to continue, but > + * indicate that Cipher-CRC offload is not supported on the > + * device > + */ > + return 0; > + } > + > + cipher_crc_cap_msg = session->fw_req; > + > + src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0); > + if (src_data_addr == NULL) { > + rte_free(session); > + return -EINVAL; > + } > + > + rte_memcpy(src_data_addr, plaintext, sizeof(plaintext)); > + > + phy_src_addr = rte_mem_virt2iova(src_data_addr); > + if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) { > + QAT_LOG(ERR, "Source physical address unknown."); > + return -EINVAL; > + } > + > + cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr; > + cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext); > + cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr; > + cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext); > + > + cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars; > + auth_param = (void *)((uint8_t *)cipher_param + > + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); > + > + rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv)); > + > + cipher_param->cipher_offset = cipher_offset; > + cipher_param->cipher_length = sizeof(plaintext) - cipher_offset; > + auth_param->auth_off = crc_offset; > + auth_param->auth_len = sizeof(plaintext) - > + crc_offset - > + RTE_ETHER_CRC_LEN; > + > + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( > + 
cipher_crc_cap_msg.comn_hdr.serv_specif_flags, > + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); > + > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG > + QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", > &cipher_crc_cap_msg, > + sizeof(cipher_crc_cap_msg)); > +#endif > + > + /* Send the cipher_crc_cap_msg request */ > + memcpy(base_addr + queue->tail, > + &cipher_crc_cap_msg, > + sizeof(cipher_crc_cap_msg)); > + queue->tail = adf_modulo(queue->tail + queue->msg_size, > + queue->modulo_mask); > + txq_write_tail(qp->qat_dev_gen, qp, queue); > + > + /* Check for response and verify data is same as ciphertext */ > + if (qat_cq_dequeue_response(qp, &response)) { > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG > + QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response, > + sizeof(response)); > +#endif > + > + if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != 0) > + ret = 0; /* Cipher-CRC offload not supported */ > + else > + ret = 1; > + } else { > + ret = -EINVAL; > + } > + > + rte_free(src_data_addr); > + rte_free(session); > + return ret; > +} > +#endif > + > __rte_weak int > qat_comp_process_response(void **op __rte_unused, uint8_t *resp > __rte_unused, > void *op_cookie __rte_unused, > diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h > index 66f00943a5..d19fc387e4 100644 > --- a/drivers/common/qat/qat_qp.h > +++ b/drivers/common/qat/qat_qp.h > @@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev, > int > qat_cq_get_fw_version(struct qat_qp *qp); > > +#ifdef BUILD_QAT_SYM > +int > +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp); > +#endif > + > /* Needed for weak function*/ > int > qat_comp_process_response(void **op __rte_unused, uint8_t *resp > __rte_unused, > diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c > b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c > index 60ca0fc0d2..1f3e2b1d99 100644 > --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c > +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c > @@ -163,7 +163,7 @@ 
qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev > *dev, uint16_t qp_id, > QAT_LOG(DEBUG, "unknown QAT firmware version"); > > /* set capabilities based on the fw version */ > - qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | > + qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID | > ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? > QAT_SYM_CAP_MIXED_CRYPTO : 0); > return 0; > diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h > b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h > index 524c291340..70942906ea 100644 > --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h > +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h > @@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct > rte_crypto_op *op, > cipher_ofs = op->sym->cipher.data.offset >> 3; > break; > case 0: > - cipher_len = op->sym->cipher.data.length; > - cipher_ofs = op->sym->cipher.data.offset; > + if (ctx->bpi_ctx) { > + cipher_len = qat_bpicipher_preprocess(ctx, op); > + cipher_ofs = op->sym->cipher.data.offset; > + } else { > + cipher_len = op->sym->cipher.data.length; > + cipher_ofs = op->sym->cipher.data.offset; > + } > break; > default: > QAT_DP_LOG(ERR, > @@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct > rte_crypto_op *op, > > max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len); > > - /* digest in buffer check. Needed only for wireless algos */ > - if (ret == 1) { > + /* digest in buffer check. Needed only for wireless algos > + * or combined cipher-crc operations > + */ > + if (ret == 1 || ctx->bpi_ctx) { > /* Handle digest-encrypted cases, i.e. 
> * auth-gen-then-cipher-encrypt and > * cipher-decrypt-then-auth-verify > @@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct > rte_crypto_op *op, > auth_len; > > /* Then check if digest-encrypted conditions are met */ > - if ((auth_ofs + auth_len < cipher_ofs + cipher_len) && > - (digest->iova == auth_end_iova)) > + if (((auth_ofs + auth_len < cipher_ofs + cipher_len) && > + (digest->iova == auth_end_iova)) || > + ctx->bpi_ctx) > max_len = RTE_MAX(max_len, auth_ofs + auth_len + > ctx->digest_length); > } > @@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session > *ctx, > auth_param->auth_len; > > /* Then check if digest-encrypted conditions are met */ > - if ((auth_param->auth_off + auth_param->auth_len < > + if (((auth_param->auth_off + auth_param->auth_len < > cipher_param->cipher_offset + cipher_param->cipher_length) > && > - (digest->iova == auth_iova_end)) { > + (digest->iova == auth_iova_end)) || ctx->bpi_ctx) { > /* Handle partial digest encryption */ > if (cipher_param->cipher_offset + cipher_param->cipher_length > < > auth_param->auth_off + auth_param->auth_len + > diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c > b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c > index 91d5cfa71d..590eaa0057 100644 > --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c > +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c > @@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev > __rte_unused, void *session) > } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { > /* do_auth = 0; do_cipher = 1; */ > build_request = qat_sym_build_op_cipher_gen1; > + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { > + /* do_auth = 1; do_cipher = 1; */ > + build_request = qat_sym_build_op_chain_gen1; > + handle_mixed = 1; > } > > if (build_request) > diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c > index 84c26a8062..861679373b 100644 > --- a/drivers/crypto/qat/qat_crypto.c > +++ b/drivers/crypto/qat/qat_crypto.c > 
@@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, > uint16_t qp_id, > qat_asym_init_op_cookie(qp->op_cookies[i]); > } > > - return ret; > + if (qat_private->cipher_crc_offload_enable) { > + ret = qat_cq_get_fw_cipher_crc_cap(qp); > + if (ret < 0) { > + qat_cryptodev_qp_release(dev, qp_id); > + return ret; > + } > + > + if (ret != 0) > + QAT_LOG(DEBUG, "Cipher CRC supported on QAT > device"); > + else > + QAT_LOG(DEBUG, "Cipher CRC not supported on QAT > device"); > + > + /* Only send the cipher crc offload capability message once */ > + qat_private->cipher_crc_offload_enable = 0; > + /* Set cipher crc offload indicator */ > + if (ret) > + qat_private->internal_capabilities |= > + QAT_SYM_CAP_CIPHER_CRC; > + } > + > + return 0; > } > diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h > index 6fe1326c51..e20f16236e 100644 > --- a/drivers/crypto/qat/qat_crypto.h > +++ b/drivers/crypto/qat/qat_crypto.h > @@ -36,6 +36,7 @@ struct qat_cryptodev_private { > /* Shared memzone for storing capabilities */ > uint16_t min_enq_burst_threshold; > uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ > + bool cipher_crc_offload_enable; > enum qat_service_type service_type; > }; > > diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c > index 08e92191a3..345c845325 100644 > --- a/drivers/crypto/qat/qat_sym.c > +++ b/drivers/crypto/qat/qat_sym.c > @@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device > *qat_pci_dev, > if (!strcmp(qat_dev_cmd_param[i].name, > SYM_ENQ_THRESHOLD_NAME)) > internals->min_enq_burst_threshold = > qat_dev_cmd_param[i].val; > + if (!strcmp(qat_dev_cmd_param[i].name, > + SYM_CIPHER_CRC_ENABLE_NAME)) > + internals->cipher_crc_offload_enable = > + qat_dev_cmd_param[i].val; > if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB)) > qat_ipsec_mb_lib = qat_dev_cmd_param[i].val; > if (!strcmp(qat_dev_cmd_param[i].name, > QAT_CMD_SLICE_MAP)) > diff --git 
a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h > index 9a4251e08b..3d841d0eba 100644 > --- a/drivers/crypto/qat/qat_sym.h > +++ b/drivers/crypto/qat/qat_sym.h > @@ -32,6 +32,7 @@ > > /* Internal capabilities */ > #define QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) > +#define QAT_SYM_CAP_CIPHER_CRC (1 << 1) > #define QAT_SYM_CAP_VALID (1 << 31) > > /** > @@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t > nb_ops) > if (ctx == NULL || ctx->bpi_ctx == NULL) > continue; > > - qat_crc_generate(ctx, op); > + if (ctx->qat_cmd != > ICP_QAT_FW_LA_CMD_CIPHER_CRC) > + qat_crc_generate(ctx, op); > } > } > } > @@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, > void *op_cookie, > if (sess->bpi_ctx) { > qat_bpicipher_postprocess(sess, rx_op); > #ifdef RTE_LIB_SECURITY > - if (is_docsis_sec) > + if (is_docsis_sec && sess->qat_cmd != > + > ICP_QAT_FW_LA_CMD_CIPHER_CRC) > qat_crc_verify(sess, rx_op); > #endif > } > diff --git a/drivers/crypto/qat/qat_sym_session.c > b/drivers/crypto/qat/qat_sym_session.c > index 6ad6c7ee3a..c0217654c1 100644 > --- a/drivers/crypto/qat/qat_sym_session.c > +++ b/drivers/crypto/qat/qat_sym_session.c > @@ -27,6 +27,7 @@ > #include <rte_crypto_sym.h> > #ifdef RTE_LIB_SECURITY > #include <rte_security_driver.h> > +#include <rte_ether.h> > #endif > > #include "qat_logs.h" > @@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void) > > extern int qat_ipsec_mb_lib; > > +#define ETH_CRC32_POLYNOMIAL 0x04c11db7 > +#define ETH_CRC32_INIT_VAL 0xffffffff > +#define ETH_CRC32_XOR_OUT 0xffffffff > +#define ETH_CRC32_POLYNOMIAL_BE RTE_BE32(ETH_CRC32_POLYNOMIAL) > +#define ETH_CRC32_INIT_VAL_BE RTE_BE32(ETH_CRC32_INIT_VAL) > +#define ETH_CRC32_XOR_OUT_BE RTE_BE32(ETH_CRC32_XOR_OUT) > + > /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ > static const uint8_t sha1InitialState[] = { > 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba, > @@ -115,6 +123,10 @@ 
qat_sym_cd_cipher_set(struct qat_sym_session *cd, > const uint8_t *enckey, > uint32_t enckeylen); > > +static int > +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, > + enum qat_device_gen qat_dev_gen); > + > static int > qat_sym_cd_auth_set(struct qat_sym_session *cdesc, > const uint8_t *authkey, > @@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc, > uint32_t aad_length, > uint32_t digestsize, > unsigned int operation); > + > static void > qat_sym_session_init_common_hdr(struct qat_sym_session *session); > > @@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev > *dev, > case ICP_QAT_FW_LA_CMD_MGF1: > case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP: > case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP: > + case ICP_QAT_FW_LA_CMD_CIPHER_CRC: > case ICP_QAT_FW_LA_CMD_DELIMITER: > QAT_LOG(ERR, "Unsupported Service %u", > session->qat_cmd); > @@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev > *dev, > (void *)session); > } > > +int > +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, > + rte_iova_t session_paddr, > + const uint8_t *cipherkey, > + uint32_t cipherkeylen, > + enum qat_device_gen qat_dev_gen) > +{ > + int ret; > + > + /* Set content descriptor physical address */ > + session->cd_paddr = session_paddr + > + offsetof(struct qat_sym_session, cd); > + > + /* Set up some pre-requisite variables */ > + session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; > + session->is_ucs = 0; > + session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC; > + session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE; > + session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128; > + session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; > + session->is_auth = 1; > + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; > + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; > + session->auth_op = ICP_QAT_HW_AUTH_GENERATE; > + session->digest_length = RTE_ETHER_CRC_LEN; > + > + ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen); > + if (ret < 0) > + 
return -EINVAL; > + > + ret = qat_sym_cd_crc_set(session, qat_dev_gen); > + if (ret < 0) > + return -EINVAL; > + > + qat_sym_session_finalize(session); > + > + return 0; > +} > + > static int > qat_sym_session_handle_single_pass(struct qat_sym_session *session, > const struct rte_crypto_aead_xform *aead_xform) > @@ -697,7 +750,7 @@ qat_sym_session_configure_auth(struct rte_cryptodev > *dev, > switch (auth_xform->algo) { > case RTE_CRYPTO_AUTH_SM3: > session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SM3; > - session->auth_mode = ICP_QAT_HW_AUTH_MODE0; > + session->auth_mode = ICP_QAT_HW_AUTH_MODE2; > break; > case RTE_CRYPTO_AUTH_SHA1: > session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_SHA1; > @@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session > *cdesc, > ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl, > ICP_QAT_FW_SLICE_DRAM_WR); > cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; > + } else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { > + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; > + cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd; > } else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) { > QAT_LOG(ERR, "Invalid param, must be a cipher command."); > return -EFAULT; > @@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct > rte_security_session_conf *conf) > return -EINVAL; > } > > +static int > +qat_sym_cd_crc_set(struct qat_sym_session *cdesc, > + enum qat_device_gen qat_dev_gen) > +{ > + struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2; > + struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3; > + struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4; > + struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req; > + struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl- > >cd_pars; > + void *ptr = &req_tmpl->cd_ctrl; > + struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr; > + struct icp_qat_fw_la_auth_req_params *crc_param = > + (struct icp_qat_fw_la_auth_req_params *) > + ((char *)&req_tmpl->serv_specif_rqpars + > + > 
ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); > + struct icp_qat_fw_ucs_slice_cipher_config crc_cfg; > + uint16_t crc_cfg_offset, cd_size; > + > + crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd); > + > + switch (qat_dev_gen) { > + case QAT_GEN2: > + crc_cd_gen2 = > + (struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr; > + crc_cd_gen2->flags = 0; > + crc_cd_gen2->initial_crc = 0; > + memset(&crc_cd_gen2->reserved1, > + 0, > + sizeof(crc_cd_gen2->reserved1)); > + memset(&crc_cd_gen2->reserved2, > + 0, > + sizeof(crc_cd_gen2->reserved2)); > + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd); > + break; > + case QAT_GEN3: > + crc_cd_gen3 = > + (struct icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr; > + crc_cd_gen3->flags = > ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1); > + crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL; > + crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL; > + crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT; > + memset(&crc_cd_gen3->reserved1, > + 0, > + sizeof(crc_cd_gen3->reserved1)); > + memset(&crc_cd_gen3->reserved2, > + 0, > + sizeof(crc_cd_gen3->reserved2)); > + crc_cd_gen3->reserved3 = 0; > + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd); > + break; > + case QAT_GEN4: > + crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE; > + crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL; > + crc_cfg.hash_cmp_val = 0; > + crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT; > + crc_cfg.associated_data_len_in_bytes = 0; > + crc_cfg.crc_reflect_out = > + > ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED; > + crc_cfg.crc_reflect_in = > + > ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED; > + crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32; > + > + crc_cd_gen4 = > + (struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr; > + crc_cd_gen4->ucs_config[0] = > + > ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg); > + crc_cd_gen4->ucs_config[1] = > + > ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg); > + crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE; > + 
crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE; > + crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE; > + crc_cd_gen4->reserved1 = 0; > + crc_cd_gen4->reserved2 = 0; > + crc_cd_gen4->reserved3 = 0; > + cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd); > + break; > + default: > + return -EINVAL; > + } > + > + crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3; > + crc_cd_ctrl->hash_flags = > ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED; > + crc_cd_ctrl->inner_res_sz = cdesc->digest_length; > + crc_cd_ctrl->final_sz = cdesc->digest_length; > + crc_cd_ctrl->inner_state1_sz = 0; > + crc_cd_ctrl->inner_state2_sz = 0; > + crc_cd_ctrl->inner_state2_offset = 0; > + crc_cd_ctrl->outer_prefix_sz = 0; > + crc_cd_ctrl->outer_config_offset = 0; > + crc_cd_ctrl->outer_state1_sz = 0; > + crc_cd_ctrl->outer_res_sz = 0; > + crc_cd_ctrl->outer_prefix_offset = 0; > + > + crc_param->auth_res_sz = cdesc->digest_length; > + crc_param->u2.aad_sz = 0; > + crc_param->hash_state_sz = 0; > + > + cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd; > + cd_pars->u.s.content_desc_addr = cdesc->cd_paddr; > + cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> > 3; > + > + return 0; > +} > + > +static int > +qat_sym_session_configure_crc(struct rte_cryptodev *dev, > + const struct rte_crypto_sym_xform *cipher_xform, > + struct qat_sym_session *session) > +{ > + struct qat_cryptodev_private *internals = dev->data->dev_private; > + enum qat_device_gen qat_dev_gen = internals->qat_dev- > >qat_dev_gen; > + int ret; > + > + session->is_auth = 1; > + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; > + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; > + session->auth_op = cipher_xform->cipher.op == > + RTE_CRYPTO_CIPHER_OP_ENCRYPT ? 
> + ICP_QAT_HW_AUTH_GENERATE : > + ICP_QAT_HW_AUTH_VERIFY; > + session->digest_length = RTE_ETHER_CRC_LEN; > + > + ret = qat_sym_cd_crc_set(session, qat_dev_gen); > + if (ret < 0) > + return ret; > + > + return 0; > +} > + > static int > qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev, > struct rte_security_session_conf *conf, void *session_private, > @@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct > rte_cryptodev *dev, > if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) { > QAT_LOG(ERR, "Unsupported xform chain requested"); > return -ENOTSUP; > + } else if (internals->internal_capabilities > + & QAT_SYM_CAP_CIPHER_CRC) { > + qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC; > } > session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id; > > ret = qat_sym_session_configure_cipher(dev, xform, session); > if (ret < 0) > return ret; > + > + if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { > + ret = qat_sym_session_configure_crc(dev, xform, session); > + if (ret < 0) > + return ret; > + } > qat_sym_session_finalize(session); > > return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev, > diff --git a/drivers/crypto/qat/qat_sym_session.h > b/drivers/crypto/qat/qat_sym_session.h > index 6322d7e3bc..9b5d11ac88 100644 > --- a/drivers/crypto/qat/qat_sym_session.h > +++ b/drivers/crypto/qat/qat_sym_session.h > @@ -46,6 +46,12 @@ > ICP_QAT_HW_CIPHER_KEY_CONVERT, > \ > ICP_QAT_HW_CIPHER_DECRYPT) > > +#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \ > + (((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \ > + QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \ > + ((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \ > + QAT_GEN3_COMP_REFLECT_OUT_BITPOS)) > + > #define QAT_AES_CMAC_CONST_RB 0x87 > > #define QAT_CRYPTO_SLICE_SPC 1 > @@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, > struct qat_sym_session *ctx, > /* Common content descriptor */ > struct qat_sym_cd { > struct icp_qat_hw_cipher_algo_blk cipher; > - struct 
icp_qat_hw_auth_algo_blk hash; > + union { > + struct icp_qat_hw_auth_algo_blk hash; > + struct icp_qat_hw_gen2_crc_cd crc_gen2; > + struct icp_qat_hw_gen3_crc_cd crc_gen3; > + struct icp_qat_hw_gen4_crc_cd crc_gen4; > + }; > } __rte_packed __rte_cache_aligned; > > struct qat_sym_session { > @@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev, > unsigned int > qat_sym_session_get_private_size(struct rte_cryptodev *dev); > > +int > +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, > + rte_iova_t session_paddr, > + const uint8_t *cipherkey, > + uint32_t cipherkeylen, > + enum qat_device_gen qat_dev_gen); > + > void > qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session, > struct icp_qat_fw_comn_req_hdr > *header, > enum qat_sym_proto_flag > proto_flags); > + > int > qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg); > int > -- > 2.34.1 > > -------------------------------------------------------------- > Intel Research and Development Ireland Limited > Registered in Ireland > Registered Office: Collinstown Industrial Park, Leixlip, County Kildare > Registered Number: 308263 > > > This e-mail and any attachments may contain confidential material for the sole > use of the intended recipient(s). Any review or distribution by others is > strictly prohibited. If you are not the intended recipient, please contact the > sender and delete all copies. ^ permalink raw reply [flat|nested] 17+ messages in thread
* RE: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support 2023-03-16 19:15 ` [EXT] " Akhil Goyal @ 2023-03-20 16:28 ` O'Sullivan, Kevin 0 siblings, 0 replies; 17+ messages in thread From: O'Sullivan, Kevin @ 2023-03-20 16:28 UTC (permalink / raw) To: Akhil Goyal, dev; +Cc: Ji, Kai, Coyle, David Hi Akhil Comments below Best regards Kevin > -----Original Message----- > From: Akhil Goyal <gakhil@marvell.com> > Sent: Thursday 16 March 2023 19:15 > To: O'Sullivan, Kevin <kevin.osullivan@intel.com>; dev@dpdk.org > Cc: Ji, Kai <kai.ji@intel.com>; Coyle, David <david.coyle@intel.com> > Subject: RE: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload support > > > Subject: [EXT] [PATCH v3 2/2] crypto/qat: add cipher-crc offload > > support > > > Update title as > crypto/qat: support cipher-crc offload > > > This patch adds support to the QAT symmetric crypto PMD for combined > > cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 > > QAT devices. > > > > A new parameter called qat_sym_cipher_crc_enable has been added to > the > > PMD, which can be set on process start as follows: > > A new devarg called .... <kos> Sure, I will make this change. > > > > > -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 > > > > When enabled, a capability check for the combined cipher-crc offload > > feature is triggered to the QAT firmware during queue pair > > initialization. If supported by the firmware, any subsequent runtime > > DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the > > QAT device by setting up the content descriptor and request > > accordingly. > > > > If the combined DOCSIS cipher-crc feature is not supported by the > > firmware, the CRC continues to be calculated within the PMD, with just > > the cipher portion of the request being offloaded to the QAT device. 
> > > > Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> > > Signed-off-by: David Coyle <david.coyle@intel.com> > > --- > > v3: updated the file qat.rst with details of new configuration > > --- > > doc/guides/cryptodevs/qat.rst | 23 +++ > > drivers/common/qat/qat_device.c | 12 +- > > drivers/common/qat/qat_device.h | 3 +- > > drivers/common/qat/qat_qp.c | 157 +++++++++++++++ > > drivers/common/qat/qat_qp.h | 5 + > > drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- > > drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- > > drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + > > drivers/crypto/qat/qat_crypto.c | 22 ++- > > drivers/crypto/qat/qat_crypto.h | 1 + > > drivers/crypto/qat/qat_sym.c | 4 + > > drivers/crypto/qat/qat_sym.h | 7 +- > > drivers/crypto/qat/qat_sym_session.c | 196 ++++++++++++++++++- > > drivers/crypto/qat/qat_sym_session.h | 21 +- > > 14 files changed, 465 insertions(+), 16 deletions(-) > > > > diff --git a/doc/guides/cryptodevs/qat.rst > > b/doc/guides/cryptodevs/qat.rst index ef754106a8..32e0d8a562 100644 > > --- a/doc/guides/cryptodevs/qat.rst > > +++ b/doc/guides/cryptodevs/qat.rst > > @@ -294,6 +294,29 @@ by comma. When the same parameter is used > more > > than once first occurrence of the is used. > > Maximum threshold that can be set is 32. > > > > + > > +Running QAT PMD with Cipher-CRC offload feature > > +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > + > > +Support has been added to the QAT symmetric crypto PMD for combined > > Cipher-CRC offload, > > +primarily for the Crypto-CRC DOCSIS security protocol, on > > +GEN2/GEN3/GEN4 > > QAT devices. > > + > > +The following parameter enables a Cipher-CRC offload capability check > > +to > > determine > > +if the feature is supported on the QAT device. > > + > > +- qat_sym_cipher_crc_enable > > Use the word devarg to make it uniform across DPDK. <kos> Sure, I will make this change. 
> > > > > + > > +When enabled, a capability check for the combined Cipher-CRC offload > > +feature > > is triggered > > +to the QAT firmware during queue pair initialization. If supported by > > +the > > firmware, > > +any subsequent runtime Crypto-CRC DOCSIS security protocol requests > > +handled > > by the QAT PMD > > +are offloaded to the QAT device by setting up the content descriptor > > +and > > request accordingly. > > +If not supported, the CRC is calculated by the QAT PMD using the NET CRC > API. > > + > > +To use this feature the user must set the parameter on process start > > +as a device > > additional parameter:: > > + > > + -a 03:01.1,qat_sym_cipher_crc_enable=1 > > + > > + > > Running QAT PMD with Intel IPSEC MB library for symmetric precomputes > > function > > > > > ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > ~~~~~~~ > > ~~~~~~~~~~~~~ > > > > diff --git a/drivers/common/qat/qat_device.c > > b/drivers/common/qat/qat_device.c index 8bce2ac073..308c59c39f 100644 > > --- a/drivers/common/qat/qat_device.c > > +++ b/drivers/common/qat/qat_device.c > > @@ -149,7 +149,16 @@ qat_dev_parse_cmd(const char *str, struct > > qat_dev_cmd_param > > } else { > > memcpy(value_str, arg2, iter); > > value = strtol(value_str, NULL, 10); > > - if (value > MAX_QP_THRESHOLD_SIZE) { > > + if (strcmp(param, > > + SYM_CIPHER_CRC_ENABLE_NAME) > == > > 0) { > > + if (value < 0 || value > 1) { > > + QAT_LOG(DEBUG, "The value > > for" > > + " > qat_sym_cipher_crc_enable" > > + " should be set to 0 or 1," > > + " setting to 0"); > > Do not split printable strings across multiple lines even if it cross max limit. > Fix this across the patch. > Moreover max limit is also increased from 80 -> 100 <kos> Ok, thanks for the info, I will move all the parts of string to same line . 
> > > > + value = 0; > > + } > > + } else if (value > MAX_QP_THRESHOLD_SIZE) > { > > QAT_LOG(DEBUG, "Exceeded max > size of" > > " threshold, setting to %d", > > MAX_QP_THRESHOLD_SIZE); > > @@ -369,6 +378,7 @@ static int qat_pci_probe(struct rte_pci_driver > > *pci_drv __rte_unused, > > { SYM_ENQ_THRESHOLD_NAME, 0 }, > > { ASYM_ENQ_THRESHOLD_NAME, 0 }, > > { COMP_ENQ_THRESHOLD_NAME, 0 }, > > + { SYM_CIPHER_CRC_ENABLE_NAME, 0 }, > > [QAT_CMD_SLICE_MAP_POS] = { > > QAT_CMD_SLICE_MAP, 0}, > > { NULL, 0 }, > > }; > > diff --git a/drivers/common/qat/qat_device.h > > b/drivers/common/qat/qat_device.h index bc3da04238..4188474dde > 100644 > > --- a/drivers/common/qat/qat_device.h > > +++ b/drivers/common/qat/qat_device.h > > @@ -21,8 +21,9 @@ > > #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold" > > #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold" > > #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold" > > +#define SYM_CIPHER_CRC_ENABLE_NAME > "qat_sym_cipher_crc_enable" > > #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable" > > -#define QAT_CMD_SLICE_MAP_POS 4 > > +#define QAT_CMD_SLICE_MAP_POS 5 > > #define MAX_QP_THRESHOLD_SIZE 32 > > > > /** > > diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c > > index 9cbd19a481..1ce89c265f 100644 > > --- a/drivers/common/qat/qat_qp.c > > +++ b/drivers/common/qat/qat_qp.c > > @@ -11,6 +11,9 @@ > > #include <bus_pci_driver.h> > > #include <rte_atomic.h> > > #include <rte_prefetch.h> > > +#ifdef RTE_LIB_SECURITY > > +#include <rte_ether.h> > > +#endif > > > > #include "qat_logs.h" > > #include "qat_device.h" > > @@ -957,6 +960,160 @@ qat_cq_get_fw_version(struct qat_qp *qp) > > return -EINVAL; > > } > > > > +#ifdef BUILD_QAT_SYM > > Where is this defined? Even no documentation about when to > enable/disable it. <kos> This is an existing cflag set in the meson.build to compile the QAT code for symmetric sessions. 
I have used this existing ifdef around my code also as it is only applicable for symmetric sessions. Extract from meson.build below if qat_crypto foreach f: ['qat_sym.c', 'qat_sym_session.c', 'qat_asym.c', 'qat_crypto.c', 'dev/qat_sym_pmd_gen1.c', 'dev/qat_asym_pmd_gen1.c', 'dev/qat_crypto_pmd_gen2.c', 'dev/qat_crypto_pmd_gen3.c', 'dev/qat_crypto_pmd_gen4.c', ] sources += files(join_paths(qat_crypto_relpath, f)) endforeach deps += ['security'] ext_deps += libcrypto cflags += ['-DBUILD_QAT_SYM', '-DBUILD_QAT_ASYM'] endif > > > > +/* Sends an LA bulk req message to determine if a QAT device supports > > +Cipher- > > CRC > > + * offload. This assumes that there are no inflight messages, i.e. > > +assumes > > + * there's space on the qp, one message is sent and only one > > +response > > + * collected. The status bit of the response and returned data are > checked. > > + * Returns: > > + * 1 if status bit indicates success and returned data matches expected > > + * data (i.e. Cipher-CRC supported) > > + * 0 if status bit indicates error or returned data does not match > expected > > + * data (i.e. 
Cipher-CRC not supported) > > + * Negative error code in case of error > > + */ > > +int > > +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp) { > > + struct qat_queue *queue = &(qp->tx_q); > > + uint8_t *base_addr = (uint8_t *)queue->base_addr; > > + struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}}; > > + struct icp_qat_fw_comn_resp response = {{0}}; > > + struct icp_qat_fw_la_cipher_req_params *cipher_param; > > + struct icp_qat_fw_la_auth_req_params *auth_param; > > + struct qat_sym_session *session; > > + phys_addr_t phy_src_addr; > > + uint64_t *src_data_addr; > > + int ret; > > + uint8_t cipher_offset = 18; > > + uint8_t crc_offset = 6; > > + uint8_t ciphertext[34] = { > > + /* Outer protocol header */ > > + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, > > + /* Ethernet frame */ > > + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, > > + 0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C, > > + 0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6, > > + /* CRC */ > > + 0x54, 0x85, 0xF8, 0x32 > > + }; > > + uint8_t plaintext[34] = { > > + /* Outer protocol header */ > > + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, > > + /* Ethernet frame */ > > + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, > > + 0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA, > > + 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, > > + /* CRC */ > > + 0xFF, 0xFF, 0xFF, 0xFF > > + }; > > + uint8_t key[16] = { > > + 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, > > + 0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 > > + }; > > + uint8_t iv[16] = { > > + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, > > + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 > > + }; > > Is it not better to define them as macros? <kos> The arrays are only used locally by this function so that is why I added them here instead of using macros. Do you see merit in adding them as macros? 
> > > + > > + session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0); > > + if (session == NULL) > > + return -EINVAL; > > + > > + /* Verify the session physical address is known */ > > + rte_iova_t session_paddr = rte_mem_virt2iova(session); > > + if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) { > > + QAT_LOG(ERR, "Session physical address unknown."); > > + return -EINVAL; > > + } > > + > > + /* Prepare the LA bulk request */ > > + ret = qat_cipher_crc_cap_msg_sess_prepare(session, > > + session_paddr, > > + key, > > + sizeof(key), > > + qp->qat_dev_gen); > > + if (ret < 0) { > > + rte_free(session); > > + /* Returning 0 here to allow qp setup to continue, but > > + * indicate that Cipher-CRC offload is not supported on the > > + * device > > + */ > > + return 0; > > + } > > + > > + cipher_crc_cap_msg = session->fw_req; > > + > > + src_data_addr = rte_zmalloc(NULL, sizeof(plaintext), 0); > > + if (src_data_addr == NULL) { > > + rte_free(session); > > + return -EINVAL; > > + } > > + > > + rte_memcpy(src_data_addr, plaintext, sizeof(plaintext)); > > + > > + phy_src_addr = rte_mem_virt2iova(src_data_addr); > > + if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) { > > + QAT_LOG(ERR, "Source physical address unknown."); > > + return -EINVAL; > > + } > > + > > + cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr; > > + cipher_crc_cap_msg.comn_mid.src_length = sizeof(plaintext); > > + cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr; > > + cipher_crc_cap_msg.comn_mid.dst_length = sizeof(plaintext); > > + > > + cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars; > > + auth_param = (void *)((uint8_t *)cipher_param + > > + > ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); > > + > > + rte_memcpy(cipher_param->u.cipher_IV_array, iv, sizeof(iv)); > > + > > + cipher_param->cipher_offset = cipher_offset; > > + cipher_param->cipher_length = sizeof(plaintext) - cipher_offset; > > + auth_param->auth_off = crc_offset; > > + 
auth_param->auth_len = sizeof(plaintext) - > > + crc_offset - > > + RTE_ETHER_CRC_LEN; > > + > > + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( > > + cipher_crc_cap_msg.comn_hdr.serv_specif_flags, > > + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); > > + > > +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG > > + QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", > > &cipher_crc_cap_msg, > > + sizeof(cipher_crc_cap_msg)); > > +#endif > > + > > + /* Send the cipher_crc_cap_msg request */ > > + memcpy(base_addr + queue->tail, > > + &cipher_crc_cap_msg, > > + sizeof(cipher_crc_cap_msg)); > > + queue->tail = adf_modulo(queue->tail + queue->msg_size, > > + queue->modulo_mask); > > + txq_write_tail(qp->qat_dev_gen, qp, queue); > > + > > + /* Check for response and verify data is same as ciphertext */ > > + if (qat_cq_dequeue_response(qp, &response)) { #if > RTE_LOG_DP_LEVEL > > +>= RTE_LOG_DEBUG > > + QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", > &response, > > + sizeof(response)); > > +#endif > > + > > + if (memcmp(src_data_addr, ciphertext, sizeof(ciphertext)) != > 0) > > + ret = 0; /* Cipher-CRC offload not supported */ > > + else > > + ret = 1; > > + } else { > > + ret = -EINVAL; > > + } > > + > > + rte_free(src_data_addr); > > + rte_free(session); > > + return ret; > > +} > > +#endif > > + > > __rte_weak int > > qat_comp_process_response(void **op __rte_unused, uint8_t *resp > > __rte_unused, -------------------------------------------------------------- Intel Research and Development Ireland Limited Registered in Ireland Registered Office: Collinstown Industrial Park, Leixlip, County Kildare Registered Number: 308263 This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s). Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies. ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature 2023-03-13 14:26 ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan 2023-03-13 14:26 ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan @ 2023-04-18 13:39 ` Kevin O'Sullivan 2023-04-18 13:39 ` [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan ` (2 more replies) 2 siblings, 3 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-04-18 13:39 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan This patchset adds support to the QAT PMD for combined cipher-crc processing for DOCSIS on the QAT device. The current QAT PMD implementation of cipher-crc calculates CRC in software and uses QAT for encryption/decryption offload. Note: The previous code-path is still retained for QAT firmware versions without support for combined cipher-crc offload. - Support has been added to DPDK QAT PMD to enable the use of the cipher-crc offload feature on gen2/gen3/gen4 QAT devices. - A cipher-crc offload capability check has been added to the queue pair setup function to determine if the feature is supported on the QAT device. 
v1: * initial version v2: * fixed centos compilation error for missing braces around initializer v3: * updated the file qat.rst with details of new configuration v4: * updated v23.07 release note * moved cipher crc capability check test vectors to top of qat_qp.c and made the vectors static const * changed log string to be all on one line in qat_device.c * changed word parameter to devargs in qat.rst Kevin O'Sullivan (2): crypto/qat: add cipher-crc offload support to fw interface crypto/qat: support cipher-crc offload doc/guides/cryptodevs/qat.rst | 23 +++ doc/guides/rel_notes/release_23_07.rst | 4 + drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++ drivers/common/qat/qat_device.c | 9 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 177 +++++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 194 +++++++++++++++++++ drivers/crypto/qat/qat_sym_session.h | 21 +- 18 files changed, 620 insertions(+), 17 deletions(-) -- 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface 2023-04-18 13:39 ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan @ 2023-04-18 13:39 ` Kevin O'Sullivan 2023-04-18 13:39 ` [PATCH v4 2/2] crypto/qat: support cipher-crc offload Kevin O'Sullivan 2023-05-24 10:04 ` [EXT] [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Akhil Goyal 2 siblings, 0 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-04-18 13:39 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT firmware interface header files for the combined cipher-crc offload feature for DOCSIS on gen2/gen3/ gen4 QAT devices. The main change is that new structures have been added for the crc content descriptor for the various generations. Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> Acked-by: Kai Ji <kai.ji@intel.com> --- drivers/common/qat/qat_adf/icp_qat_fw.h | 1 - drivers/common/qat/qat_adf/icp_qat_fw_la.h | 3 +- drivers/common/qat/qat_adf/icp_qat_hw.h | 133 +++++++++++++++++++++ 3 files changed, 135 insertions(+), 2 deletions(-) diff --git a/drivers/common/qat/qat_adf/icp_qat_fw.h b/drivers/common/qat/qat_adf/icp_qat_fw.h index be10fc9bde..3aa17ae041 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw.h @@ -4,7 +4,6 @@ #ifndef _ICP_QAT_FW_H_ #define _ICP_QAT_FW_H_ #include <sys/types.h> -#include "icp_qat_hw.h" #define QAT_FIELD_SET(flags, val, bitpos, mask) \ { (flags) = (((flags) & (~((mask) << (bitpos)))) | \ diff --git a/drivers/common/qat/qat_adf/icp_qat_fw_la.h b/drivers/common/qat/qat_adf/icp_qat_fw_la.h index c4901eb869..227a6cebc8 100644 --- a/drivers/common/qat/qat_adf/icp_qat_fw_la.h +++ b/drivers/common/qat/qat_adf/icp_qat_fw_la.h @@ -18,7 +18,8 @@ enum icp_qat_fw_la_cmd_id { ICP_QAT_FW_LA_CMD_MGF1 = 9, ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP = 10, 
ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP = 11, - ICP_QAT_FW_LA_CMD_DELIMITER = 12 + ICP_QAT_FW_LA_CMD_CIPHER_CRC = 17, + ICP_QAT_FW_LA_CMD_DELIMITER = 18 }; #define ICP_QAT_FW_LA_ICV_VER_STATUS_PASS ICP_QAT_FW_COMN_STATUS_FLAG_OK diff --git a/drivers/common/qat/qat_adf/icp_qat_hw.h b/drivers/common/qat/qat_adf/icp_qat_hw.h index 866147cd77..8b864e1630 100644 --- a/drivers/common/qat/qat_adf/icp_qat_hw.h +++ b/drivers/common/qat/qat_adf/icp_qat_hw.h @@ -4,6 +4,8 @@ #ifndef _ICP_QAT_HW_H_ #define _ICP_QAT_HW_H_ +#include "icp_qat_fw.h" + #define ADF_C4XXXIOV_VFLEGFUSES_OFFSET 0x4C #define ADF1_C4XXXIOV_VFLEGFUSES_LEN 4 @@ -260,14 +262,19 @@ enum icp_qat_hw_cipher_convert { }; #define QAT_CIPHER_MODE_BITPOS 4 +#define QAT_CIPHER_MODE_LE_BITPOS 28 #define QAT_CIPHER_MODE_MASK 0xF #define QAT_CIPHER_ALGO_BITPOS 0 +#define QAT_CIPHER_ALGO_LE_BITPOS 24 #define QAT_CIPHER_ALGO_MASK 0xF #define QAT_CIPHER_CONVERT_BITPOS 9 +#define QAT_CIPHER_CONVERT_LE_BITPOS 17 #define QAT_CIPHER_CONVERT_MASK 0x1 #define QAT_CIPHER_DIR_BITPOS 8 +#define QAT_CIPHER_DIR_LE_BITPOS 16 #define QAT_CIPHER_DIR_MASK 0x1 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_BITPOS 10 +#define QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS 18 #define QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK 0x1F #define QAT_CIPHER_MODE_F8_KEY_SZ_MULT 2 #define QAT_CIPHER_MODE_XTS_KEY_SZ_MULT 2 @@ -281,7 +288,9 @@ enum icp_qat_hw_cipher_convert { #define QAT_CIPHER_AEAD_AAD_UPPER_SHIFT 8 #define QAT_CIPHER_AEAD_AAD_SIZE_LOWER_MASK 0xFF #define QAT_CIPHER_AEAD_AAD_SIZE_UPPER_MASK 0x3F +#define QAT_CIPHER_AEAD_AAD_SIZE_MASK 0x3FFF #define QAT_CIPHER_AEAD_AAD_SIZE_BITPOS 16 +#define QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS 0 #define ICP_QAT_HW_CIPHER_CONFIG_BUILD_UPPER(aad_size) \ ({ \ typeof(aad_size) aad_size1 = aad_size; \ @@ -362,6 +371,28 @@ struct icp_qat_hw_cipher_algo_blk { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +struct icp_qat_hw_gen2_crc_cd { + uint32_t flags; + uint32_t reserved1[5]; + uint32_t initial_crc; + uint32_t 
reserved2[3]; +}; + +#define QAT_GEN3_COMP_REFLECT_IN_BITPOS 17 +#define QAT_GEN3_COMP_REFLECT_IN_MASK 0x1 +#define QAT_GEN3_COMP_REFLECT_OUT_BITPOS 18 +#define QAT_GEN3_COMP_REFLECT_OUT_MASK 0x1 + +struct icp_qat_hw_gen3_crc_cd { + uint32_t flags; + uint32_t reserved1[3]; + uint32_t polynomial; + uint32_t xor_val; + uint32_t reserved2[2]; + uint32_t initial_crc; + uint32_t reserved3; +}; + struct icp_qat_hw_ucs_cipher_config { uint32_t val; uint32_t reserved[3]; @@ -372,6 +403,108 @@ struct icp_qat_hw_cipher_algo_blk20 { uint8_t key[ICP_QAT_HW_CIPHER_MAX_KEY_SZ]; } __rte_cache_aligned; +enum icp_qat_hw_ucs_cipher_reflect_out { + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_reflect_in { + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_DISABLED = 0, + ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED = 1, +}; + +enum icp_qat_hw_ucs_cipher_crc_encoding { + ICP_QAT_HW_CIPHER_UCS_CRC_NOT_REQUIRED = 0, + ICP_QAT_HW_CIPHER_UCS_CRC32 = 1, + ICP_QAT_HW_CIPHER_UCS_CRC64 = 2, +}; + +#define QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS 17 +#define QAT_CIPHER_UCS_REFLECT_OUT_MASK 0x1 +#define QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS 16 +#define QAT_CIPHER_UCS_REFLECT_IN_MASK 0x1 +#define QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS 14 +#define QAT_CIPHER_UCS_CRC_ENCODING_MASK 0x3 + +struct icp_qat_fw_ucs_slice_cipher_config { + enum icp_qat_hw_cipher_mode mode; + enum icp_qat_hw_cipher_algo algo; + uint16_t hash_cmp_val; + enum icp_qat_hw_cipher_dir dir; + uint16_t associated_data_len_in_bytes; + enum icp_qat_hw_ucs_cipher_reflect_out crc_reflect_out; + enum icp_qat_hw_ucs_cipher_reflect_in crc_reflect_in; + enum icp_qat_hw_ucs_cipher_crc_encoding crc_encoding; +}; + +struct icp_qat_hw_gen4_crc_cd { + uint32_t ucs_config[4]; + uint32_t polynomial; + uint32_t reserved1; + uint32_t xor_val; + uint32_t reserved2; + uint32_t initial_crc; + uint32_t reserved3; +}; + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER( + 
struct icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.mode, + QAT_CIPHER_MODE_LE_BITPOS, + QAT_CIPHER_MODE_MASK); + + QAT_FIELD_SET(val32, + csr.algo, + QAT_CIPHER_ALGO_LE_BITPOS, + QAT_CIPHER_ALGO_MASK); + + QAT_FIELD_SET(val32, + csr.hash_cmp_val, + QAT_CIPHER_AEAD_HASH_CMP_LEN_LE_BITPOS, + QAT_CIPHER_AEAD_HASH_CMP_LEN_MASK); + + QAT_FIELD_SET(val32, + csr.dir, + QAT_CIPHER_DIR_LE_BITPOS, + QAT_CIPHER_DIR_MASK); + + return rte_bswap32(val32); +} + +static inline uint32_t +ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER( + struct icp_qat_fw_ucs_slice_cipher_config csr) +{ + uint32_t val32 = 0; + + QAT_FIELD_SET(val32, + csr.associated_data_len_in_bytes, + QAT_CIPHER_AEAD_AAD_SIZE_LE_BITPOS, + QAT_CIPHER_AEAD_AAD_SIZE_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_out, + QAT_CIPHER_UCS_REFLECT_OUT_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_OUT_MASK); + + QAT_FIELD_SET(val32, + csr.crc_reflect_in, + QAT_CIPHER_UCS_REFLECT_IN_LE_BITPOS, + QAT_CIPHER_UCS_REFLECT_IN_MASK); + + QAT_FIELD_SET(val32, + csr.crc_encoding, + QAT_CIPHER_UCS_CRC_ENCODING_LE_BITPOS, + QAT_CIPHER_UCS_CRC_ENCODING_MASK); + + return rte_bswap32(val32); +} + /* ========================================================================= */ /* COMPRESSION SLICE */ /* ========================================================================= */ -- 2.34.1 ^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH v4 2/2] crypto/qat: support cipher-crc offload 2023-04-18 13:39 ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan 2023-04-18 13:39 ` [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan @ 2023-04-18 13:39 ` Kevin O'Sullivan 2023-05-24 10:04 ` [EXT] [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Akhil Goyal 2 siblings, 0 replies; 17+ messages in thread From: Kevin O'Sullivan @ 2023-04-18 13:39 UTC (permalink / raw) To: dev; +Cc: kai.ji, Kevin O'Sullivan, David Coyle This patch adds support to the QAT symmetric crypto PMD for combined cipher-crc offload feature, primarily for DOCSIS, on gen2/gen3/gen4 QAT devices. A new devarg called qat_sym_cipher_crc_enable has been added to the PMD, which can be set on process start as follows: -a <qat pci bdf>,qat_sym_cipher_crc_enable=1 When enabled, a capability check for the combined cipher-crc offload feature is triggered to the QAT firmware during queue pair initialization. If supported by the firmware, any subsequent runtime DOCSIS cipher-crc requests handled by the QAT PMD are offloaded to the QAT device by setting up the content descriptor and request accordingly. If the combined DOCSIS cipher-crc feature is not supported by the firmware, the CRC continues to be calculated within the PMD, with just the cipher portion of the request being offloaded to the QAT device. 
Signed-off-by: Kevin O'Sullivan <kevin.osullivan@intel.com> Signed-off-by: David Coyle <david.coyle@intel.com> Acked-by: Kai Ji <kai.ji@intel.com> --- doc/guides/cryptodevs/qat.rst | 23 +++ doc/guides/rel_notes/release_23_07.rst | 4 + drivers/common/qat/qat_device.c | 9 +- drivers/common/qat/qat_device.h | 3 +- drivers/common/qat/qat_qp.c | 177 +++++++++++++++++ drivers/common/qat/qat_qp.h | 5 + drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c | 2 +- drivers/crypto/qat/dev/qat_crypto_pmd_gens.h | 24 ++- drivers/crypto/qat/dev/qat_sym_pmd_gen1.c | 4 + drivers/crypto/qat/qat_crypto.c | 22 ++- drivers/crypto/qat/qat_crypto.h | 1 + drivers/crypto/qat/qat_sym.c | 4 + drivers/crypto/qat/qat_sym.h | 7 +- drivers/crypto/qat/qat_sym_session.c | 194 +++++++++++++++++++ drivers/crypto/qat/qat_sym_session.h | 21 +- 15 files changed, 485 insertions(+), 15 deletions(-) diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst index ef754106a8..a4a25711ed 100644 --- a/doc/guides/cryptodevs/qat.rst +++ b/doc/guides/cryptodevs/qat.rst @@ -294,6 +294,29 @@ by comma. When the same parameter is used more than once first occurrence of the is used. Maximum threshold that can be set is 32. + +Running QAT PMD with Cipher-CRC offload feature +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Support has been added to the QAT symmetric crypto PMD for combined Cipher-CRC offload, +primarily for the Crypto-CRC DOCSIS security protocol, on GEN2/GEN3/GEN4 QAT devices. + +The following devarg enables a Cipher-CRC offload capability check to determine +if the feature is supported on the QAT device. + +- qat_sym_cipher_crc_enable + +When enabled, a capability check for the combined Cipher-CRC offload feature is triggered +to the QAT firmware during queue pair initialization. 
If supported by the firmware, +any subsequent runtime Crypto-CRC DOCSIS security protocol requests handled by the QAT PMD +are offloaded to the QAT device by setting up the content descriptor and request accordingly. +If not supported, the CRC is calculated by the QAT PMD using the NET CRC API. + +To use this feature, the user must set the devarg on process start as an additional device devarg:: + + -a 03:01.1,qat_sym_cipher_crc_enable=1 + + Running QAT PMD with Intel IPSEC MB library for symmetric precomputes function ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_07.rst b/doc/guides/rel_notes/release_23_07.rst index a9b1293689..a9c8311127 100644 --- a/doc/guides/rel_notes/release_23_07.rst +++ b/doc/guides/rel_notes/release_23_07.rst @@ -55,6 +55,10 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Updated Intel QuickAssist Technology (QAT) crypto driver.** + + Added support for combined Cipher-CRC offload for DOCSIS for QAT GENs 2, 3 and 4. 
+ Removed Items ------------- diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c index 8bce2ac073..0479175b65 100644 --- a/drivers/common/qat/qat_device.c +++ b/drivers/common/qat/qat_device.c @@ -149,7 +149,13 @@ qat_dev_parse_cmd(const char *str, struct qat_dev_cmd_param } else { memcpy(value_str, arg2, iter); value = strtol(value_str, NULL, 10); - if (value > MAX_QP_THRESHOLD_SIZE) { + if (strcmp(param, + SYM_CIPHER_CRC_ENABLE_NAME) == 0) { + if (value < 0 || value > 1) { + QAT_LOG(DEBUG, "The value for qat_sym_cipher_crc_enable should be set to 0 or 1, setting to 0"); + value = 0; + } + } else if (value > MAX_QP_THRESHOLD_SIZE) { QAT_LOG(DEBUG, "Exceeded max size of" " threshold, setting to %d", MAX_QP_THRESHOLD_SIZE); @@ -369,6 +375,7 @@ static int qat_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, { SYM_ENQ_THRESHOLD_NAME, 0 }, { ASYM_ENQ_THRESHOLD_NAME, 0 }, { COMP_ENQ_THRESHOLD_NAME, 0 }, + { SYM_CIPHER_CRC_ENABLE_NAME, 0 }, [QAT_CMD_SLICE_MAP_POS] = { QAT_CMD_SLICE_MAP, 0}, { NULL, 0 }, }; diff --git a/drivers/common/qat/qat_device.h b/drivers/common/qat/qat_device.h index bc3da04238..4188474dde 100644 --- a/drivers/common/qat/qat_device.h +++ b/drivers/common/qat/qat_device.h @@ -21,8 +21,9 @@ #define SYM_ENQ_THRESHOLD_NAME "qat_sym_enq_threshold" #define ASYM_ENQ_THRESHOLD_NAME "qat_asym_enq_threshold" #define COMP_ENQ_THRESHOLD_NAME "qat_comp_enq_threshold" +#define SYM_CIPHER_CRC_ENABLE_NAME "qat_sym_cipher_crc_enable" #define QAT_CMD_SLICE_MAP "qat_cmd_slice_disable" -#define QAT_CMD_SLICE_MAP_POS 4 +#define QAT_CMD_SLICE_MAP_POS 5 #define MAX_QP_THRESHOLD_SIZE 32 /** diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c index 9cbd19a481..348a1d574d 100644 --- a/drivers/common/qat/qat_qp.c +++ b/drivers/common/qat/qat_qp.c @@ -11,6 +11,9 @@ #include <bus_pci_driver.h> #include <rte_atomic.h> #include <rte_prefetch.h> +#ifdef RTE_LIB_SECURITY +#include <rte_ether.h> +#endif #include "qat_logs.h" 
#include "qat_device.h" @@ -24,6 +27,44 @@ #define ADF_MAX_DESC 4096 #define ADF_MIN_DESC 128 +#ifdef BUILD_QAT_SYM +/* Cipher-CRC capability check test parameters */ +static const uint8_t cipher_crc_cap_check_iv[] = { + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, + 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11, 0x11 +}; + +static const uint8_t cipher_crc_cap_check_key[] = { + 0x00, 0x00, 0x00, 0x00, 0xAA, 0xBB, 0xCC, 0xDD, + 0xEE, 0xFF, 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 +}; + +static const uint8_t cipher_crc_cap_check_plaintext[] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0x08, 0x00, 0xAA, 0xAA, + 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, 0xAA, + /* CRC */ + 0xFF, 0xFF, 0xFF, 0xFF +}; + +static const uint8_t cipher_crc_cap_check_ciphertext[] = { + /* Outer protocol header */ + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + /* Ethernet frame */ + 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x06, 0x05, + 0x04, 0x03, 0x02, 0x01, 0xD6, 0xE2, 0x70, 0x5C, + 0xE6, 0x4D, 0xCC, 0x8C, 0x47, 0xB7, 0x09, 0xD6, + /* CRC */ + 0x54, 0x85, 0xF8, 0x32 +}; + +static const uint8_t cipher_crc_cap_check_cipher_offset = 18; +static const uint8_t cipher_crc_cap_check_crc_offset = 6; +#endif + struct qat_qp_hw_spec_funcs* qat_qp_hw_spec[QAT_N_GENS]; @@ -957,6 +998,142 @@ qat_cq_get_fw_version(struct qat_qp *qp) return -EINVAL; } +#ifdef BUILD_QAT_SYM +/* Sends an LA bulk req message to determine if a QAT device supports Cipher-CRC + * offload. This assumes that there are no inflight messages, i.e. assumes + * there's space on the qp, one message is sent and only one response + * collected. The status bit of the response and returned data are checked. + * Returns: + * 1 if status bit indicates success and returned data matches expected + * data (i.e. Cipher-CRC supported) + * 0 if status bit indicates error or returned data does not match expected + * data (i.e. 
Cipher-CRC not supported) + * Negative error code in case of error + */ +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp) +{ + struct qat_queue *queue = &(qp->tx_q); + uint8_t *base_addr = (uint8_t *)queue->base_addr; + struct icp_qat_fw_la_bulk_req cipher_crc_cap_msg = {{0}}; + struct icp_qat_fw_comn_resp response = {{0}}; + struct icp_qat_fw_la_cipher_req_params *cipher_param; + struct icp_qat_fw_la_auth_req_params *auth_param; + struct qat_sym_session *session; + phys_addr_t phy_src_addr; + uint64_t *src_data_addr; + int ret; + + session = rte_zmalloc(NULL, sizeof(struct qat_sym_session), 0); + if (session == NULL) + return -EINVAL; + + /* Verify the session physical address is known */ + rte_iova_t session_paddr = rte_mem_virt2iova(session); + if (session_paddr == 0 || session_paddr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Session physical address unknown."); + return -EINVAL; + } + + /* Prepare the LA bulk request */ + ret = qat_cipher_crc_cap_msg_sess_prepare(session, + session_paddr, + cipher_crc_cap_check_key, + sizeof(cipher_crc_cap_check_key), + qp->qat_dev_gen); + if (ret < 0) { + rte_free(session); + /* Returning 0 here to allow qp setup to continue, but + * indicate that Cipher-CRC offload is not supported on the + * device + */ + return 0; + } + + cipher_crc_cap_msg = session->fw_req; + + src_data_addr = rte_zmalloc(NULL, + sizeof(cipher_crc_cap_check_plaintext), + 0); + if (src_data_addr == NULL) { + rte_free(session); + return -EINVAL; + } + + rte_memcpy(src_data_addr, + cipher_crc_cap_check_plaintext, + sizeof(cipher_crc_cap_check_plaintext)); + + phy_src_addr = rte_mem_virt2iova(src_data_addr); + if (phy_src_addr == 0 || phy_src_addr == RTE_BAD_IOVA) { + QAT_LOG(ERR, "Source physical address unknown."); + return -EINVAL; + } + + cipher_crc_cap_msg.comn_mid.src_data_addr = phy_src_addr; + cipher_crc_cap_msg.comn_mid.src_length = + sizeof(cipher_crc_cap_check_plaintext); + cipher_crc_cap_msg.comn_mid.dest_data_addr = phy_src_addr; + 
cipher_crc_cap_msg.comn_mid.dst_length = + sizeof(cipher_crc_cap_check_plaintext); + + cipher_param = (void *)&cipher_crc_cap_msg.serv_specif_rqpars; + auth_param = (void *)((uint8_t *)cipher_param + + ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET); + + rte_memcpy(cipher_param->u.cipher_IV_array, + cipher_crc_cap_check_iv, + sizeof(cipher_crc_cap_check_iv)); + + cipher_param->cipher_offset = cipher_crc_cap_check_cipher_offset; + cipher_param->cipher_length = + sizeof(cipher_crc_cap_check_plaintext) - + cipher_crc_cap_check_cipher_offset; + auth_param->auth_off = cipher_crc_cap_check_crc_offset; + auth_param->auth_len = sizeof(cipher_crc_cap_check_plaintext) - + cipher_crc_cap_check_crc_offset - + RTE_ETHER_CRC_LEN; + + ICP_QAT_FW_LA_DIGEST_IN_BUFFER_SET( + cipher_crc_cap_msg.comn_hdr.serv_specif_flags, + ICP_QAT_FW_LA_DIGEST_IN_BUFFER); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "LA Bulk request", &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); +#endif + + /* Send the cipher_crc_cap_msg request */ + memcpy(base_addr + queue->tail, + &cipher_crc_cap_msg, + sizeof(cipher_crc_cap_msg)); + queue->tail = adf_modulo(queue->tail + queue->msg_size, + queue->modulo_mask); + txq_write_tail(qp->qat_dev_gen, qp, queue); + + /* Check for response and verify data is same as ciphertext */ + if (qat_cq_dequeue_response(qp, &response)) { +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "LA response:", &response, + sizeof(response)); +#endif + + if (memcmp(src_data_addr, + cipher_crc_cap_check_ciphertext, + sizeof(cipher_crc_cap_check_ciphertext)) != 0) + ret = 0; /* Cipher-CRC offload not supported */ + else + ret = 1; + } else { + ret = -EINVAL; + } + + rte_free(src_data_addr); + rte_free(session); + return ret; +} +#endif + __rte_weak int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, void *op_cookie __rte_unused, diff --git a/drivers/common/qat/qat_qp.h b/drivers/common/qat/qat_qp.h index 
66f00943a5..d19fc387e4 100644 --- a/drivers/common/qat/qat_qp.h +++ b/drivers/common/qat/qat_qp.h @@ -153,6 +153,11 @@ qat_qp_get_hw_data(struct qat_pci_device *qat_dev, int qat_cq_get_fw_version(struct qat_qp *qp); +#ifdef BUILD_QAT_SYM +int +qat_cq_get_fw_cipher_crc_cap(struct qat_qp *qp); +#endif + /* Needed for weak function*/ int qat_comp_process_response(void **op __rte_unused, uint8_t *resp __rte_unused, diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c index 60ca0fc0d2..1f3e2b1d99 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen2.c @@ -163,7 +163,7 @@ qat_sym_crypto_qp_setup_gen2(struct rte_cryptodev *dev, uint16_t qp_id, QAT_LOG(DEBUG, "unknown QAT firmware version"); /* set capabilities based on the fw version */ - qat_sym_private->internal_capabilities = QAT_SYM_CAP_VALID | + qat_sym_private->internal_capabilities |= QAT_SYM_CAP_VALID | ((ret >= MIXED_CRYPTO_MIN_FW_VER) ? QAT_SYM_CAP_MIXED_CRYPTO : 0); return 0; diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h index 524c291340..70942906ea 100644 --- a/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gens.h @@ -399,8 +399,13 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, cipher_ofs = op->sym->cipher.data.offset >> 3; break; case 0: - cipher_len = op->sym->cipher.data.length; - cipher_ofs = op->sym->cipher.data.offset; + if (ctx->bpi_ctx) { + cipher_len = qat_bpicipher_preprocess(ctx, op); + cipher_ofs = op->sym->cipher.data.offset; + } else { + cipher_len = op->sym->cipher.data.length; + cipher_ofs = op->sym->cipher.data.offset; + } break; default: QAT_DP_LOG(ERR, @@ -428,8 +433,10 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, max_len = RTE_MAX(cipher_ofs + cipher_len, auth_ofs + auth_len); - /* digest in buffer check. 
Needed only for wireless algos */ - if (ret == 1) { + /* digest in buffer check. Needed only for wireless algos + * or combined cipher-crc operations + */ + if (ret == 1 || ctx->bpi_ctx) { /* Handle digest-encrypted cases, i.e. * auth-gen-then-cipher-encrypt and * cipher-decrypt-then-auth-verify @@ -456,8 +463,9 @@ qat_sym_convert_op_to_vec_chain(struct rte_crypto_op *op, auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_ofs + auth_len < cipher_ofs + cipher_len) && - (digest->iova == auth_end_iova)) + if (((auth_ofs + auth_len < cipher_ofs + cipher_len) && + (digest->iova == auth_end_iova)) || + ctx->bpi_ctx) max_len = RTE_MAX(max_len, auth_ofs + auth_len + ctx->digest_length); } @@ -691,9 +699,9 @@ enqueue_one_chain_job_gen1(struct qat_sym_session *ctx, auth_param->auth_len; /* Then check if digest-encrypted conditions are met */ - if ((auth_param->auth_off + auth_param->auth_len < + if (((auth_param->auth_off + auth_param->auth_len < cipher_param->cipher_offset + cipher_param->cipher_length) && - (digest->iova == auth_iova_end)) { + (digest->iova == auth_iova_end)) || ctx->bpi_ctx) { /* Handle partial digest encryption */ if (cipher_param->cipher_offset + cipher_param->cipher_length < auth_param->auth_off + auth_param->auth_len + diff --git a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c index 91d5cfa71d..590eaa0057 100644 --- a/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c +++ b/drivers/crypto/qat/dev/qat_sym_pmd_gen1.c @@ -1205,6 +1205,10 @@ qat_sym_crypto_set_session_gen1(void *cryptodev __rte_unused, void *session) } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER) { /* do_auth = 0; do_cipher = 1; */ build_request = qat_sym_build_op_cipher_gen1; + } else if (ctx->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) { + /* do_auth = 1; do_cipher = 1; */ + build_request = qat_sym_build_op_chain_gen1; + handle_mixed = 1; } if (build_request) diff --git a/drivers/crypto/qat/qat_crypto.c 
b/drivers/crypto/qat/qat_crypto.c index 84c26a8062..861679373b 100644 --- a/drivers/crypto/qat/qat_crypto.c +++ b/drivers/crypto/qat/qat_crypto.c @@ -172,5 +172,25 @@ qat_cryptodev_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, qat_asym_init_op_cookie(qp->op_cookies[i]); } - return ret; + if (qat_private->cipher_crc_offload_enable) { + ret = qat_cq_get_fw_cipher_crc_cap(qp); + if (ret < 0) { + qat_cryptodev_qp_release(dev, qp_id); + return ret; + } + + if (ret != 0) + QAT_LOG(DEBUG, "Cipher CRC supported on QAT device"); + else + QAT_LOG(DEBUG, "Cipher CRC not supported on QAT device"); + + /* Only send the cipher crc offload capability message once */ + qat_private->cipher_crc_offload_enable = 0; + /* Set cipher crc offload indicator */ + if (ret) + qat_private->internal_capabilities |= + QAT_SYM_CAP_CIPHER_CRC; + } + + return 0; } diff --git a/drivers/crypto/qat/qat_crypto.h b/drivers/crypto/qat/qat_crypto.h index 6fe1326c51..e20f16236e 100644 --- a/drivers/crypto/qat/qat_crypto.h +++ b/drivers/crypto/qat/qat_crypto.h @@ -36,6 +36,7 @@ struct qat_cryptodev_private { /* Shared memzone for storing capabilities */ uint16_t min_enq_burst_threshold; uint32_t internal_capabilities; /* see flags QAT_SYM_CAP_xxx */ + bool cipher_crc_offload_enable; enum qat_service_type service_type; }; diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c index 08e92191a3..345c845325 100644 --- a/drivers/crypto/qat/qat_sym.c +++ b/drivers/crypto/qat/qat_sym.c @@ -279,6 +279,10 @@ qat_sym_dev_create(struct qat_pci_device *qat_pci_dev, if (!strcmp(qat_dev_cmd_param[i].name, SYM_ENQ_THRESHOLD_NAME)) internals->min_enq_burst_threshold = qat_dev_cmd_param[i].val; + if (!strcmp(qat_dev_cmd_param[i].name, + SYM_CIPHER_CRC_ENABLE_NAME)) + internals->cipher_crc_offload_enable = + qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_IPSEC_MB_LIB)) qat_ipsec_mb_lib = qat_dev_cmd_param[i].val; if (!strcmp(qat_dev_cmd_param[i].name, QAT_CMD_SLICE_MAP)) diff 
--git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h index 9a4251e08b..3d841d0eba 100644 --- a/drivers/crypto/qat/qat_sym.h +++ b/drivers/crypto/qat/qat_sym.h @@ -32,6 +32,7 @@ /* Internal capabilities */ #define QAT_SYM_CAP_MIXED_CRYPTO (1 << 0) +#define QAT_SYM_CAP_CIPHER_CRC (1 << 1) #define QAT_SYM_CAP_VALID (1 << 31) /** @@ -282,7 +283,8 @@ qat_sym_preprocess_requests(void **ops, uint16_t nb_ops) if (ctx == NULL || ctx->bpi_ctx == NULL) continue; - qat_crc_generate(ctx, op); + if (ctx->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER_CRC) + qat_crc_generate(ctx, op); } } } @@ -330,7 +332,8 @@ qat_sym_process_response(void **op, uint8_t *resp, void *op_cookie, if (sess->bpi_ctx) { qat_bpicipher_postprocess(sess, rx_op); #ifdef RTE_LIB_SECURITY - if (is_docsis_sec) + if (is_docsis_sec && sess->qat_cmd != + ICP_QAT_FW_LA_CMD_CIPHER_CRC) qat_crc_verify(sess, rx_op); #endif } diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c index 6ad6c7ee3a..9babf13b66 100644 --- a/drivers/crypto/qat/qat_sym_session.c +++ b/drivers/crypto/qat/qat_sym_session.c @@ -27,6 +27,7 @@ #include <rte_crypto_sym.h> #ifdef RTE_LIB_SECURITY #include <rte_security_driver.h> +#include <rte_ether.h> #endif #include "qat_logs.h" @@ -68,6 +69,13 @@ static void ossl_legacy_provider_unload(void) extern int qat_ipsec_mb_lib; +#define ETH_CRC32_POLYNOMIAL 0x04c11db7 +#define ETH_CRC32_INIT_VAL 0xffffffff +#define ETH_CRC32_XOR_OUT 0xffffffff +#define ETH_CRC32_POLYNOMIAL_BE RTE_BE32(ETH_CRC32_POLYNOMIAL) +#define ETH_CRC32_INIT_VAL_BE RTE_BE32(ETH_CRC32_INIT_VAL) +#define ETH_CRC32_XOR_OUT_BE RTE_BE32(ETH_CRC32_XOR_OUT) + /* SHA1 - 20 bytes - Initialiser state can be found in FIPS stds 180-2 */ static const uint8_t sha1InitialState[] = { 0x67, 0x45, 0x23, 0x01, 0xef, 0xcd, 0xab, 0x89, 0x98, 0xba, @@ -115,6 +123,10 @@ qat_sym_cd_cipher_set(struct qat_sym_session *cd, const uint8_t *enckey, uint32_t enckeylen); +static int +qat_sym_cd_crc_set(struct 
qat_sym_session *cdesc, + enum qat_device_gen qat_dev_gen); + static int qat_sym_cd_auth_set(struct qat_sym_session *cdesc, const uint8_t *authkey, @@ -122,6 +134,7 @@ qat_sym_cd_auth_set(struct qat_sym_session *cdesc, uint32_t aad_length, uint32_t digestsize, unsigned int operation); + static void qat_sym_session_init_common_hdr(struct qat_sym_session *session); @@ -630,6 +643,7 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, case ICP_QAT_FW_LA_CMD_MGF1: case ICP_QAT_FW_LA_CMD_AUTH_PRE_COMP: case ICP_QAT_FW_LA_CMD_CIPHER_PRE_COMP: + case ICP_QAT_FW_LA_CMD_CIPHER_CRC: case ICP_QAT_FW_LA_CMD_DELIMITER: QAT_LOG(ERR, "Unsupported Service %u", session->qat_cmd); @@ -645,6 +659,45 @@ qat_sym_session_set_parameters(struct rte_cryptodev *dev, (void *)session); } +int +qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session, + rte_iova_t session_paddr, + const uint8_t *cipherkey, + uint32_t cipherkeylen, + enum qat_device_gen qat_dev_gen) +{ + int ret; + + /* Set content descriptor physical address */ + session->cd_paddr = session_paddr + + offsetof(struct qat_sym_session, cd); + + /* Set up some pre-requisite variables */ + session->qat_proto_flag = QAT_CRYPTO_PROTO_FLAG_NONE; + session->is_ucs = 0; + session->qat_cmd = ICP_QAT_FW_LA_CMD_CIPHER_CRC; + session->qat_mode = ICP_QAT_HW_CIPHER_CBC_MODE; + session->qat_cipher_alg = ICP_QAT_HW_CIPHER_ALGO_AES128; + session->qat_dir = ICP_QAT_HW_CIPHER_ENCRYPT; + session->is_auth = 1; + session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL; + session->auth_mode = ICP_QAT_HW_AUTH_MODE0; + session->auth_op = ICP_QAT_HW_AUTH_GENERATE; + session->digest_length = RTE_ETHER_CRC_LEN; + + ret = qat_sym_cd_cipher_set(session, cipherkey, cipherkeylen); + if (ret < 0) + return -EINVAL; + + ret = qat_sym_cd_crc_set(session, qat_dev_gen); + if (ret < 0) + return -EINVAL; + + qat_sym_session_finalize(session); + + return 0; +} + static int qat_sym_session_handle_single_pass(struct qat_sym_session *session, const struct 
rte_crypto_aead_xform *aead_xform)
@@ -1866,6 +1919,9 @@ int qat_sym_cd_cipher_set(struct qat_sym_session *cdesc,
 		ICP_QAT_FW_COMN_NEXT_ID_SET(hash_cd_ctrl,
 				ICP_QAT_FW_SLICE_DRAM_WR);
 		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
+	} else if (cdesc->qat_cmd == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
+		cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+		cdesc->cd_cur_ptr = (uint8_t *)&cdesc->cd;
 	} else if (cdesc->qat_cmd != ICP_QAT_FW_LA_CMD_HASH_CIPHER) {
 		QAT_LOG(ERR, "Invalid param, must be a cipher command.");
 		return -EFAULT;
@@ -2641,6 +2697,135 @@ qat_sec_session_check_docsis(struct rte_security_session_conf *conf)
 	return -EINVAL;
 }
 
+static int
+qat_sym_cd_crc_set(struct qat_sym_session *cdesc,
+		enum qat_device_gen qat_dev_gen)
+{
+	struct icp_qat_hw_gen2_crc_cd *crc_cd_gen2;
+	struct icp_qat_hw_gen3_crc_cd *crc_cd_gen3;
+	struct icp_qat_hw_gen4_crc_cd *crc_cd_gen4;
+	struct icp_qat_fw_la_bulk_req *req_tmpl = &cdesc->fw_req;
+	struct icp_qat_fw_comn_req_hdr_cd_pars *cd_pars = &req_tmpl->cd_pars;
+	void *ptr = &req_tmpl->cd_ctrl;
+	struct icp_qat_fw_auth_cd_ctrl_hdr *crc_cd_ctrl = ptr;
+	struct icp_qat_fw_la_auth_req_params *crc_param =
+		(struct icp_qat_fw_la_auth_req_params *)
+		((char *)&req_tmpl->serv_specif_rqpars +
+		ICP_QAT_FW_HASH_REQUEST_PARAMETERS_OFFSET);
+	struct icp_qat_fw_ucs_slice_cipher_config crc_cfg;
+	uint16_t crc_cfg_offset, cd_size;
+
+	crc_cfg_offset = cdesc->cd_cur_ptr - ((uint8_t *)&cdesc->cd);
+
+	switch (qat_dev_gen) {
+	case QAT_GEN2:
+		crc_cd_gen2 =
+			(struct icp_qat_hw_gen2_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen2->flags = 0;
+		crc_cd_gen2->initial_crc = 0;
+		memset(&crc_cd_gen2->reserved1,
+			0,
+			sizeof(crc_cd_gen2->reserved1));
+		memset(&crc_cd_gen2->reserved2,
+			0,
+			sizeof(crc_cd_gen2->reserved2));
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen2_crc_cd);
+		break;
+	case QAT_GEN3:
+		crc_cd_gen3 =
+			(struct icp_qat_hw_gen3_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen3->flags = ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(1, 1);
+		crc_cd_gen3->polynomial = ETH_CRC32_POLYNOMIAL;
+		crc_cd_gen3->initial_crc = ETH_CRC32_INIT_VAL;
+		crc_cd_gen3->xor_val = ETH_CRC32_XOR_OUT;
+		memset(&crc_cd_gen3->reserved1,
+			0,
+			sizeof(crc_cd_gen3->reserved1));
+		memset(&crc_cd_gen3->reserved2,
+			0,
+			sizeof(crc_cd_gen3->reserved2));
+		crc_cd_gen3->reserved3 = 0;
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen3_crc_cd);
+		break;
+	case QAT_GEN4:
+		crc_cfg.mode = ICP_QAT_HW_CIPHER_ECB_MODE;
+		crc_cfg.algo = ICP_QAT_HW_CIPHER_ALGO_NULL;
+		crc_cfg.hash_cmp_val = 0;
+		crc_cfg.dir = ICP_QAT_HW_CIPHER_ENCRYPT;
+		crc_cfg.associated_data_len_in_bytes = 0;
+		crc_cfg.crc_reflect_out =
+			ICP_QAT_HW_CIPHER_UCS_REFLECT_OUT_ENABLED;
+		crc_cfg.crc_reflect_in =
+			ICP_QAT_HW_CIPHER_UCS_REFLECT_IN_ENABLED;
+		crc_cfg.crc_encoding = ICP_QAT_HW_CIPHER_UCS_CRC32;
+
+		crc_cd_gen4 =
+			(struct icp_qat_hw_gen4_crc_cd *)cdesc->cd_cur_ptr;
+		crc_cd_gen4->ucs_config[0] =
+			ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_LOWER(crc_cfg);
+		crc_cd_gen4->ucs_config[1] =
+			ICP_QAT_HW_UCS_CIPHER_GEN4_BUILD_CONFIG_UPPER(crc_cfg);
+		crc_cd_gen4->polynomial = ETH_CRC32_POLYNOMIAL_BE;
+		crc_cd_gen4->initial_crc = ETH_CRC32_INIT_VAL_BE;
+		crc_cd_gen4->xor_val = ETH_CRC32_XOR_OUT_BE;
+		crc_cd_gen4->reserved1 = 0;
+		crc_cd_gen4->reserved2 = 0;
+		crc_cd_gen4->reserved3 = 0;
+		cdesc->cd_cur_ptr += sizeof(struct icp_qat_hw_gen4_crc_cd);
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	crc_cd_ctrl->hash_cfg_offset = crc_cfg_offset >> 3;
+	crc_cd_ctrl->hash_flags = ICP_QAT_FW_AUTH_HDR_FLAG_NO_NESTED;
+	crc_cd_ctrl->inner_res_sz = cdesc->digest_length;
+	crc_cd_ctrl->final_sz = cdesc->digest_length;
+	crc_cd_ctrl->inner_state1_sz = 0;
+	crc_cd_ctrl->inner_state2_sz = 0;
+	crc_cd_ctrl->inner_state2_offset = 0;
+	crc_cd_ctrl->outer_prefix_sz = 0;
+	crc_cd_ctrl->outer_config_offset = 0;
+	crc_cd_ctrl->outer_state1_sz = 0;
+	crc_cd_ctrl->outer_res_sz = 0;
+	crc_cd_ctrl->outer_prefix_offset = 0;
+
+	crc_param->auth_res_sz = cdesc->digest_length;
+	crc_param->u2.aad_sz = 0;
+	crc_param->hash_state_sz = 0;
+
+	cd_size = cdesc->cd_cur_ptr - (uint8_t *)&cdesc->cd;
+	cd_pars->u.s.content_desc_addr = cdesc->cd_paddr;
+	cd_pars->u.s.content_desc_params_sz = RTE_ALIGN_CEIL(cd_size, 8) >> 3;
+
+	return 0;
+}
+
+static int
+qat_sym_session_configure_crc(struct rte_cryptodev *dev,
+		const struct rte_crypto_sym_xform *cipher_xform,
+		struct qat_sym_session *session)
+{
+	struct qat_cryptodev_private *internals = dev->data->dev_private;
+	enum qat_device_gen qat_dev_gen = internals->qat_dev->qat_dev_gen;
+	int ret;
+
+	session->is_auth = 1;
+	session->qat_hash_alg = ICP_QAT_HW_AUTH_ALGO_NULL;
+	session->auth_mode = ICP_QAT_HW_AUTH_MODE0;
+	session->auth_op = cipher_xform->cipher.op ==
+			RTE_CRYPTO_CIPHER_OP_ENCRYPT ?
+			ICP_QAT_HW_AUTH_GENERATE :
+			ICP_QAT_HW_AUTH_VERIFY;
+	session->digest_length = RTE_ETHER_CRC_LEN;
+
+	ret = qat_sym_cd_crc_set(session, qat_dev_gen);
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 static int
 qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 		struct rte_security_session_conf *conf, void *session_private,
@@ -2681,12 +2866,21 @@ qat_sec_session_set_docsis_parameters(struct rte_cryptodev *dev,
 	if (qat_cmd_id != ICP_QAT_FW_LA_CMD_CIPHER) {
 		QAT_LOG(ERR, "Unsupported xform chain requested");
 		return -ENOTSUP;
+	} else if (internals->internal_capabilities
+			& QAT_SYM_CAP_CIPHER_CRC) {
+		qat_cmd_id = ICP_QAT_FW_LA_CMD_CIPHER_CRC;
 	}
 	session->qat_cmd = (enum icp_qat_fw_la_cmd_id)qat_cmd_id;
 	ret = qat_sym_session_configure_cipher(dev, xform, session);
 	if (ret < 0)
 		return ret;
+
+	if (qat_cmd_id == ICP_QAT_FW_LA_CMD_CIPHER_CRC) {
+		ret = qat_sym_session_configure_crc(dev, xform, session);
+		if (ret < 0)
+			return ret;
+	}
 
 	qat_sym_session_finalize(session);
 
 	return qat_sym_gen_dev_ops[qat_dev_gen].set_session((void *)cdev,
diff --git a/drivers/crypto/qat/qat_sym_session.h b/drivers/crypto/qat/qat_sym_session.h
index 6322d7e3bc..9b5d11ac88 100644
--- a/drivers/crypto/qat/qat_sym_session.h
+++ b/drivers/crypto/qat/qat_sym_session.h
@@ -46,6 +46,12 @@
 		ICP_QAT_HW_CIPHER_KEY_CONVERT, \
 		ICP_QAT_HW_CIPHER_DECRYPT)
 
+#define ICP_QAT_HW_GEN3_CRC_FLAGS_BUILD(ref_in, ref_out) \
+	(((ref_in & QAT_GEN3_COMP_REFLECT_IN_MASK) << \
+	QAT_GEN3_COMP_REFLECT_IN_BITPOS) | \
+	((ref_out & QAT_GEN3_COMP_REFLECT_OUT_MASK) << \
+	QAT_GEN3_COMP_REFLECT_OUT_BITPOS))
+
 #define QAT_AES_CMAC_CONST_RB 0x87
 
 #define QAT_CRYPTO_SLICE_SPC 1
@@ -76,7 +82,12 @@ typedef int (*qat_sym_build_request_t)(void *in_op, struct qat_sym_session *ctx,
 /* Common content descriptor */
 struct qat_sym_cd {
 	struct icp_qat_hw_cipher_algo_blk cipher;
-	struct icp_qat_hw_auth_algo_blk hash;
+	union {
+		struct icp_qat_hw_auth_algo_blk hash;
+		struct icp_qat_hw_gen2_crc_cd crc_gen2;
+		struct icp_qat_hw_gen3_crc_cd crc_gen3;
+		struct icp_qat_hw_gen4_crc_cd crc_gen4;
+	};
 } __rte_packed __rte_cache_aligned;
 
 struct qat_sym_session {
@@ -152,10 +163,18 @@ qat_sym_session_clear(struct rte_cryptodev *dev,
 
 unsigned int
 qat_sym_session_get_private_size(struct rte_cryptodev *dev);
 
+int
+qat_cipher_crc_cap_msg_sess_prepare(struct qat_sym_session *session,
+			rte_iova_t session_paddr,
+			const uint8_t *cipherkey,
+			uint32_t cipherkeylen,
+			enum qat_device_gen qat_dev_gen);
+
 void
 qat_sym_sesssion_init_common_hdr(struct qat_sym_session *session,
 		struct icp_qat_fw_comn_req_hdr *header,
 		enum qat_sym_proto_flag proto_flags);
+
 int
 qat_sym_validate_aes_key(int key_len, enum icp_qat_hw_cipher_algo *alg);
-- 
2.34.1

^ permalink raw reply	[flat|nested] 17+ messages in thread
* RE: [EXT] [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature
  2023-04-18 13:39 ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
  2023-04-18 13:39 ` [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
  2023-04-18 13:39 ` [PATCH v4 2/2] crypto/qat: support cipher-crc offload Kevin O'Sullivan
@ 2023-05-24 10:04 ` Akhil Goyal
  2 siblings, 0 replies; 17+ messages in thread
From: Akhil Goyal @ 2023-05-24 10:04 UTC (permalink / raw)
To: Kevin O'Sullivan, dev; +Cc: kai.ji

> This patchset adds support to the QAT PMD for combined cipher-crc
> processing for DOCSIS on the QAT device. The current QAT PMD
> implementation of cipher-crc calculates CRC in software and uses QAT
> for encryption/decryption offload.
>
> Note: The previous code-path is still retained for QAT firmware
> versions without support for combined cipher-crc offload.
>
> - Support has been added to DPDK QAT PMD to enable the use of the
>   cipher-crc offload feature on gen2/gen3/gen4 QAT devices.
>
> - A cipher-crc offload capability check has been added to the queue
>   pair setup function to determine if the feature is supported on the
>   QAT device.
>
> v1:
> * initial version
>
> v2:
> * fixed centos compilation error for missing braces around
>   initializer
>
> v3:
> * updated the file qat.rst with details of new configuration
>
> v4:
> * updated v23.07 release note
> * moved cipher crc capability check test vectors to top of
>   qat_qp.c and made the vectors static const
> * changed log string to be all on one line in qat_device.c
> * changed word parameter to devargs in qat.rst

Series applied to dpdk-next-crypto
Updated cipher-crc -> Cipher-CRC in patch title and description

^ permalink raw reply	[flat|nested] 17+ messages in thread
end of thread, other threads:[~2023-05-24 10:04 UTC | newest]

Thread overview: 17+ messages
2023-03-08 12:12 [PATCH 0/2] crypto/qat: added cipher-crc offload feature Kevin O'Sullivan
2023-03-08 12:12 ` [PATCH 1/2] crypto/qat: added cipher-crc offload support Kevin O'Sullivan
2023-03-08 12:12 ` [PATCH 2/2] crypto/qat: added cipher-crc cap check Kevin O'Sullivan
2023-03-09 14:33 ` [PATCH v2 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-03-09 14:33   ` [PATCH v2 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-03-09 14:33   ` [PATCH v2 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan
2023-03-13 14:26   ` [PATCH v3 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-03-13 14:26     ` [PATCH v3 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-03-16 12:24       ` Ji, Kai
2023-03-13 14:26     ` [PATCH v3 2/2] crypto/qat: add cipher-crc offload support Kevin O'Sullivan
2023-03-16 12:25       ` Ji, Kai
2023-03-16 19:15         ` [EXT] " Akhil Goyal
2023-03-20 16:28           ` O'Sullivan, Kevin
2023-04-18 13:39     ` [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Kevin O'Sullivan
2023-04-18 13:39       ` [PATCH v4 1/2] crypto/qat: add cipher-crc offload support to fw interface Kevin O'Sullivan
2023-04-18 13:39       ` [PATCH v4 2/2] crypto/qat: support cipher-crc offload Kevin O'Sullivan
2023-05-24 10:04     ` [EXT] [PATCH v4 0/2] crypto/qat: add cipher-crc offload feature Akhil Goyal