From mboxrd@z Thu Jan 1 00:00:00 1970
From: Nishikant Nayak
To: dev@dpdk.org
Cc: kai.ji@intel.com, ciara.power@intel.com, arkadiuszx.kusztal@intel.com,
 Nishikant Nayak, Thomas Monjalon, Anatoly Burakov
Subject: [PATCH 1/4] common/qat: add files specific to GEN5
Date: Wed, 20 Dec 2023 13:26:13 +0000
Message-Id: <20231220132616.318983-1-nishikanta.nayak@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Adding GEN5 files for handling GEN5-specific operations. These files are
inherited from the existing files/APIs, with some changes specific to
GEN5 requirements.

Also updated the .mailmap file.
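As background for reviewers: the new GEN5 files plug into the common QAT code
through per-generation function tables (qat_dev_hw_spec_funcs and
qat_qp_hw_spec_funcs) that an RTE_INIT constructor registers under the
QAT_GEN5 index, as seen at the end of qat_dev_gen5.c below. What follows is a
minimal, self-contained sketch of that registration pattern only; the type,
array, and function names are simplified stand-ins rather than the driver's
real symbols, and a plain GCC constructor attribute stands in for RTE_INIT.

/*
 * Illustrative sketch only (not part of the patch): condensed version of the
 * per-generation dispatch-table registration used by the GEN5 files below.
 * All names here are hypothetical stand-ins for the driver's internals.
 */
#include <stdio.h>

enum gen_id { GEN_1, GEN_5, GEN_MAX };

/* Per-generation hook table (stand-in for qat_dev_hw_spec_funcs). */
struct gen_hw_spec {
	int (*read_config)(void);
	int (*get_extra_size)(void);
};

/* Global dispatch array indexed by generation, filled at init time. */
static const struct gen_hw_spec *gen_hw_spec[GEN_MAX];

static int gen5_read_config(void)    { return 0; }
static int gen5_get_extra_size(void) { return 64; }

static const struct gen_hw_spec gen5_spec = {
	.read_config    = gen5_read_config,
	.get_extra_size = gen5_get_extra_size,
};

/* The driver does this in RTE_INIT(); a plain constructor works here. */
__attribute__((constructor))
static void gen5_register(void)
{
	gen_hw_spec[GEN_5] = &gen5_spec;
}

int main(void)
{
	/* Generic code picks the right hooks purely by generation index. */
	printf("gen5 extra size: %d\n", gen_hw_spec[GEN_5]->get_extra_size());
	return 0;
}

Generic device and queue-pair code can then dispatch through whichever table
was registered for the probed generation, which is why GEN5 support is added
mostly by supplying these new tables rather than touching the common paths.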
Signed-off-by: Nishikant Nayak --- .mailmap | 1 + drivers/common/qat/dev/qat_dev_gen5.c | 336 ++++++++++++++++++ .../adf_transport_access_macros_gen5.h | 51 +++ .../adf_transport_access_macros_gen5vf.h | 48 +++ drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c | 336 ++++++++++++++++++ 5 files changed, 772 insertions(+) create mode 100644 drivers/common/qat/dev/qat_dev_gen5.c create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen5.h create mode 100644 drivers/common/qat/qat_adf/adf_transport_access_macros_gen5vf.h create mode 100644 drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c diff --git a/.mailmap b/.mailmap index ab0742a382..ef8e0b79e5 100644 --- a/.mailmap +++ b/.mailmap @@ -1027,6 +1027,7 @@ Ning Li Nipun Gupta Nir Efrati Nirmoy Das +Nishikant Nayak Nithin Dabilpuram Nitin Saxena Nitzan Weller diff --git a/drivers/common/qat/dev/qat_dev_gen5.c b/drivers/common/qat/dev/qat_dev_gen5.c new file mode 100644 index 0000000000..dc2bcd5650 --- /dev/null +++ b/drivers/common/qat/dev/qat_dev_gen5.c @@ -0,0 +1,336 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Intel Corporation + */ + +#include +#include + +#include "qat_device.h" +#include "qat_qp.h" +#include "adf_transport_access_macros_gen5vf.h" +#include "adf_pf2vf_msg.h" +#include "qat_pf2vf.h" + +#include +#include +#include +#include + +#define BITS_PER_LONG (sizeof(unsigned long) * 8) +#define BITS_PER_ULONG (sizeof(unsigned long) * 8) + +#define VFIO_PCI_LCE_DEVICE_CFG_REGION_INDEX VFIO_PCI_NUM_REGIONS +#define VFIO_PCI_LCE_CY_CFG_REGION_INDEX (VFIO_PCI_NUM_REGIONS + 2) +#define VFIO_PCI_LCE_RING_CFG_REGION_INDEX (VFIO_PCI_NUM_REGIONS + 4) + +#define min_t(type, x, y) ({ \ + type __min1 = (x); \ + type __min2 = (y); \ + __min1 < __min2 ? __min1 : __min2; }) + +/** + * struct lce_vfio_dev_cap - LCE device capabilities + * + * Device level capabilities and service level capabilities + */ +struct lce_vfio_dev_cap { + uint16_t device_num; + uint16_t device_type; + +#define LCE_DEVICE_CAP_DYNAMIC_BANK BIT(31) + uint32_t capability_mask; + uint32_t extended_capabilities; + uint16_t max_banks; + uint16_t max_rings_per_bank; + uint16_t arb_mask; + +#define SERV_TYPE_DC BIT(0) +#define SERV_TYPE_SYM BIT(1) +#define SERV_TYPE_ASYM BIT(2) +#define SERV_TYPE_DMA BIT(3) + uint16_t services; + uint16_t pkg_id; + uint16_t node_id; + +#define LCE_DEVICE_NAME_SIZE 64 + __u8 device_name[LCE_DEVICE_NAME_SIZE]; +}; + +#define LCE_DEVICE_MAX_BANKS 2080 +#define LCE_DEVICE_BITMAP_SIZE \ + __KERNEL_DIV_ROUND_UP(LCE_DEVICE_MAX_BANKS, BITS_PER_LONG) + +/* struct lce_vfio_dev_cy_cap - CY capabilities of LCE device */ +struct lce_vfio_dev_cy_cap { + uint32_t nr_banks; + unsigned long bitmap[LCE_DEVICE_BITMAP_SIZE]; +}; + +#define LCE_QAT_NID_LOCAL 0x7 +#define LCE_QAT_FUNC_LOCAL 0x3ff +#define LCE_QAT_RID_LOCAL 0xf +#define LCE_QAT_PASID_LOCAL 0xfffff + +struct lce_qat_domain { + uint32_t nid :3; + uint32_t fid :7; + uint32_t ftype :2; + uint32_t vfid :13; + uint32_t rid :4; + uint32_t vld :1; + uint32_t desc_over :1; + uint32_t pasid_vld :1; + uint32_t pasid :20; +}; + +struct lce_qat_buf_domain { + uint32_t bank_id: 20; +#define LCE_REQ_BUFFER_DOMAIN 1UL +#define LCE_RES_BUFFER_DOMAIN 2UL +#define LCE_SRC_BUFFER_DOMAIN 4UL +#define LCE_DST_BUFFER_DOMAIN 8UL + uint32_t type: 4; + uint32_t resv: 8; + struct lce_qat_domain dom; +}; + +/* QAT GEN 5 specific macros */ +#define QAT_GEN5_BUNDLE_NUM LCE_DEVICE_MAX_BANKS +#define QAT_GEN5_QPS_PER_BUNDLE_NUM 1 + +struct qat_dev_gen5_extra { + struct qat_qp_hw_data + 
qp_gen5_data[QAT_GEN5_BUNDLE_NUM][QAT_GEN5_QPS_PER_BUNDLE_NUM]; +}; + +static struct qat_pf2vf_dev qat_pf2vf_gen5 = { + .pf2vf_offset = ADF_4XXXIOV_PF2VM_OFFSET, + .vf2pf_offset = ADF_4XXXIOV_VM2PF_OFFSET, + .pf2vf_type_shift = ADF_PFVF_2X_MSGTYPE_SHIFT, + .pf2vf_type_mask = ADF_PFVF_2X_MSGTYPE_MASK, + .pf2vf_data_shift = ADF_PFVF_2X_MSGDATA_SHIFT, + .pf2vf_data_mask = ADF_PFVF_2X_MSGDATA_MASK, +}; + +static int +qat_select_valid_queue_gen5(struct qat_pci_device *qat_dev, int qp_id, + enum qat_service_type service_type) +{ + int i = 0, valid_qps = 0; + struct qat_dev_gen5_extra *dev_extra = qat_dev->dev_private; + + for (; i < QAT_GEN5_BUNDLE_NUM; i++) { + if (dev_extra->qp_gen5_data[i][0].service_type == + service_type) { + if (valid_qps == qp_id) + return i; + ++valid_qps; + } + } + return -1; +} + +static const struct qat_qp_hw_data * +qat_qp_get_hw_data_gen5(struct qat_pci_device *qat_dev, + enum qat_service_type service_type, uint16_t qp_id) +{ + struct qat_dev_gen5_extra *dev_extra = qat_dev->dev_private; + int ring_pair = qat_select_valid_queue_gen5(qat_dev, qp_id, + service_type); + + if (ring_pair < 0) + return NULL; + + return &dev_extra->qp_gen5_data[ring_pair][0]; +} + +static int +qat_qp_rings_per_service_gen5(struct qat_pci_device *qat_dev, + enum qat_service_type service) +{ + int i = 0, count = 0, max_ops_per_srv = 0; + struct qat_dev_gen5_extra *dev_extra = qat_dev->dev_private; + + max_ops_per_srv = QAT_GEN5_BUNDLE_NUM; + for (i = 0, count = 0; i < max_ops_per_srv; i++) + if (dev_extra->qp_gen5_data[i][0].service_type == service) + count++; + return count; +} + +static int qat_dev_read_config(struct qat_pci_device *qat_dev) +{ + struct qat_dev_gen5_extra *dev_extra = qat_dev->dev_private; + struct qat_qp_hw_data *hw_data; + + /** Enable only crypto ring: RP-0 */ + hw_data = &dev_extra->qp_gen5_data[0][0]; + memset(hw_data, 0, sizeof(*hw_data)); + + hw_data->service_type = QAT_SERVICE_SYMMETRIC; + hw_data->tx_msg_size = 128; + hw_data->rx_msg_size = 32; + + hw_data->tx_ring_num = 0; + hw_data->rx_ring_num = 1; + + hw_data->hw_bundle_num = 0; + + return 0; +} + + +static int qat_dev_read_config_gen5(struct qat_pci_device *qat_dev) +{ + return qat_dev_read_config(qat_dev); +} + +static void qat_qp_build_ring_base_gen5(void *io_addr, struct qat_queue *queue) +{ + uint64_t queue_base; + + queue_base = BUILD_RING_BASE_ADDR_GEN5(queue->base_phys_addr, + queue->queue_size); + WRITE_CSR_RING_BASE_GEN5VF(io_addr, queue->hw_bundle_number, + queue->hw_queue_number, queue_base); +} + +static void +qat_qp_adf_arb_enable_gen5(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + + (ADF_RING_BUNDLE_SIZE_GEN5 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, + arb_csr_offset); + value |= 0x01; + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_arb_disable_gen5(const struct qat_queue *txq, + void *base_addr, rte_spinlock_t *lock) +{ + uint32_t arb_csr_offset = 0, value; + + rte_spinlock_lock(lock); + arb_csr_offset = ADF_ARB_RINGSRVARBEN_OFFSET + (ADF_RING_BUNDLE_SIZE_GEN5 * + txq->hw_bundle_number); + value = ADF_CSR_RD(base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, + arb_csr_offset); + value &= ~(0x01); + ADF_CSR_WR(base_addr, arb_csr_offset, value); + rte_spinlock_unlock(lock); +} + +static void +qat_qp_adf_configure_queues_gen5(struct qat_qp *qp) +{ + 
uint32_t q_tx_config, q_resp_config; + struct qat_queue *q_tx = &qp->tx_q, *q_rx = &qp->rx_q; + + /* q_tx/rx->queue_size is initialized as per bundle config register */ + q_tx_config = BUILD_RING_CONFIG(q_tx->queue_size); + + q_resp_config = BUILD_RESP_RING_CONFIG(q_rx->queue_size, + ADF_RING_NEAR_WATERMARK_512, + ADF_RING_NEAR_WATERMARK_0); + + WRITE_CSR_RING_CONFIG_GEN5VF(qp->mmap_bar_addr, q_tx->hw_bundle_number, + q_tx->hw_queue_number, q_tx_config); + WRITE_CSR_RING_CONFIG_GEN5VF(qp->mmap_bar_addr, q_rx->hw_bundle_number, + q_rx->hw_queue_number, q_resp_config); +} + +static void +qat_qp_csr_write_tail_gen5(struct qat_qp *qp, struct qat_queue *q) +{ + WRITE_CSR_RING_TAIL_GEN5VF(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, q->tail); +} + +static void +qat_qp_csr_write_head_gen5(struct qat_qp *qp, struct qat_queue *q, + uint32_t new_head) +{ + WRITE_CSR_RING_HEAD_GEN5VF(qp->mmap_bar_addr, q->hw_bundle_number, + q->hw_queue_number, new_head); +} + +static void +qat_qp_csr_setup_gen5(struct qat_pci_device *qat_dev, void *io_addr, + struct qat_qp *qp) +{ + qat_qp_build_ring_base_gen5(io_addr, &qp->tx_q); + qat_qp_build_ring_base_gen5(io_addr, &qp->rx_q); + qat_qp_adf_configure_queues_gen5(qp); + qat_qp_adf_arb_enable_gen5(&qp->tx_q, qp->mmap_bar_addr, + &qat_dev->arb_csr_lock); +} + +static struct qat_qp_hw_spec_funcs qat_qp_hw_spec_gen5 = { + .qat_qp_rings_per_service = qat_qp_rings_per_service_gen5, + .qat_qp_build_ring_base = qat_qp_build_ring_base_gen5, + .qat_qp_adf_arb_enable = qat_qp_adf_arb_enable_gen5, + .qat_qp_adf_arb_disable = qat_qp_adf_arb_disable_gen5, + .qat_qp_adf_configure_queues = qat_qp_adf_configure_queues_gen5, + .qat_qp_csr_write_tail = qat_qp_csr_write_tail_gen5, + .qat_qp_csr_write_head = qat_qp_csr_write_head_gen5, + .qat_qp_csr_setup = qat_qp_csr_setup_gen5, + .qat_qp_get_hw_data = qat_qp_get_hw_data_gen5, +}; + +static int +qat_reset_ring_pairs_gen5(struct qat_pci_device *qat_pci_dev __rte_unused) +{ + return 0; +} + +static const struct rte_mem_resource* +qat_dev_get_transport_bar_gen5(struct rte_pci_device *pci_dev) +{ + return &pci_dev->mem_resource[0]; +} + +static int +qat_dev_get_misc_bar_gen5(struct rte_mem_resource **mem_resource, + struct rte_pci_device *pci_dev) +{ + *mem_resource = &pci_dev->mem_resource[2]; + return 0; +} + +static int +qat_dev_get_extra_size_gen5(void) +{ + return sizeof(struct qat_dev_gen5_extra); +} + +static int +qat_dev_get_slice_map_gen5(uint32_t *map __rte_unused, + const struct rte_pci_device *pci_dev __rte_unused) +{ + return 0; +} + +static struct qat_dev_hw_spec_funcs qat_dev_hw_spec_gen5 = { + .qat_dev_reset_ring_pairs = qat_reset_ring_pairs_gen5, + .qat_dev_get_transport_bar = qat_dev_get_transport_bar_gen5, + .qat_dev_get_misc_bar = qat_dev_get_misc_bar_gen5, + .qat_dev_read_config = qat_dev_read_config_gen5, + .qat_dev_get_extra_size = qat_dev_get_extra_size_gen5, + .qat_dev_get_slice_map = qat_dev_get_slice_map_gen5, +}; + +RTE_INIT(qat_dev_gen_5_init) +{ + qat_qp_hw_spec[QAT_GEN5] = &qat_qp_hw_spec_gen5; + qat_dev_hw_spec[QAT_GEN5] = &qat_dev_hw_spec_gen5; + qat_gen_config[QAT_GEN5].dev_gen = QAT_GEN5; + qat_gen_config[QAT_GEN5].pf2vf_dev = &qat_pf2vf_gen5; +} diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5.h b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5.h new file mode 100644 index 0000000000..29ce6b8e60 --- /dev/null +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * 
Copyright(c) 2023 Intel Corporation + */ + +#ifndef ADF_TRANSPORT_ACCESS_MACROS_GEN5_H +#define ADF_TRANSPORT_ACCESS_MACROS_GEN5_H + +#include "adf_transport_access_macros.h" + +#define ADF_RINGS_PER_INT_SRCSEL_GEN4 2 +#define ADF_BANK_INT_SRC_SEL_MASK_GEN4 0x44UL +#define ADF_BANK_INT_FLAG_CLEAR_MASK_GEN4 0x3 +#define ADF_RING_BUNDLE_SIZE_GEN5 0x2000 +#define ADF_RING_CSR_RING_CONFIG_GEN5 0x1000 +#define ADF_RING_CSR_RING_LBASE_GEN5 0x1040 +#define ADF_RING_CSR_RING_UBASE_GEN5 0x1080 + +#define BUILD_RING_BASE_ADDR_GEN5(addr, size) \ + ((((addr) >> 6) & (0xFFFFFFFFFFFFFFFFULL << (size))) << 6) + +#define WRITE_CSR_RING_BASE_GEN5(csr_base_addr, bank, ring, value) \ +do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_LBASE_GEN5 + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_UBASE_GEN5 + (ring << 2), \ + u_base); \ +} while (0) + +#define WRITE_CSR_RING_CONFIG_GEN5(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_CONFIG_GEN5 + (ring << 2), value) + +#define WRITE_CSR_RING_TAIL_GEN5(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((u8 *)(csr_base_addr), \ + (ADF_RING_BUNDLE_SIZE_GEN5 * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2), value) + +#define WRITE_CSR_RING_HEAD_GEN5(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((u8 *)(csr_base_addr), \ + (ADF_RING_BUNDLE_SIZE_GEN5 * (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) << 2), value) + +#endif diff --git a/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5vf.h b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5vf.h new file mode 100644 index 0000000000..5d2c6706a6 --- /dev/null +++ b/drivers/common/qat/qat_adf/adf_transport_access_macros_gen5vf.h @@ -0,0 +1,48 @@ +/* SPDX-License-Identifier: (BSD-3-Clause OR GPL-2.0) + * Copyright(c) 2023 Intel Corporation + */ + +#ifndef ADF_TRANSPORT_ACCESS_MACROS_GEN5VF_H +#define ADF_TRANSPORT_ACCESS_MACROS_GEN5VF_H + +#include "adf_transport_access_macros.h" +#include "adf_transport_access_macros_gen5.h" + +#define ADF_RING_CSR_ADDR_OFFSET_GEN5VF 0x0 + +#define WRITE_CSR_RING_BASE_GEN5VF(csr_base_addr, bank, ring, value) \ +do { \ + uint32_t l_base = 0, u_base = 0; \ + l_base = (uint32_t)(value & 0xFFFFFFFF); \ + u_base = (uint32_t)((value & 0xFFFFFFFF00000000ULL) >> 32); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_LBASE_GEN5 + (ring << 2), \ + l_base); \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_UBASE_GEN5 + (ring << 2), \ + u_base); \ +} while (0) + +#define WRITE_CSR_RING_CONFIG_GEN5VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR(csr_base_addr + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * bank) + \ + ADF_RING_CSR_RING_CONFIG_GEN5 + (ring << 2), value) + +#define WRITE_CSR_RING_TAIL_GEN5VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * (bank)) + \ + ADF_RING_CSR_RING_TAIL + ((ring) << 2), (value)) + +#define WRITE_CSR_RING_HEAD_GEN5VF(csr_base_addr, bank, ring, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 
* (bank)) + \ + ADF_RING_CSR_RING_HEAD + ((ring) << 2), (value)) + +#define WRITE_CSR_RING_SRV_ARB_EN_GEN5VF(csr_base_addr, bank, value) \ + ADF_CSR_WR((csr_base_addr) + ADF_RING_CSR_ADDR_OFFSET_GEN5VF, \ + (ADF_RING_BUNDLE_SIZE_GEN5 * (bank)) + \ + ADF_RING_CSR_RING_SRV_ARB_EN, (value)) + +#endif diff --git a/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c new file mode 100644 index 0000000000..1f1242c5c0 --- /dev/null +++ b/drivers/crypto/qat/dev/qat_crypto_pmd_gen5.c @@ -0,0 +1,336 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Intel Corporation + */ + +#include +#include +#include "qat_sym_session.h" +#include "qat_sym.h" +#include "qat_asym.h" +#include "qat_crypto.h" +#include "qat_crypto_pmd_gens.h" + +static struct rte_cryptodev_capabilities qat_sym_crypto_caps_gen5[] = { + QAT_SYM_AEAD_CAP(AES_GCM, + CAP_SET(block_size, 16), + CAP_RNG(key_size, 32, 32, 1), CAP_RNG(digest_size, 16, 16, 1), + CAP_RNG(aad_size, 0, 240, 1), CAP_RNG(iv_size, 12, 12, 1)), + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +static int +qat_sgl_add_buffer_gen5(void *list_in, uint64_t addr, uint32_t len) +{ + struct qat_sgl *list = (struct qat_sgl *)list_in; + uint32_t nr; + + nr = list->num_bufs; + + if (nr >= QAT_SYM_SGL_MAX_NUMBER) { + QAT_DP_LOG(ERR, "Adding %d entry failed, no empty SGL buffer", nr); + return -EINVAL; + } + + list->buffers[nr].len = len; + list->buffers[nr].resrvd = 0; + list->buffers[nr].addr = addr; + + list->num_bufs++; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_LOG(INFO, "SGL with %d buffers:", list->num_bufs); + QAT_DP_LOG(INFO, "QAT SGL buf %d, len = %d, iova = 0x%012"PRIx64, + nr, list->buffers[nr].len, list->buffers[nr].addr); +#endif + return 0; +} + +static int +qat_sgl_fill_array_with_mbuf(struct rte_mbuf *buf, int64_t offset, + void *list_in, uint32_t data_len) +{ + struct qat_sgl *list = (struct qat_sgl *)list_in; + uint32_t nr, buf_len; + int res = -EINVAL; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + uint32_t start_idx; + start_idx = list->num_bufs; +#endif + + /* Append to the existing list */ + nr = list->num_bufs; + + for (buf_len = 0; buf && nr < QAT_SYM_SGL_MAX_NUMBER; buf = buf->next) { + if (offset >= rte_pktmbuf_data_len(buf)) { + offset -= rte_pktmbuf_data_len(buf); + /* Jump to next mbuf */ + continue; + } + + list->buffers[nr].len = rte_pktmbuf_data_len(buf) - offset; + list->buffers[nr].resrvd = 0; + list->buffers[nr].addr = rte_pktmbuf_iova_offset(buf, offset); + + offset = 0; + buf_len += list->buffers[nr].len; + + if (buf_len >= data_len) { + list->buffers[nr].len -= buf_len - data_len; + res = 0; + break; + } + ++nr; + } + + if (unlikely(res != 0)) { + if (nr == QAT_SYM_SGL_MAX_NUMBER) + QAT_DP_LOG(ERR, "Exceeded max segments in QAT SGL (%u)", + QAT_SYM_SGL_MAX_NUMBER); + else + QAT_DP_LOG(ERR, "Mbuf chain is too short"); + } else { + + list->num_bufs = ++nr; +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_LOG(INFO, "SGL with %d buffers:", list->num_bufs); + for (nr = start_idx; nr < list->num_bufs; nr++) { + QAT_DP_LOG(INFO, "QAT SGL buf %d, len = %d, iova = 0x%012"PRIx64, + nr, list->buffers[nr].len, + list->buffers[nr].addr); + } +#endif + } + + return res; +} + +static int +qat_sym_build_op_aead_gen5(void *in_op, struct qat_sym_session *ctx, + uint8_t *out_msg, void *op_cookie) +{ + struct qat_sym_op_cookie *cookie = op_cookie; + struct rte_crypto_op *op = in_op; + uint64_t digest_phys_addr, aad_phys_addr; + uint16_t iv_len, aad_len, digest_len, key_len; + uint32_t cipher_ofs, iv_offset, 
cipher_len; + register struct icp_qat_fw_la_bulk_req *qat_req; + struct icp_qat_fw_la_cipher_30_req_params *cipher_param; + enum icp_qat_hw_cipher_dir dir; + bool is_digest_adjacent = false; + + if (ctx->qat_cmd != ICP_QAT_FW_LA_CMD_CIPHER || + ctx->qat_cipher_alg != ICP_QAT_HW_CIPHER_ALGO_AES256 || + ctx->qat_mode != ICP_QAT_HW_CIPHER_AEAD_MODE) { + + QAT_DP_LOG(ERR, "Not supported (cmd: %d, alg: %d, mode: %d). " + "GEN5 PMD only supports AES-256 AEAD mode", + ctx->qat_cmd, ctx->qat_cipher_alg, ctx->qat_mode); + return -EINVAL; + } + + qat_req = (struct icp_qat_fw_la_bulk_req *)out_msg; + rte_mov128((uint8_t *)qat_req, (const uint8_t *)&(ctx->fw_req)); + qat_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op; + cipher_param = (void *)&qat_req->serv_specif_rqpars; + + dir = ctx->qat_dir; + + aad_phys_addr = op->sym->aead.aad.phys_addr; + aad_len = ctx->aad_len; + + iv_offset = ctx->cipher_iv.offset; + iv_len = ctx->cipher_iv.length; + + cipher_ofs = op->sym->aead.data.offset; + cipher_len = op->sym->aead.data.length; + + digest_phys_addr = op->sym->aead.digest.phys_addr; + digest_len = ctx->digest_length; + + /* Upto 16B IV can be directly embedded in descriptor. + * But GCM supports only 12B IV + */ + if (iv_len != GCM_IV_LENGTH) { + QAT_DP_LOG(ERR, "iv_len: %d not supported. Must be 12B.", + iv_len); + return -EINVAL; + } + + rte_memcpy(cipher_param->u.cipher_IV_array, + rte_crypto_op_ctod_offset(op, uint8_t*, iv_offset), + iv_len); + + /* Always SGL */ + RTE_ASSERT((qat_req->comn_hdr.comn_req_flags & + ICP_QAT_FW_SYM_COMM_ADDR_SGL) == 1); + /* Always inplace */ + RTE_ASSERT(op->sym->m_dst == NULL); + + /* Key buffer address is already programmed by reusing the + * content-descriptor buffer + */ + key_len = ctx->auth_key_length; + + cipher_param->spc_aad_sz = aad_len; + cipher_param->cipher_length = key_len; + cipher_param->spc_auth_res_sz = digest_len; + + /* Knowing digest is contiguous to cipher-text helps optimizing SGL */ + if (rte_pktmbuf_iova_offset(op->sym->m_src, cipher_ofs + cipher_len) + == digest_phys_addr) + is_digest_adjacent = true; + + /* SRC-SGL: 3 entries: + * a) AAD + * b) cipher + * c) digest (only for decrypt and buffer is_NOT_adjacent) + * + */ + cookie->qat_sgl_src.num_bufs = 0; + if (aad_len) + qat_sgl_add_buffer_gen5(&cookie->qat_sgl_src, aad_phys_addr, + aad_len); + + if (is_digest_adjacent && dir == ICP_QAT_HW_CIPHER_DECRYPT) { + qat_sgl_fill_array_with_mbuf(op->sym->m_src, cipher_ofs, + &cookie->qat_sgl_src, + cipher_len + digest_len); + } else { + qat_sgl_fill_array_with_mbuf(op->sym->m_src, cipher_ofs, + &cookie->qat_sgl_src, + cipher_len); + + /* Digest buffer in decrypt job */ + if (dir == ICP_QAT_HW_CIPHER_DECRYPT) + qat_sgl_add_buffer_gen5(&cookie->qat_sgl_src, + digest_phys_addr, digest_len); + } + + /* (in-place) DST-SGL: 2 entries: + * a) cipher + * b) digest (only for encrypt and buffer is_NOT_adjacent) + */ + cookie->qat_sgl_dst.num_bufs = 0; + + if (is_digest_adjacent && dir == ICP_QAT_HW_CIPHER_ENCRYPT) { + qat_sgl_fill_array_with_mbuf(op->sym->m_src, cipher_ofs, + &cookie->qat_sgl_dst, + cipher_len + digest_len); + } else { + qat_sgl_fill_array_with_mbuf(op->sym->m_src, cipher_ofs, + &cookie->qat_sgl_dst, + cipher_len); + + /* Digest buffer in Encrypt job */ + if (dir == ICP_QAT_HW_CIPHER_ENCRYPT) + qat_sgl_add_buffer_gen5(&cookie->qat_sgl_dst, + digest_phys_addr, digest_len); + } + + /* Length values in 128B descriptor */ + qat_req->comn_mid.src_length = cipher_len; + qat_req->comn_mid.dst_length = cipher_len; + + if (dir == 
ICP_QAT_HW_CIPHER_ENCRYPT) /* Digest buffer in Encrypt job */ + qat_req->comn_mid.dst_length += GCM_256_DIGEST_LEN; + + /* src & dst SGL addresses in 128B descriptor */ + qat_req->comn_mid.src_data_addr = cookie->qat_sgl_src_phys_addr; + qat_req->comn_mid.dest_data_addr = cookie->qat_sgl_dst_phys_addr; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + QAT_DP_HEXDUMP_LOG(DEBUG, "qat_req:", qat_req, + sizeof(struct icp_qat_fw_la_bulk_req)); + QAT_DP_HEXDUMP_LOG(DEBUG, "src_data:", + rte_pktmbuf_mtod(op->sym->m_src, uint8_t*), + rte_pktmbuf_data_len(op->sym->m_src)); + QAT_DP_HEXDUMP_LOG(DEBUG, "digest:", op->sym->aead.digest.data, + digest_len); + QAT_DP_HEXDUMP_LOG(DEBUG, "aad:", op->sym->aead.aad.data, aad_len); +#endif + return 0; +} + +static int +qat_sym_crypto_set_session_gen5(void *cdev __rte_unused, void *session) +{ + struct qat_sym_session *ctx = session; + qat_sym_build_request_t build_request = NULL; + enum rte_proc_type_t proc_type = rte_eal_process_type(); + + if (proc_type == RTE_PROC_AUTO || proc_type == RTE_PROC_INVALID) + return -EINVAL; + + /* build request for aead */ + if (ctx->qat_cipher_alg == ICP_QAT_HW_CIPHER_ALGO_AES256 && + ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_GALOIS_128) { + build_request = qat_sym_build_op_aead_gen5; + if (build_request) + ctx->build_request[proc_type] = build_request; + else + return -EINVAL; + } + return 0; +} + + +static int +qat_sym_crypto_cap_get_gen5(struct qat_cryptodev_private *internals, + const char *capa_memz_name, + const uint16_t __rte_unused slice_map) +{ + const uint32_t size = sizeof(qat_sym_crypto_caps_gen5); + uint32_t i; + + internals->capa_mz = rte_memzone_lookup(capa_memz_name); + if (internals->capa_mz == NULL) { + internals->capa_mz = rte_memzone_reserve(capa_memz_name, + size, rte_socket_id(), 0); + if (internals->capa_mz == NULL) { + QAT_LOG(DEBUG, + "Error allocating memzone for capabilities"); + return -1; + } + } + + struct rte_cryptodev_capabilities *addr = + (struct rte_cryptodev_capabilities *) + internals->capa_mz->addr; + const struct rte_cryptodev_capabilities *capabilities = + qat_sym_crypto_caps_gen5; + const uint32_t capa_num = + size / sizeof(struct rte_cryptodev_capabilities); + uint32_t curr_capa = 0; + + for (i = 0; i < capa_num; i++) { + memcpy(addr + curr_capa, capabilities + i, + sizeof(struct rte_cryptodev_capabilities)); + curr_capa++; + } + internals->qat_dev_capabilities = internals->capa_mz->addr; + + return 0; +} + +RTE_INIT(qat_sym_crypto_gen5_init) +{ + qat_sym_gen_dev_ops[QAT_GEN5].cryptodev_ops = &qat_sym_crypto_ops_gen1; + qat_sym_gen_dev_ops[QAT_GEN5].get_capabilities = + qat_sym_crypto_cap_get_gen5; + qat_sym_gen_dev_ops[QAT_GEN5].set_session = + qat_sym_crypto_set_session_gen5; + qat_sym_gen_dev_ops[QAT_GEN5].set_raw_dp_ctx = NULL; + qat_sym_gen_dev_ops[QAT_GEN5].get_feature_flags = + qat_sym_crypto_feature_flags_get_gen1; +#ifdef RTE_LIB_SECURITY + qat_sym_gen_dev_ops[QAT_GEN5].create_security_ctx = + qat_sym_create_security_gen1; +#endif +} + +RTE_INIT(qat_asym_crypto_gen5_init) +{ + qat_asym_gen_dev_ops[QAT_GEN5].cryptodev_ops = NULL; + qat_asym_gen_dev_ops[QAT_GEN5].get_capabilities = NULL; + qat_asym_gen_dev_ops[QAT_GEN5].get_feature_flags = NULL; + qat_asym_gen_dev_ops[QAT_GEN5].set_session = NULL; +} -- 2.25.1