From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: gakhil@marvell.com, chandu@amd.com, stable@dpdk.org,
 Amaranath Somalapuram
Subject: [PATCH 3/4] crypto/ccp: fix IOVA handling
Date: Fri, 9 Sep 2022 17:04:10 +0200
Message-Id: <20220909150411.3702860-4-david.marchand@redhat.com>
In-Reply-To: <20220909150411.3702860-1-david.marchand@redhat.com>
References: <20220909150411.3702860-1-david.marchand@redhat.com>

Using IOVA or physical addresses is something that the user (via
--iova-mode=) or the bus code decides.
The crypto/ccp PCI driver should only use rte_mem_virt2iova.
It should not try to decide what to use solely based on the kmod the
PCI device is bound to.

While at it, the global variable sha_ctx looks unsafe and unneeded.
Remove it.
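For illustration only (this note and the sketch are not part of the
diff below, and the helper name is made up): after this change every
call site reduces to a single EAL lookup, which already honors the
selected IOVA mode by returning a physical address in PA mode and the
IOMMU-mapped address in VA mode.

    #include <rte_memory.h>

    /* Minimal sketch, assuming the PMD defers entirely to EAL instead
     * of guessing from the kernel module the device is bound to. */
    static inline phys_addr_t
    ccp_virt_to_iova(const void *vaddr)
    {
            return (phys_addr_t)rte_mem_virt2iova(vaddr);
    }

Every "if (iommu_mode == 2)" branch removed below collapses to this
single pattern.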
Fixes: 09a0fd736a08 ("crypto/ccp: enable IOMMU")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
 drivers/crypto/ccp/ccp_crypto.c  | 105 ++++++-------------------------
 drivers/crypto/ccp/ccp_dev.c     |   9 +--
 drivers/crypto/ccp/ccp_pci.c     |  34 ----------
 drivers/crypto/ccp/ccp_pci.h     |   3 -
 drivers/crypto/ccp/rte_ccp_pmd.c |   3 -
 5 files changed, 19 insertions(+), 135 deletions(-)

diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index 4bab18323b..351d8ac63e 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -33,8 +33,6 @@
 #include
 #include
 
-extern int iommu_mode;
-void *sha_ctx;
 /* SHA initial context values */
 uint32_t ccp_sha1_init[SHA_COMMON_DIGEST_SIZE / sizeof(uint32_t)] = {
         SHA1_H4, SHA1_H3,
@@ -748,13 +746,8 @@ ccp_configure_session_cipher(struct ccp_session *sess,
                 CCP_LOG_ERR("Invalid CCP Engine");
                 return -ENOTSUP;
         }
-        if (iommu_mode == 2) {
-                sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
-                sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
-        } else {
-                sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
-                sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
-        }
+        sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
+        sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
         return 0;
 }
 
@@ -793,7 +786,6 @@ ccp_configure_session_auth(struct ccp_session *sess,
                 sess->auth.ctx = (void *)ccp_sha1_init;
                 sess->auth.ctx_len = CCP_SB_BYTES;
                 sess->auth.offset = CCP_SB_BYTES - SHA1_DIGEST_SIZE;
-                rte_memcpy(sha_ctx, sess->auth.ctx, SHA_COMMON_DIGEST_SIZE);
                 break;
         case RTE_CRYPTO_AUTH_SHA1_HMAC:
                 if (sess->auth_opt) {
@@ -832,7 +824,6 @@ ccp_configure_session_auth(struct ccp_session *sess,
                 sess->auth.ctx = (void *)ccp_sha224_init;
                 sess->auth.ctx_len = CCP_SB_BYTES;
                 sess->auth.offset = CCP_SB_BYTES - SHA224_DIGEST_SIZE;
-                rte_memcpy(sha_ctx, sess->auth.ctx, SHA256_DIGEST_SIZE);
                 break;
         case RTE_CRYPTO_AUTH_SHA224_HMAC:
                 if (sess->auth_opt) {
@@ -895,7 +886,6 @@ ccp_configure_session_auth(struct ccp_session *sess,
                 sess->auth.ctx = (void *)ccp_sha256_init;
                 sess->auth.ctx_len = CCP_SB_BYTES;
                 sess->auth.offset = CCP_SB_BYTES - SHA256_DIGEST_SIZE;
-                rte_memcpy(sha_ctx, sess->auth.ctx, SHA256_DIGEST_SIZE);
                 break;
         case RTE_CRYPTO_AUTH_SHA256_HMAC:
                 if (sess->auth_opt) {
@@ -958,7 +948,6 @@ ccp_configure_session_auth(struct ccp_session *sess,
                 sess->auth.ctx = (void *)ccp_sha384_init;
                 sess->auth.ctx_len = CCP_SB_BYTES << 1;
                 sess->auth.offset = (CCP_SB_BYTES << 1) - SHA384_DIGEST_SIZE;
-                rte_memcpy(sha_ctx, sess->auth.ctx, SHA512_DIGEST_SIZE);
                 break;
         case RTE_CRYPTO_AUTH_SHA384_HMAC:
                 if (sess->auth_opt) {
@@ -1023,7 +1012,6 @@ ccp_configure_session_auth(struct ccp_session *sess,
                 sess->auth.ctx = (void *)ccp_sha512_init;
                 sess->auth.ctx_len = CCP_SB_BYTES << 1;
                 sess->auth.offset = (CCP_SB_BYTES << 1) - SHA512_DIGEST_SIZE;
-                rte_memcpy(sha_ctx, sess->auth.ctx, SHA512_DIGEST_SIZE);
                 break;
         case RTE_CRYPTO_AUTH_SHA512_HMAC:
                 if (sess->auth_opt) {
@@ -1173,13 +1161,8 @@ ccp_configure_session_aead(struct ccp_session *sess,
                 CCP_LOG_ERR("Unsupported aead algo");
                 return -ENOTSUP;
         }
-        if (iommu_mode == 2) {
-                sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
-                sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
-        } else {
-                sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
-                sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
-        }
+        sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
+        sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
         return 0;
 }
 
@@ -1594,14 +1577,8 @@ ccp_perform_hmac(struct rte_crypto_op *op,
                               op->sym->auth.data.offset);
         append_ptr = (void *)rte_pktmbuf_append(op->sym->m_src,
                                                 session->auth.ctx_len);
-        if (iommu_mode == 2) {
-                dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
-                pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
-        } else {
-                dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
-                pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
-        }
-        dest_addr_t = dest_addr;
+        dest_addr_t = dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
+        pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
 
         /** Load PHash1 to LSB*/
         pst.dest_addr = (phys_addr_t)(cmd_q->sb_sha * CCP_SB_BYTES);
@@ -1683,10 +1660,7 @@ ccp_perform_hmac(struct rte_crypto_op *op,
 
         /** Load PHash2 to LSB*/
         addr += session->auth.ctx_len;
-        if (iommu_mode == 2)
-                pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
-        else
-                pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
+        pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
         pst.dest_addr = (phys_addr_t)(cmd_q->sb_sha * CCP_SB_BYTES);
         pst.len = session->auth.ctx_len;
         pst.dir = 1;
@@ -1774,14 +1748,8 @@ ccp_perform_sha(struct rte_crypto_op *op,
                               op->sym->auth.data.offset);
         append_ptr = (void *)rte_pktmbuf_append(op->sym->m_src,
                                         session->auth.ctx_len);
-        if (iommu_mode == 2) {
-                dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
-                pst.src_addr = (phys_addr_t)sha_ctx;
-        } else {
-                dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
-                pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)
-                                                session->auth.ctx);
-        }
+        pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)session->auth.ctx);
+        dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
 
 
         /** Passthru sha context*/
@@ -1871,15 +1839,8 @@ ccp_perform_sha3_hmac(struct rte_crypto_op *op,
                 CCP_LOG_ERR("CCP MBUF append failed\n");
                 return -1;
         }
-        if (iommu_mode == 2) {
-                dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
-                ctx_paddr = (phys_addr_t)rte_mem_virt2iova(
-                                        session->auth.pre_compute);
-        } else {
-                dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
-                ctx_paddr = (phys_addr_t)rte_mem_virt2phy(
-                                        session->auth.pre_compute);
-        }
+        dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
+        ctx_paddr = (phys_addr_t)rte_mem_virt2iova(session->auth.pre_compute);
         dest_addr_t = dest_addr + (session->auth.ctx_len / 2);
         desc = &cmd_q->qbase_desc[cmd_q->qidx];
         memset(desc, 0, Q_DESC_SIZE);
@@ -2017,13 +1978,8 @@ ccp_perform_sha3(struct rte_crypto_op *op,
                 CCP_LOG_ERR("CCP MBUF append failed\n");
                 return -1;
         }
-        if (iommu_mode == 2) {
-                dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
-                ctx_paddr = (phys_addr_t)rte_mem_virt2iova((void *)ctx_addr);
-        } else {
-                dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
-                ctx_paddr = (phys_addr_t)rte_mem_virt2phy((void *)ctx_addr);
-        }
+        dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
+        ctx_paddr = (phys_addr_t)rte_mem_virt2iova((void *)ctx_addr);
 
         ctx_addr = session->auth.sha3_ctx;
 
@@ -2099,13 +2055,7 @@ ccp_perform_aes_cmac(struct rte_crypto_op *op,
 
                 ctx_addr = session->auth.pre_compute;
                 memset(ctx_addr, 0, AES_BLOCK_SIZE);
-                if (iommu_mode == 2)
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
-                                                        (void *)ctx_addr);
-                else
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
-                                                        (void *)ctx_addr);
-
+                pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)ctx_addr);
                 pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
                 pst.len = CCP_SB_BYTES;
                 pst.dir = 1;
 
@@ -2143,12 +2093,7 @@ ccp_perform_aes_cmac(struct rte_crypto_op *op,
         } else {
                 ctx_addr = session->auth.pre_compute + CCP_SB_BYTES;
                 memset(ctx_addr, 0, AES_BLOCK_SIZE);
-                if (iommu_mode == 2)
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
-                                                        (void *)ctx_addr);
-                else
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
-                                                        (void *)ctx_addr);
+                pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)ctx_addr);
                 pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
                 pst.len = CCP_SB_BYTES;
                 pst.dir = 1;
@@ -2342,12 +2287,7 @@ ccp_perform_3des(struct rte_crypto_op *op,
 
                 rte_memcpy(lsb_buf + (CCP_SB_BYTES - session->iv.length),
                            iv, session->iv.length);
-                if (iommu_mode == 2)
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
-                                                        (void *) lsb_buf);
-                else
-                        pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
-                                                        (void *) lsb_buf);
+                pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *) lsb_buf);
                 pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
                 pst.len = CCP_SB_BYTES;
                 pst.dir = 1;
@@ -2370,11 +2310,7 @@ ccp_perform_3des(struct rte_crypto_op *op,
         else
                 dest_addr = src_addr;
 
-        if (iommu_mode == 2)
-                key_addr = rte_mem_virt2iova(session->cipher.key_ccp);
-        else
-                key_addr = rte_mem_virt2phy(session->cipher.key_ccp);
-
+        key_addr = rte_mem_virt2iova(session->cipher.key_ccp);
         desc = &cmd_q->qbase_desc[cmd_q->qidx];
         memset(desc, 0, Q_DESC_SIZE);
 
@@ -2768,12 +2704,7 @@ process_ops_to_enqueue(struct ccp_qp *qp,
         b_info->lsb_buf_idx = 0;
         b_info->desccnt = 0;
         b_info->cmd_q = cmd_q;
-        if (iommu_mode == 2)
-                b_info->lsb_buf_phys =
-                        (phys_addr_t)rte_mem_virt2iova((void *)b_info->lsb_buf);
-        else
-                b_info->lsb_buf_phys =
-                        (phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+        b_info->lsb_buf_phys = (phys_addr_t)rte_mem_virt2iova((void *)b_info->lsb_buf);
 
         rte_atomic64_sub(&b_info->cmd_q->free_slots, slots_req);
 
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 410e62121e..14c54929c4 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -23,7 +23,6 @@
 #include "ccp_pci.h"
 #include "ccp_pmd_private.h"
 
-int iommu_mode;
 struct ccp_list ccp_list = TAILQ_HEAD_INITIALIZER(ccp_list);
 static int ccp_dev_id;
 
@@ -652,7 +651,7 @@ is_ccp_device(const char *dirname,
 static int
 ccp_probe_device(int ccp_type, struct rte_pci_device *pci_dev)
 {
-        struct ccp_device *ccp_dev = NULL;
+        struct ccp_device *ccp_dev;
 
         ccp_dev = rte_zmalloc("ccp_device", sizeof(*ccp_dev),
                               RTE_CACHE_LINE_SIZE);
@@ -683,16 +682,10 @@ ccp_probe_devices(struct rte_pci_device *pci_dev,
         struct dirent *d;
         DIR *dir;
         int ret = 0;
-        int module_idx = 0;
         uint16_t domain;
         uint8_t bus, devid, function;
         char dirname[PATH_MAX];
 
-        module_idx = ccp_check_pci_uio_module();
-        if (module_idx < 0)
-                return -1;
-
-        iommu_mode = module_idx;
         TAILQ_INIT(&ccp_list);
         dir = opendir(SYSFS_PCI_DEVICES);
         if (dir == NULL)
diff --git a/drivers/crypto/ccp/ccp_pci.c b/drivers/crypto/ccp/ccp_pci.c
index c941e222c7..bd1a037f76 100644
--- a/drivers/crypto/ccp/ccp_pci.c
+++ b/drivers/crypto/ccp/ccp_pci.c
@@ -11,40 +11,6 @@
 #include
 
 #include "ccp_pci.h"
-#include "ccp_pmd_private.h"
-
-static const char * const uio_module_names[] = {
-        "igb_uio",
-        "uio_pci_generic",
-        "vfio_pci"
-};
-
-int
-ccp_check_pci_uio_module(void)
-{
-        FILE *fp;
-        int i;
-        char buf[BUFSIZ];
-
-        fp = fopen(PROC_MODULES, "r");
-        if (fp == NULL)
-                return -1;
-        i = 0;
-        while (uio_module_names[i] != NULL) {
-                while (fgets(buf, sizeof(buf), fp) != NULL) {
-                        if (!strncmp(buf, uio_module_names[i],
-                                     strlen(uio_module_names[i]))) {
-                                fclose(fp);
-                                return i;
-                        }
-                }
-                i++;
-                rewind(fp);
-        }
-        fclose(fp);
-        CCP_LOG_DBG("Insert igb_uio or uio_pci_generic kernel module(s)");
-        return -1;/* uio not inserted */
-}
 
 /*
  * split up a pci address into its constituent parts.
diff --git a/drivers/crypto/ccp/ccp_pci.h b/drivers/crypto/ccp/ccp_pci.h
index 7ed3bac406..f393a04d6f 100644
--- a/drivers/crypto/ccp/ccp_pci.h
+++ b/drivers/crypto/ccp/ccp_pci.h
@@ -10,9 +10,6 @@
 #include
 
 #define SYSFS_PCI_DEVICES "/sys/bus/pci/devices"
-#define PROC_MODULES "/proc/modules"
-
-int ccp_check_pci_uio_module(void);
 
 int ccp_parse_pci_addr_format(const char *buf, int bufsize, uint16_t *domain,
                               uint8_t *bus, uint8_t *devid, uint8_t *function);
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index c5ec952e36..0d84c8cd0e 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -22,7 +22,6 @@
 static unsigned int ccp_pmd_init_done;
 uint8_t ccp_cryptodev_driver_id;
 uint8_t cryptodev_cnt;
-extern void *sha_ctx;
 
 struct ccp_pmd_init_params {
         struct rte_cryptodev_pmd_init_params def_p;
@@ -213,7 +212,6 @@ cryptodev_ccp_remove(struct rte_pci_device *pci_dev)
                 return -ENODEV;
 
         ccp_pmd_init_done = 0;
-        rte_free(sha_ctx);
         RTE_LOG(INFO, PMD, "Closing ccp device %s on numa socket %u\n",
                         name, rte_socket_id());
 
@@ -300,7 +298,6 @@ cryptodev_ccp_probe(struct rte_pci_driver *pci_drv __rte_unused,
                 .auth_opt = CCP_PMD_AUTH_OPT_CCP,
         };
 
-        sha_ctx = (void *)rte_malloc(NULL, SHA512_DIGEST_SIZE, 64);
         if (ccp_pmd_init_done) {
                 RTE_LOG(INFO, PMD, "CCP PMD already initialized\n");
                 return -EFAULT;
-- 
2.37.2