From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ciara Power <ciara.power@intel.com>
To: stable@dpdk.org
Cc: ktraynor@redhat.com, Ciara Power, John Griffin, Fiona Trahe,
	Deepak Kumar Jain
Subject: [PATCH 21.11] crypto/qat: fix raw API null algorithm digest
Date: Tue, 21 Nov 2023 16:22:09 +0000
Message-Id: <20231121162209.3945846-1-ciara.power@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-Id: patches for DPDK stable branches

[ upstream commit d7d52b37e89132f07121323c449ac838e6448ae0 ]

QAT HW generates a digest of 0x00 bytes, even when a digest of length 0
is requested for the NULL algorithm. This caused test failures when the
test vector had digest length 0, as the buffer had unexpectedly changed
bytes.

By placing the digest into the cookie for NULL authentication, the
buffer remains unchanged as expected, and the digest is placed to the
side, as it won't be used anyway.

This fix was previously added for the main QAT code path, but it also
needs to be included for the raw API code path.

Fixes: db0e952a5c01 ("crypto/qat: add NULL capability")

Signed-off-by: Ciara Power <ciara.power@intel.com>
---
 drivers/crypto/qat/qat_sym_hw_dp.c | 42 +++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
index 792ad2b213..8b505a87e0 100644
--- a/drivers/crypto/qat/qat_sym_hw_dp.c
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -251,13 +251,17 @@ qat_sym_dp_enqueue_single_auth(void *qp_data, uint8_t *drv_ctx,
 	struct qat_qp *qp = qp_data;
 	struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
 	struct qat_queue *tx_queue = &qp->tx_q;
+	struct qat_sym_op_cookie *cookie;
 	struct qat_sym_session *ctx = dp_ctx->session;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
 	uint32_t tail = dp_ctx->tail;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = digest;
 
 	req = (struct icp_qat_fw_la_bulk_req *)(
 		(uint8_t *)tx_queue->base_addr + tail);
+	cookie = qp->op_cookies[tail >> tx_queue->trailz];
 	tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
 	rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 	rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
@@ -266,7 +270,11 @@ qat_sym_dp_enqueue_single_auth(void *qp_data, uint8_t *drv_ctx,
 		return -1;
 	req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
 
-	enqueue_one_auth_job(ctx, req, digest, auth_iv, ofs,
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+		null_digest.iova = cookie->digest_null_phys_addr;
+		job_digest = &null_digest;
+	}
+	enqueue_one_auth_job(ctx, req, job_digest, auth_iv, ofs,
 		(uint32_t)data_len);
 
 	dp_ctx->tail = tail;
@@ -283,11 +291,14 @@ qat_sym_dp_enqueue_auth_jobs(void *qp_data, uint8_t *drv_ctx,
 	struct qat_qp *qp = qp_data;
 	struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
 	struct qat_queue *tx_queue = &qp->tx_q;
+	struct qat_sym_op_cookie *cookie;
 	struct qat_sym_session *ctx = dp_ctx->session;
 	uint32_t i, n;
 	uint32_t tail;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = NULL;
 
 	n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num);
 	if (unlikely(n == 0)) {
@@ -301,6 +312,7 @@ qat_sym_dp_enqueue_auth_jobs(void *qp_data, uint8_t *drv_ctx,
 	for (i = 0; i < n; i++) {
 		req = (struct icp_qat_fw_la_bulk_req *)(
 			(uint8_t *)tx_queue->base_addr + tail);
+		cookie = qp->op_cookies[tail >> tx_queue->trailz];
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
 		data_len = qat_sym_dp_parse_data_vec(qp, req,
@@ -309,7 +321,12 @@ qat_sym_dp_enqueue_auth_jobs(void *qp_data, uint8_t *drv_ctx,
 		if (unlikely(data_len < 0))
 			break;
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
-		enqueue_one_auth_job(ctx, req, &vec->digest[i],
+		if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+			null_digest.iova = cookie->digest_null_phys_addr;
+			job_digest = &null_digest;
+		} else
+			job_digest = &vec->digest[i];
+		enqueue_one_auth_job(ctx, req, job_digest,
 			&vec->auth_iv[i], ofs, (uint32_t)data_len);
 		tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
 	}
@@ -433,23 +450,31 @@ qat_sym_dp_enqueue_single_chain(void *qp_data, uint8_t *drv_ctx,
 	struct qat_qp *qp = qp_data;
 	struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
 	struct qat_queue *tx_queue = &qp->tx_q;
+	struct qat_sym_op_cookie *cookie;
 	struct qat_sym_session *ctx = dp_ctx->session;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
 	uint32_t tail = dp_ctx->tail;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest = digest;
 
 	req = (struct icp_qat_fw_la_bulk_req *)(
 		(uint8_t *)tx_queue->base_addr + tail);
+	cookie = qp->op_cookies[tail >> tx_queue->trailz];
 	tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
 	rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 	rte_prefetch0((uint8_t *)tx_queue->base_addr + tail);
 
 	data_len = qat_sym_dp_parse_data_vec(qp, req, data, n_data_vecs);
 	if (unlikely(data_len < 0))
 		return -1;
+	if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+		null_digest.iova = cookie->digest_null_phys_addr;
+		job_digest = &null_digest;
+	}
 	req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data;
 	if (unlikely(enqueue_one_chain_job(ctx, req, data, n_data_vecs,
-		cipher_iv, digest, auth_iv, ofs, (uint32_t)data_len)))
+		cipher_iv, job_digest, auth_iv, ofs, (uint32_t)data_len)))
 		return -1;
 
 	dp_ctx->tail = tail;
@@ -466,11 +491,14 @@ qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx,
 	struct qat_qp *qp = qp_data;
 	struct qat_sym_dp_ctx *dp_ctx = (void *)drv_ctx;
 	struct qat_queue *tx_queue = &qp->tx_q;
+	struct qat_sym_op_cookie *cookie;
 	struct qat_sym_session *ctx = dp_ctx->session;
 	uint32_t i, n;
 	uint32_t tail;
 	struct icp_qat_fw_la_bulk_req *req;
 	int32_t data_len;
+	struct rte_crypto_va_iova_ptr null_digest;
+	struct rte_crypto_va_iova_ptr *job_digest;
 
 	n = QAT_SYM_DP_GET_MAX_ENQ(qp, dp_ctx->cached_enqueue, vec->num);
 	if (unlikely(n == 0)) {
@@ -484,6 +512,7 @@ qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx,
 	for (i = 0; i < n; i++) {
 		req = (struct icp_qat_fw_la_bulk_req *)(
 			(uint8_t *)tx_queue->base_addr + tail);
+		cookie = qp->op_cookies[tail >> tx_queue->trailz];
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
 		data_len = qat_sym_dp_parse_data_vec(qp, req,
@@ -491,10 +520,15 @@ qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx,
 			vec->src_sgl[i].num);
 		if (unlikely(data_len < 0))
 			break;
+		if (ctx->qat_hash_alg == ICP_QAT_HW_AUTH_ALGO_NULL) {
+			null_digest.iova = cookie->digest_null_phys_addr;
+			job_digest = &null_digest;
+		} else
+			job_digest = &vec->digest[i];
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
 		if (unlikely(enqueue_one_chain_job(ctx, req,
 			vec->src_sgl[i].vec, vec->src_sgl[i].num,
-			&vec->iv[i], &vec->digest[i],
+			&vec->iv[i], job_digest,
 			&vec->auth_iv[i], ofs, (uint32_t)data_len)))
 			break;
 
-- 
2.25.1