From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, Maxime Coquelin, Chenbo Xia, Fan Zhang
Subject: [PATCH v3] vhost/crypto: fix build with GCC 12
Date: Thu, 16 Jun 2022 16:46:50 +0200
Message-Id: <20220616144650.1013920-1-david.marchand@redhat.com>
In-Reply-To: <20220518101657.1230416-13-david.marchand@redhat.com>
References: <20220518101657.1230416-13-david.marchand@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

GCC 12 raises the following warning:

In file included from ../lib/mempool/rte_mempool.h:46,
                 from ../lib/mbuf/rte_mbuf.h:38,
                 from ../lib/vhost/vhost_crypto.c:7:
../lib/vhost/vhost_crypto.c: In function ‘rte_vhost_crypto_fetch_requests’:
../lib/eal/x86/include/rte_memcpy.h:371:9: warning: array subscript 1 is
    outside array bounds of ‘struct virtio_crypto_op_data_req[1]’
    [-Warray-bounds]
  371 |         rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../lib/vhost/vhost_crypto.c:1178:42: note: while referencing ‘req’
 1178 |         struct virtio_crypto_op_data_req req;
      |                                          ^~~

Split this function and separate the per descriptor copy.
This makes the code clearer, and the compiler happier.

Note: logs for errors have been moved to callers to avoid duplicates.
Fixes: 3c79609fda7c ("vhost/crypto: handle virtually non-contiguous buffers")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v2:
- fixed 32-bits build,

Changes since v1:
- refactored copy function,

---
 lib/vhost/vhost_crypto.c | 123 +++++++++++++++------------------------
 1 file changed, 46 insertions(+), 77 deletions(-)

diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index b1c0eb6a0f..96ffb82a5d 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -565,94 +565,58 @@ get_data_ptr(struct vhost_crypto_data_req *vc_req,
 	return data;
 }
 
-static __rte_always_inline int
-copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
-		struct vhost_crypto_desc *head,
-		struct vhost_crypto_desc **cur_desc,
-		uint32_t size, uint32_t max_n_descs)
+static __rte_always_inline uint32_t
+copy_data_from_desc(void *dst, struct vhost_crypto_data_req *vc_req,
+		struct vhost_crypto_desc *desc, uint32_t size)
 {
-	struct vhost_crypto_desc *desc = *cur_desc;
-	uint64_t remain, addr, dlen, len;
-	uint32_t to_copy;
-	uint8_t *data = dst_data;
-	uint8_t *src;
-	int left = size;
-
-	to_copy = RTE_MIN(desc->len, (uint32_t)left);
-	dlen = to_copy;
-	src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
-			VHOST_ACCESS_RO);
-	if (unlikely(!src || !dlen))
-		return -1;
+	uint64_t remain;
+	uint64_t addr;
+
+	remain = RTE_MIN(desc->len, size);
+	addr = desc->addr;
+	do {
+		uint64_t len;
+		void *src;
+
+		len = remain;
+		src = IOVA_TO_VVA(void *, vc_req, addr, &len, VHOST_ACCESS_RO);
+		if (unlikely(src == NULL || len == 0))
+			return 0;
 
-	rte_memcpy((uint8_t *)data, src, dlen);
-	data += dlen;
+		rte_memcpy(dst, src, len);
+		remain -= len;
+		/* cast is needed for 32-bit architecture */
+		dst = RTE_PTR_ADD(dst, (size_t)len);
+		addr += len;
+	} while (unlikely(remain != 0));
 
-	if (unlikely(dlen < to_copy)) {
-		remain = to_copy - dlen;
-		addr = desc->addr + dlen;
+	return RTE_MIN(desc->len, size);
+}
 
-		while (remain) {
-			len = remain;
-			src = IOVA_TO_VVA(uint8_t *, vc_req, addr, &len,
-					VHOST_ACCESS_RO);
-			if (unlikely(!src || !len)) {
-				VC_LOG_ERR("Failed to map descriptor");
-				return -1;
-			}
-
-			rte_memcpy(data, src, len);
-			addr += len;
-			remain -= len;
-			data += len;
-		}
-	}
+static __rte_always_inline int
+copy_data(void *data, struct vhost_crypto_data_req *vc_req,
+	struct vhost_crypto_desc *head, struct vhost_crypto_desc **cur_desc,
+	uint32_t size, uint32_t max_n_descs)
+{
+	struct vhost_crypto_desc *desc = *cur_desc;
+	uint32_t left = size;
 
-	left -= to_copy;
+	do {
+		uint32_t copied;
 
-	while (desc >= head && desc - head < (int)max_n_descs && left) {
-		desc++;
-		to_copy = RTE_MIN(desc->len, (uint32_t)left);
-		dlen = to_copy;
-		src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
-				VHOST_ACCESS_RO);
-		if (unlikely(!src || !dlen)) {
-			VC_LOG_ERR("Failed to map descriptor");
+		copied = copy_data_from_desc(data, vc_req, desc, left);
+		if (copied == 0)
 			return -1;
-		}
-
-		rte_memcpy(data, src, dlen);
-		data += dlen;
-
-		if (unlikely(dlen < to_copy)) {
-			remain = to_copy - dlen;
-			addr = desc->addr + dlen;
-
-			while (remain) {
-				len = remain;
-				src = IOVA_TO_VVA(uint8_t *, vc_req, addr, &len,
-						VHOST_ACCESS_RO);
-				if (unlikely(!src || !len)) {
-					VC_LOG_ERR("Failed to map descriptor");
-					return -1;
-				}
-
-				rte_memcpy(data, src, len);
-				addr += len;
-				remain -= len;
-				data += len;
-			}
-		}
-
-		left -= to_copy;
-	}
+		left -= copied;
+		data = RTE_PTR_ADD(data, copied);
+		desc++;
+	} while (desc < head + max_n_descs && left != 0);
 
-	if (unlikely(left > 0)) {
-		VC_LOG_ERR("Incorrect virtio descriptor");
+	if (unlikely(left != 0))
 		return -1;
-	}
 
-	if (unlikely(desc - head == (int)max_n_descs))
+	if (unlikely(desc == head + max_n_descs))
 		*cur_desc = NULL;
 	else
 		*cur_desc = desc + 1;
@@ -852,6 +816,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	/* iv */
 	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
 			cipher->para.iv_len, max_n_descs))) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -883,6 +848,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, head, &desc, cipher->para.src_data_len,
 				max_n_descs) < 0)) {
+			VC_LOG_ERR("Incorrect virtio descriptor");
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1006,6 +972,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	/* iv */
 	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
 			chain->para.iv_len, max_n_descs) < 0)) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -1037,6 +1004,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, head, &desc, chain->para.src_data_len,
 				max_n_descs) < 0)) {
+			VC_LOG_ERR("Incorrect virtio descriptor");
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1121,6 +1089,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	if (unlikely(copy_data(digest_addr, vc_req, head, &digest_desc,
 			chain->para.hash_result_len,
 			max_n_descs) < 0)) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
-- 
2.36.1