From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: bruce.richardson@intel.com, stable@dpdk.org, Maxime Coquelin, Chenbo Xia, Fan Zhang
Subject: [PATCH v2] vhost/crypto: fix build with GCC 12
Date: Thu, 16 Jun 2022 11:32:11 +0200
Message-Id: <20220616093211.881984-1-david.marchand@redhat.com>
In-Reply-To: <20220518101657.1230416-11-david.marchand@redhat.com>
References: <20220518101657.1230416-11-david.marchand@redhat.com>

GCC 12 raises the following warning:

In file included from ../lib/mempool/rte_mempool.h:46,
                 from ../lib/mbuf/rte_mbuf.h:38,
                 from ../lib/vhost/vhost_crypto.c:7:
../lib/vhost/vhost_crypto.c: In function ‘rte_vhost_crypto_fetch_requests’:
../lib/eal/x86/include/rte_memcpy.h:371:9: warning: array subscript 1 is
	outside array bounds of ‘struct virtio_crypto_op_data_req[1]’
	[-Warray-bounds]
  371 |         rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../lib/vhost/vhost_crypto.c:1178:42: note: while referencing ‘req’
 1178 |         struct virtio_crypto_op_data_req req;
      |                                          ^~~

Split this function and separate the per descriptor copy.
This makes the code clearer, and the compiler happier.

Note: logs for errors have been moved to callers to avoid duplicates.

Fixes: 3c79609fda7c ("vhost/crypto: handle virtually non-contiguous buffers")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changes since v1:
- refactored copy function,

---
 lib/vhost/vhost_crypto.c | 122 +++++++++++++++------------------------
 1 file changed, 45 insertions(+), 77 deletions(-)

diff --git a/lib/vhost/vhost_crypto.c b/lib/vhost/vhost_crypto.c
index b1c0eb6a0f..1bc42896ea 100644
--- a/lib/vhost/vhost_crypto.c
+++ b/lib/vhost/vhost_crypto.c
@@ -565,94 +565,57 @@ get_data_ptr(struct vhost_crypto_data_req *vc_req,
 	return data;
 }
 
-static __rte_always_inline int
-copy_data(void *dst_data, struct vhost_crypto_data_req *vc_req,
-		struct vhost_crypto_desc *head,
-		struct vhost_crypto_desc **cur_desc,
-		uint32_t size, uint32_t max_n_descs)
+static __rte_always_inline uint32_t
+copy_data_from_desc(void *dst, struct vhost_crypto_data_req *vc_req,
+	struct vhost_crypto_desc *desc, uint32_t size)
 {
-	struct vhost_crypto_desc *desc = *cur_desc;
-	uint64_t remain, addr, dlen, len;
-	uint32_t to_copy;
-	uint8_t *data = dst_data;
-	uint8_t *src;
-	int left = size;
-
-	to_copy = RTE_MIN(desc->len, (uint32_t)left);
-	dlen = to_copy;
-	src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
-			VHOST_ACCESS_RO);
-	if (unlikely(!src || !dlen))
-		return -1;
+	uint64_t remain;
+	uint64_t addr;
+
+	remain = RTE_MIN(desc->len, size);
+	addr = desc->addr;
+	do {
+		uint64_t len;
+		void *src;
+
+		len = remain;
+		src = IOVA_TO_VVA(void *, vc_req, addr, &len, VHOST_ACCESS_RO);
+		if (unlikely(src == NULL || len == 0))
+			return 0;
 
-	rte_memcpy((uint8_t *)data, src, dlen);
-	data += dlen;
+		rte_memcpy(dst, src, len);
+		remain -= len;
+		dst = RTE_PTR_ADD(dst, len);
+		addr += len;
+	} while (unlikely(remain != 0));
 
-	if (unlikely(dlen < to_copy)) {
-		remain = to_copy - dlen;
-		addr = desc->addr + dlen;
+	return RTE_MIN(desc->len, size);
+}
 
-		while (remain) {
-			len = remain;
-			src = IOVA_TO_VVA(uint8_t *, vc_req, addr, &len,
-					VHOST_ACCESS_RO);
-			if (unlikely(!src || !len)) {
-				VC_LOG_ERR("Failed to map descriptor");
-				return -1;
-			}
-			rte_memcpy(data, src, len);
-			addr += len;
-			remain -= len;
-			data += len;
-		}
-	}
+static __rte_always_inline int
+copy_data(void *data, struct vhost_crypto_data_req *vc_req,
+	struct vhost_crypto_desc *head, struct vhost_crypto_desc **cur_desc,
+	uint32_t size, uint32_t max_n_descs)
+{
+	struct vhost_crypto_desc *desc = *cur_desc;
+	uint32_t left = size;
 
-	left -= to_copy;
+	do {
+		uint32_t copied;
 
-	while (desc >= head && desc - head < (int)max_n_descs && left) {
-		desc++;
-		to_copy = RTE_MIN(desc->len, (uint32_t)left);
-		dlen = to_copy;
-		src = IOVA_TO_VVA(uint8_t *, vc_req, desc->addr, &dlen,
-				VHOST_ACCESS_RO);
-		if (unlikely(!src || !dlen)) {
-			VC_LOG_ERR("Failed to map descriptor");
+		copied = copy_data_from_desc(data, vc_req, desc, left);
+		if (copied == 0)
 			return -1;
-		}
-
-		rte_memcpy(data, src, dlen);
-		data += dlen;
-
-		if (unlikely(dlen < to_copy)) {
-			remain = to_copy - dlen;
-			addr = desc->addr + dlen;
-
-			while (remain) {
-				len = remain;
-				src = IOVA_TO_VVA(uint8_t *, vc_req, addr, &len,
-						VHOST_ACCESS_RO);
-				if (unlikely(!src || !len)) {
-					VC_LOG_ERR("Failed to map descriptor");
-					return -1;
-				}
-
-				rte_memcpy(data, src, len);
-				addr += len;
-				remain -= len;
-				data += len;
-			}
-		}
-
-		left -= to_copy;
-	}
+		left -= copied;
+		data = RTE_PTR_ADD(data, copied);
+		desc++;
+	} while (desc < head + max_n_descs && left != 0);
 
-	if (unlikely(left > 0)) {
-		VC_LOG_ERR("Incorrect virtio descriptor");
+	if (unlikely(left != 0))
 		return -1;
-	}
 
-	if (unlikely(desc - head == (int)max_n_descs))
+	if (unlikely(desc == head + max_n_descs))
 		*cur_desc = NULL;
 	else
 		*cur_desc = desc + 1;
@@ -852,6 +815,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	/* iv */
 	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
 			cipher->para.iv_len, max_n_descs))) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -883,6 +847,7 @@ prepare_sym_cipher_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, head, &desc, cipher->para.src_data_len,
 				max_n_descs) < 0)) {
+			VC_LOG_ERR("Incorrect virtio descriptor");
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1006,6 +971,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 	/* iv */
 	if (unlikely(copy_data(iv_data, vc_req, head, &desc,
 			chain->para.iv_len, max_n_descs) < 0)) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
@@ -1037,6 +1003,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 		if (unlikely(copy_data(rte_pktmbuf_mtod(m_src, uint8_t *),
 				vc_req, head, &desc, chain->para.src_data_len,
 				max_n_descs) < 0)) {
+			VC_LOG_ERR("Incorrect virtio descriptor");
 			ret = VIRTIO_CRYPTO_BADMSG;
 			goto error_exit;
 		}
@@ -1121,6 +1088,7 @@ prepare_sym_chain_op(struct vhost_crypto *vcrypto, struct rte_crypto_op *op,
 
 	if (unlikely(copy_data(digest_addr, vc_req, head, &digest_desc,
 			chain->para.hash_result_len, max_n_descs) < 0)) {
+		VC_LOG_ERR("Incorrect virtio descriptor");
 		ret = VIRTIO_CRYPTO_BADMSG;
 		goto error_exit;
 	}
-- 
2.36.1
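
For readers who want the shape of the refactoring without the diff markers, below is a small standalone C sketch of the same split: an inner helper copies what one descriptor describes, and an outer loop walks the chain until the requested size is gathered. The names here (struct desc, copy_from_one_desc, copy_from_chain) are illustrative stand-ins, not the DPDK API; the IOVA-to-VVA translation and error logging present in copy_data_from_desc()/copy_data() are deliberately omitted.

/*
 * Standalone sketch of the refactoring pattern used in this patch:
 * an inner helper copies the bytes reachable through one descriptor,
 * an outer loop walks the descriptor chain until 'size' bytes are
 * gathered. Names and types are simplified stand-ins, not DPDK code.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct desc {
	const uint8_t *addr; /* stand-in for a guest IOVA */
	uint32_t len;
};

/* Copy at most 'size' bytes described by one descriptor.
 * Returns the number of bytes copied; 0 would signal a mapping
 * failure in the real code. */
static uint32_t
copy_from_one_desc(void *dst, const struct desc *d, uint32_t size)
{
	uint32_t to_copy = d->len < size ? d->len : size;

	memcpy(dst, d->addr, to_copy);
	return to_copy;
}

/* Walk the chain until 'size' bytes are copied or the chain ends.
 * Returns 0 on success, -1 if the chain cannot provide 'size' bytes;
 * the caller is expected to log the error, as in the patch above. */
static int
copy_from_chain(void *dst, const struct desc *head, uint32_t n_descs,
		uint32_t size)
{
	const struct desc *d = head;
	uint32_t left = size;

	do {
		uint32_t copied = copy_from_one_desc(dst, d, left);

		if (copied == 0)
			return -1;
		left -= copied;
		dst = (uint8_t *)dst + copied;
		d++;
	} while (d < head + n_descs && left != 0);

	return left != 0 ? -1 : 0;
}

int
main(void)
{
	uint8_t a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8}, out[6];
	struct desc chain[2] = { { a, sizeof(a) }, { b, sizeof(b) } };

	/* Gathers 6 bytes spanning both descriptors: out = 1..6. */
	if (copy_from_chain(out, chain, 2, sizeof(out)) == 0)
		printf("copied %zu bytes, out[4] = %d\n", sizeof(out), out[4]);
	return 0;
}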