From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
To: Maxime Coquelin
Cc: Xuan Ding, Chenbo Xia, Xueming Li, dpdk stable
Date: Mon, 23 Nov 2020 17:12:06 +0000
Message-Id: <20201123171222.79398-14-ktraynor@redhat.com>
In-Reply-To: <20201123171222.79398-1-ktraynor@redhat.com>
References: <20201123171222.79398-1-ktraynor@redhat.com>
Subject: [dpdk-stable] patch 'vhost: fix error path when setting memory tables' has been queued to LTS release 18.11.11
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to LTS release 18.11.11

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/27/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/36b1fdd3248fcf1a87eded42f82ca2648fa54293

Thanks.
Kevin.

---
>From 36b1fdd3248fcf1a87eded42f82ca2648fa54293 Mon Sep 17 00:00:00 2001
From: Maxime Coquelin
Date: Thu, 12 Nov 2020 18:10:27 +0100
Subject: [PATCH] vhost: fix error path when setting memory tables

[ upstream commit 726a14eb83a594011aba5e09159b47f12bc1bad0 ]

If an error is encountered before the memory regions are parsed, the
file descriptors for these shared buffers are leaked.

This patch fixes this by closing the message file descriptors on error,
taking care of avoiding double closing of the file descriptors.

guest_pages is also freed, even though it was not leaked as its pointer
was not overridden on subsequent function calls.

Fixes: 8f972312b8f4 ("vhost: support vhost-user")

Reported-by: Xuan Ding
Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
Reviewed-by: Xueming Li
---
 lib/librte_vhost/vhost_user.c | 60 ++++++++++++++++++++++-------------
 1 file changed, 38 insertions(+), 22 deletions(-)

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 754759f6e0..f35ea97b6a 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -89,6 +89,13 @@ close_msg_fds(struct VhostUserMsg *msg)
 	int i;
 
-	for (i = 0; i < msg->fd_num; i++)
-		close(msg->fds[i]);
+	for (i = 0; i < msg->fd_num; i++) {
+		int fd = msg->fds[i];
+
+		if (fd == -1)
+			continue;
+
+		msg->fds[i] = -1;
+		close(fd);
+	}
 }
 
@@ -1005,5 +1012,4 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 	uint32_t i;
 	int populate;
-	int fd;
 
 	if (validate_msg_fds(msg, memory->nregions) != 0)
@@ -1013,5 +1019,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		RTE_LOG(ERR, VHOST_CONFIG,
 			"too many memory regions (%u)\n", memory->nregions);
-		return VH_RESULT_ERR;
+		goto close_msg_fds;
 	}
 
@@ -1046,5 +1052,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 				"for dev->guest_pages\n",
 				dev->vid);
-			return VH_RESULT_ERR;
+			goto close_msg_fds;
 		}
 	}
@@ -1056,10 +1062,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 			"(%d) failed to allocate memory for dev->mem\n",
 			dev->vid);
-		return VH_RESULT_ERR;
+		goto free_guest_pages;
 	}
 	dev->mem->nregions = memory->nregions;
 
 	for (i = 0; i < memory->nregions; i++) {
-		fd  = msg->fds[i];
 		reg = &dev->mem->regions[i];
 
@@ -1067,5 +1072,11 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		reg->guest_user_addr = memory->regions[i].userspace_addr;
 		reg->size            = memory->regions[i].memory_size;
-		reg->fd              = fd;
+		reg->fd              = msg->fds[i];
+
+		/*
+		 * Assign invalid file descriptor value to avoid double
+		 * closing on error path.
+		 */
+		msg->fds[i] = -1;
 
 		mmap_offset = memory->regions[i].mmap_offset;
@@ -1077,5 +1088,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 				"(%#"PRIx64") overflow\n",
 				mmap_offset, reg->size);
-			goto err_mmap;
+			goto free_mem_table;
 		}
 
@@ -1090,9 +1101,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		 * aligned.
 		 */
-		alignment = get_blk_size(fd);
+		alignment = get_blk_size(reg->fd);
 		if (alignment == (uint64_t)-1) {
 			RTE_LOG(ERR, VHOST_CONFIG,
 				"couldn't get hugepage size through fstat\n");
-			goto err_mmap;
+			goto free_mem_table;
 		}
 		mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment);
@@ -1110,15 +1121,15 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 				"or alignment (0x%" PRIx64 ") is invalid\n",
 				reg->size + mmap_offset, alignment);
-			goto err_mmap;
+			goto free_mem_table;
 		}
 
 		populate = (dev->dequeue_zero_copy) ? MAP_POPULATE : 0;
 		mmap_addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
-				 MAP_SHARED | populate, fd, 0);
+				 MAP_SHARED | populate, reg->fd, 0);
 
 		if (mmap_addr == MAP_FAILED) {
 			RTE_LOG(ERR, VHOST_CONFIG,
 				"mmap region %u failed.\n", i);
-			goto err_mmap;
+			goto free_mem_table;
 		}
 
@@ -1133,5 +1144,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 					"adding guest pages to region %u failed.\n",
 					i);
-				goto err_mmap;
+				goto free_mem_table;
 			}
 
@@ -1176,9 +1187,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 			RTE_LOG(ERR, VHOST_CONFIG,
 				"Failed to read qemu ack on postcopy set-mem-table\n");
-			goto err_mmap;
+			goto free_mem_table;
 		}
 
 		if (validate_msg_fds(&ack_msg, 0) != 0)
-			goto err_mmap;
+			goto free_mem_table;
 
 		if (ack_msg.request.master != VHOST_USER_SET_MEM_TABLE) {
@@ -1186,5 +1197,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 				"Bad qemu ack on postcopy set-mem-table (%d)\n",
 				ack_msg.request.master);
-			goto err_mmap;
+			goto free_mem_table;
 		}
 
@@ -1210,5 +1221,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 					i, dev->postcopy_ufd,
 					strerror(errno));
-				goto err_mmap;
+				goto free_mem_table;
 			}
 			RTE_LOG(INFO, VHOST_CONFIG,
@@ -1219,5 +1230,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 				(uint64_t)reg_struct.range.len - 1);
 #else
-			goto err_mmap;
+			goto free_mem_table;
 #endif
 		}
@@ -1238,5 +1249,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 			if (!dev) {
 				dev = *pdev;
-				goto err_mmap;
+				goto free_mem_table;
 			}
 
@@ -1249,8 +1260,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 	return VH_RESULT_OK;
 
-err_mmap:
+free_mem_table:
 	free_mem_region(dev);
 	rte_free(dev->mem);
 	dev->mem = NULL;
+free_guest_pages:
+	rte_free(dev->guest_pages);
+	dev->guest_pages = NULL;
+close_msg_fds:
+	close_msg_fds(msg);
 	return VH_RESULT_ERR;
 }
-- 
2.26.2

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2020-11-23 17:10:14.328560137 +0000
+++ 0014-vhost-fix-error-path-when-setting-memory-tables.patch	2020-11-23 17:10:13.993061585 +0000
@@ -1 +1 @@
-From 726a14eb83a594011aba5e09159b47f12bc1bad0 Mon Sep 17 00:00:00 2001
+From 36b1fdd3248fcf1a87eded42f82ca2648fa54293 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 726a14eb83a594011aba5e09159b47f12bc1bad0 ]
+
@@ -17 +18,0 @@
-Cc: stable@dpdk.org
@@ -28 +29 @@
-index 8a8726f8b8..3898c93d1f 100644
+index 754759f6e0..f35ea97b6a 100644
@@ -31 +32 @@
-@@ -100,6 +100,13 @@ close_msg_fds(struct VhostUserMsg *msg)
+@@ -89,6 +89,13 @@ close_msg_fds(struct VhostUserMsg *msg)
@@ -54 +55 @@
- 		VHOST_LOG_CONFIG(ERR,
+ 		RTE_LOG(ERR, VHOST_CONFIG,
@@ -56 +57 @@
--		return RTE_VHOST_MSG_RESULT_ERR;
+-		return VH_RESULT_ERR;
@@ -60 +61 @@
-@@ -1055,5 +1061,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1046,5 +1052,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -63 +64 @@
--			return RTE_VHOST_MSG_RESULT_ERR;
+-			return VH_RESULT_ERR;
@@ -67 +68 @@
-@@ -1065,10 +1071,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1056,10 +1062,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -70 +71 @@
--		return RTE_VHOST_MSG_RESULT_ERR;
+-		return VH_RESULT_ERR;
@@ -79 +80 @@
-@@ -1076,5 +1081,11 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1067,5 +1072,11 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -92 +93 @@
-@@ -1086,5 +1097,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1077,5 +1088,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -99 +100 @@
-@@ -1099,9 +1110,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1090,9 +1101,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -105 +106 @@
- 			VHOST_LOG_CONFIG(ERR,
+ 			RTE_LOG(ERR, VHOST_CONFIG,
@@ -111 +112 @@
-@@ -1119,15 +1130,15 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1110,15 +1121,15 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -118 +119 @@
- 		populate = dev->async_copy ? MAP_POPULATE : 0;
+ 		populate = (dev->dequeue_zero_copy) ? MAP_POPULATE : 0;
@@ -124 +125 @@
- 			VHOST_LOG_CONFIG(ERR,
+ 			RTE_LOG(ERR, VHOST_CONFIG,
@@ -130 +131 @@
-@@ -1142,5 +1153,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1133,5 +1144,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -137,2 +138,2 @@
-@@ -1185,9 +1196,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
- 			VHOST_LOG_CONFIG(ERR,
+@@ -1176,9 +1187,9 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+ 			RTE_LOG(ERR, VHOST_CONFIG,
@@ -149 +150 @@
-@@ -1195,5 +1206,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1186,5 +1197,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -156 +157 @@
-@@ -1219,5 +1230,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1210,5 +1221,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -162,2 +163,2 @@
- 			VHOST_LOG_CONFIG(INFO,
-@@ -1228,5 +1239,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+ 			RTE_LOG(INFO, VHOST_CONFIG,
+@@ -1219,5 +1230,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -170 +171 @@
-@@ -1250,5 +1261,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+@@ -1238,5 +1249,5 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
@@ -177,2 +178,2 @@
-@@ -1261,8 +1272,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
- 	return RTE_VHOST_MSG_RESULT_OK;
+@@ -1249,8 +1260,13 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
+ 	return VH_RESULT_OK;
@@ -190 +191 @@
- 	return RTE_VHOST_MSG_RESULT_ERR;
+ 	return VH_RESULT_ERR;
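
A note on the pattern the fix relies on, for anyone reviewing the backport:
each descriptor carried by the message is closed exactly once, either by the
region teardown after its ownership has been transferred, or by the message
cleanup if it is still held by the message. A minimal standalone sketch of
that idea (illustrative types and names only, not the DPDK sources):

/*
 * Sketch only: once a descriptor's ownership moves from the message into a
 * region, the message slot is set to -1 so a later cleanup pass cannot
 * close the same fd twice.
 */
#include <unistd.h>

#define MAX_FDS 8

struct msg {
	int fds[MAX_FDS];
	int fd_num;
};

struct region {
	int fd;
};

/* Close only the descriptors still owned by the message. */
void close_remaining_fds(struct msg *m)
{
	int i;

	for (i = 0; i < m->fd_num; i++) {
		int fd = m->fds[i];

		if (fd == -1)
			continue;

		m->fds[i] = -1;
		close(fd);
	}
}

/* Hand each fd over to a region; on failure, unwind without double close. */
int setup_regions(struct msg *m, struct region *regs)
{
	int i;

	for (i = 0; i < m->fd_num; i++) {
		regs[i].fd = m->fds[i];
		/* Ownership transferred: invalidate the message slot. */
		m->fds[i] = -1;

		if (regs[i].fd == -1)	/* stand-in for an mmap/validation failure */
			goto fail;
	}
	return 0;

fail:
	/* Release whatever the regions already own first (omitted), then... */
	close_remaining_fds(m);	/* safe: transferred fds are already -1 */
	return -1;
}

The queued hunks above apply the same idea in vhost_user_set_mem_table(),
with the free_mem_table/free_guest_pages/close_msg_fds labels unwinding only
what has actually been set up at the point of failure.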