From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mga04.intel.com (mga04.intel.com [192.55.52.120])
 by dpdk.org (Postfix) with ESMTP id 61E2D2C38;
 Thu, 23 Aug 2018 04:58:26 +0200 (CEST)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga104.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 22 Aug 2018 19:58:25 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.53,276,1531810800"; d="scan'208";a="84165955"
Received: from debian.sh.intel.com ([10.67.104.194])
 by orsmga001.jf.intel.com with ESMTP; 22 Aug 2018 19:58:23 -0700
From: Tiwei Bie <tiwei.bie@intel.com>
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: seanbh@gmail.com, anatoly.burakov@intel.com, stable@dpdk.org
Date: Thu, 23 Aug 2018 10:57:21 +0800
Message-Id: <20180823025721.18300-1-tiwei.bie@intel.com>
X-Mailer: git-send-email 2.18.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] [PATCH] net/virtio-user: fix memory hotplug support
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches for DPDK stable branches <stable.dpdk.org>
X-List-Received-Date: Thu, 23 Aug 2018 02:58:27 -0000

Deadlock can occur when allocating memory if a vhost-kernel based
virtio-user device is in use: the memory table is rebuilt from within
the memory event callback, where the memory hotplug lock is already
held, so the rte_memseg_contig_walk() call there blocks on the same
lock. Besides, with memory hotplug it is possible to have far more
than 64 non-contiguous hugepage-backed memory regions, which can break
the handling of the VHOST_SET_MEM_TABLE request. A better solution is
to have virtio-user pass all the VA ranges reserved by DPDK to
vhost-kernel.

Bugzilla ID: 81
Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org

Reported-by: Seán Harte <seanbh@gmail.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
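For reference only, not part of the change below: a minimal sketch of
how a region table like the one built by prepare_vhost_memory_kernel()
reaches the kernel. It uses only the uapi from <linux/vhost.h>; the
vhostfd argument is assumed to be an already-open /dev/vhost-net
descriptor, and error reporting is trimmed for brevity.

/* Sketch: hand a region table to the vhost-kernel backend. */
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int
set_mem_table(int vhostfd, const struct vhost_memory_region *regions,
	      unsigned int nregions)
{
	struct vhost_memory *vm;
	int ret;

	/* struct vhost_memory ends in a flexible array of regions. */
	vm = calloc(1, sizeof(*vm) + nregions * sizeof(*regions));
	if (vm == NULL)
		return -1;

	memcpy(vm->regions, regions, nregions * sizeof(*regions));
	vm->nregions = nregions;

	/* The kernel rejects tables larger than its region limit
	 * (64 by default), which is why the loop below must stay
	 * under max_regions. */
	ret = ioctl(vhostfd, VHOST_SET_MEM_TABLE, vm);

	free(vm);
	return ret;
}

The diff itself only changes how the table is populated; the
VHOST_SET_MEM_TABLE ioctl path is untouched.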
 drivers/net/virtio/virtio_user/vhost_kernel.c | 64 ++++++++-----------
 1 file changed, 27 insertions(+), 37 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index b2444096c..49bd1b821 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -70,41 +70,12 @@ static uint64_t vhost_req_user_to_kernel[] = {
 	[VHOST_USER_SET_MEM_TABLE] = VHOST_SET_MEM_TABLE,
 };
 
-struct walk_arg {
-	struct vhost_memory_kernel *vm;
-	uint32_t region_nr;
-};
-static int
-add_memory_region(const struct rte_memseg_list *msl __rte_unused,
-		const struct rte_memseg *ms, size_t len, void *arg)
-{
-	struct walk_arg *wa = arg;
-	struct vhost_memory_region *mr;
-	void *start_addr;
-
-	if (wa->region_nr >= max_regions)
-		return -1;
-
-	mr = &wa->vm->regions[wa->region_nr++];
-	start_addr = ms->addr;
-
-	mr->guest_phys_addr = (uint64_t)(uintptr_t)start_addr;
-	mr->userspace_addr = (uint64_t)(uintptr_t)start_addr;
-	mr->memory_size = len;
-	mr->mmap_offset = 0;
-
-	return 0;
-}
-
-/* By default, vhost kernel module allows 64 regions, but DPDK allows
- * 256 segments. As a relief, below function merges those virtually
- * adjacent memsegs into one region.
- */
 static struct vhost_memory_kernel *
 prepare_vhost_memory_kernel(void)
 {
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
 	struct vhost_memory_kernel *vm;
-	struct walk_arg wa;
+	uint32_t region_nr = 0, i;
 
 	vm = malloc(sizeof(struct vhost_memory_kernel) +
 			max_regions *
@@ -112,15 +83,34 @@ prepare_vhost_memory_kernel(void)
 	if (!vm)
 		return NULL;
 
-	wa.region_nr = 0;
-	wa.vm = vm;
+	for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
+		struct rte_memseg_list *msl = &mcfg->memsegs[i];
+		struct vhost_memory_region *mr;
+		void *start_addr;
+		uint64_t len;
 
-	if (rte_memseg_contig_walk(add_memory_region, &wa) < 0) {
-		free(vm);
-		return NULL;
+		start_addr = msl->base_va;
+		len = msl->page_sz * msl->memseg_arr.len;
+
+		if (start_addr == NULL || len == 0)
+			continue;
+
+		if (region_nr >= max_regions) {
+			free(vm);
+			return NULL;
+		}
+
+		mr = &vm->regions[region_nr++];
+		mr->guest_phys_addr = (uint64_t)(uintptr_t)start_addr;
+		mr->userspace_addr = (uint64_t)(uintptr_t)start_addr;
+		mr->memory_size = len;
+		mr->mmap_offset = 0; /* flags_padding */
+
+		PMD_DRV_LOG(DEBUG, "index=%u, addr=%p len=%" PRIu64,
+				i, start_addr, len);
 	}
 
-	vm->nregions = wa.region_nr;
+	vm->nregions = region_nr;
 	vm->padding = 0;
 	return vm;
 }
-- 
2.18.0