From: Tiwei Bie <tiwei.bie@intel.com>
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: seanbh@gmail.com, anatoly.burakov@intel.com, stable@dpdk.org
Date: Wed, 5 Sep 2018 12:28:52 +0800
Message-Id: <20180905042852.6212-4-tiwei.bie@intel.com>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180905042852.6212-1-tiwei.bie@intel.com>
References: <20180905042852.6212-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 3/3] net/virtio-user: fix memory hotplug support in vhost-kernel

Due to memory hotplug, there can be many more hugepage-backed memory
regions than vhost-kernel supports (64 by default), which may cause
problems. A better fix is to have virtio-user pass to vhost-kernel all
the memory ranges reserved by DPDK, one region per contiguous range,
so the region count no longer depends on the number of hugepages.

Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
Cc: stable@dpdk.org

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_user/vhost_kernel.c | 38 +++++++++----------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
index 897fee0af..9338166d9 100644
--- a/drivers/net/virtio/virtio_user/vhost_kernel.c
+++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
@@ -70,41 +70,41 @@ static uint64_t vhost_req_user_to_kernel[] = {
 	[VHOST_USER_SET_MEM_TABLE] = VHOST_SET_MEM_TABLE,
 };
 
-struct walk_arg {
-	struct vhost_memory_kernel *vm;
-	uint32_t region_nr;
-};
 static int
-add_memory_region(const struct rte_memseg_list *msl __rte_unused,
-		const struct rte_memseg *ms, size_t len, void *arg)
+add_memseg_list(const struct rte_memseg_list *msl, void *arg)
 {
-	struct walk_arg *wa = arg;
+	struct vhost_memory_kernel *vm = arg;
 	struct vhost_memory_region *mr;
 	void *start_addr;
+	uint64_t len;
 
-	if (wa->region_nr >= max_regions)
+	if (vm->nregions >= max_regions)
 		return -1;
 
-	mr = &wa->vm->regions[wa->region_nr++];
-	start_addr = ms->addr;
+	start_addr = msl->base_va;
+	len = msl->page_sz * msl->memseg_arr.len;
+
+	mr = &vm->regions[vm->nregions++];
 
 	mr->guest_phys_addr = (uint64_t)(uintptr_t)start_addr;
 	mr->userspace_addr = (uint64_t)(uintptr_t)start_addr;
 	mr->memory_size = len;
-	mr->mmap_offset = 0;
+	mr->mmap_offset = 0; /* flags_padding */
+
+	PMD_DRV_LOG(DEBUG, "index=%u addr=%p len=%" PRIu64,
+			vm->nregions - 1, start_addr, len);
 
 	return 0;
 }
 
-/* By default, vhost kernel module allows 64 regions, but DPDK allows
- * 256 segments. As a relief, below function merges those virtually
- * adjacent memsegs into one region.
+/* By default, the vhost kernel module allows 64 regions, but DPDK may
+ * have many more memory regions. The function below treats each
+ * contiguous memory space reserved by DPDK as one region.
  */
 static struct vhost_memory_kernel *
 prepare_vhost_memory_kernel(void)
 {
 	struct vhost_memory_kernel *vm;
-	struct walk_arg wa;
 
 	vm = malloc(sizeof(struct vhost_memory_kernel) +
 			max_regions *
@@ -112,20 +112,18 @@ prepare_vhost_memory_kernel(void)
 	if (!vm)
 		return NULL;
 
-	wa.region_nr = 0;
-	wa.vm = vm;
+	vm->nregions = 0;
+	vm->padding = 0;
 
 	/*
 	 * The memory lock has already been taken by memory subsystem
 	 * or virtio_user_start_device().
 	 */
-	if (rte_memseg_contig_walk_thread_unsafe(add_memory_region, &wa) < 0) {
+	if (rte_memseg_list_walk_thread_unsafe(add_memseg_list, vm) < 0) {
 		free(vm);
 		return NULL;
 	}
 
-	vm->nregions = wa.region_nr;
-	vm->padding = 0;
 
 	return vm;
 }
-- 
2.18.0
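
For reference, the pattern the patch relies on can be exercised outside
the driver. The sketch below is illustrative only and not part of the
patch: rte_memseg_list_walk() and the rte_memseg_list fields (base_va,
page_sz, memseg_arr.len) are the real EAL API, while MAX_REGIONS,
struct kernel_region and struct region_table are hypothetical stand-ins
for the driver's max_regions, vhost_memory_region and
vhost_memory_kernel. It builds one region per contiguous VA space
reserved by DPDK, which is why the region count stops depending on the
number of hugepages.

	/* Illustrative sketch (not part of the patch): build one memory
	 * region per contiguous VA space reserved by DPDK, mirroring what
	 * add_memseg_list() does in vhost_kernel.c.
	 */
	#include <inttypes.h>
	#include <stdint.h>
	#include <stdio.h>

	#include <rte_eal.h>
	#include <rte_memory.h>

	#define MAX_REGIONS 64	/* hypothetical stand-in for max_regions */

	struct kernel_region {	/* stand-in for struct vhost_memory_region */
		uint64_t guest_phys_addr;
		uint64_t userspace_addr;
		uint64_t memory_size;
	};

	struct region_table {	/* stand-in for struct vhost_memory_kernel */
		uint32_t nregions;
		struct kernel_region regions[MAX_REGIONS];
	};

	static int
	cover_memseg_list(const struct rte_memseg_list *msl, void *arg)
	{
		struct region_table *tbl = arg;
		struct kernel_region *r;

		if (tbl->nregions >= MAX_REGIONS)
			return -1;	/* error return stops the walk */

		r = &tbl->regions[tbl->nregions++];
		/* One region covers the list's whole reserved VA space,
		 * whether or not every page in it is populated yet. */
		r->guest_phys_addr = (uint64_t)(uintptr_t)msl->base_va;
		r->userspace_addr = (uint64_t)(uintptr_t)msl->base_va;
		r->memory_size = msl->page_sz * msl->memseg_arr.len;

		return 0;
	}

	int
	main(int argc, char **argv)
	{
		struct region_table tbl = { .nregions = 0 };
		uint32_t i;

		if (rte_eal_init(argc, argv) < 0)
			return 1;

		/* The locked walk; the driver uses the _thread_unsafe
		 * variant because the memory lock is already held at its
		 * call sites. */
		if (rte_memseg_list_walk(cover_memseg_list, &tbl) < 0) {
			printf("more memseg lists than %d regions\n",
					MAX_REGIONS);
			rte_eal_cleanup();
			return 1;
		}

		for (i = 0; i < tbl.nregions; i++)
			printf("region %u: addr=0x%" PRIx64 " len=%" PRIu64 "\n",
					i, tbl.regions[i].userspace_addr,
					tbl.regions[i].memory_size);

		rte_eal_cleanup();
		return 0;
	}

Note the region size is page_sz times the number of slots in the list's
fbarray, i.e. the full reserved range, matching what the patch passes to
vhost-kernel rather than summing only the populated memsegs.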