From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <tiwei.bie@intel.com>
Received: from mga11.intel.com (mga11.intel.com [192.55.52.93])
 by dpdk.org (Postfix) with ESMTP id A57EF3256;
 Fri,  7 Sep 2018 13:38:50 +0200 (CEST)
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga102.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 07 Sep 2018 04:38:49 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.53,342,1531810800"; d="scan'208";a="71351165"
Received: from btwcube1.sh.intel.com (HELO debian) ([10.67.104.194])
 by orsmga008.jf.intel.com with ESMTP; 07 Sep 2018 04:38:40 -0700
Date: Fri, 7 Sep 2018 19:37:44 +0800
From: Tiwei Bie <tiwei.bie@intel.com>
To: "Burakov, Anatoly" <anatoly.burakov@intel.com>
Cc: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org,
 seanbh@gmail.com, stable@dpdk.org
Message-ID: <20180907113744.GA22511@debian>
References: <20180905042852.6212-1-tiwei.bie@intel.com>
 <20180905042852.6212-4-tiwei.bie@intel.com>
 <a82e230d-7fd7-ade0-d7a9-18265f30b00b@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <a82e230d-7fd7-ade0-d7a9-18265f30b00b@intel.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
Subject: Re: [dpdk-dev] [PATCH 3/3] net/virtio-user: fix memory hotplug
 support in vhost-kernel
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Fri, 07 Sep 2018 11:38:51 -0000

On Fri, Sep 07, 2018 at 10:44:22AM +0100, Burakov, Anatoly wrote:
> On 05-Sep-18 5:28 AM, Tiwei Bie wrote:
> > It's possible to have many more hugepage-backed memory regions
> > than vhost-kernel supports due to memory hotplug, which
> > may cause problems. A better solution is to have virtio-user
> > pass all the memory ranges reserved by DPDK to vhost-kernel.
> > 
> > Fixes: 12ecb2f63b12 ("net/virtio-user: support memory hotplug")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> >   drivers/net/virtio/virtio_user/vhost_kernel.c | 38 +++++++++----------
> >   1 file changed, 18 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/net/virtio/virtio_user/vhost_kernel.c b/drivers/net/virtio/virtio_user/vhost_kernel.c
> > index 897fee0af..9338166d9 100644
> > --- a/drivers/net/virtio/virtio_user/vhost_kernel.c
> > +++ b/drivers/net/virtio/virtio_user/vhost_kernel.c
> > @@ -70,41 +70,41 @@ static uint64_t vhost_req_user_to_kernel[] = {
> >   	[VHOST_USER_SET_MEM_TABLE] = VHOST_SET_MEM_TABLE,
> >   };
> > -struct walk_arg {
> > -	struct vhost_memory_kernel *vm;
> > -	uint32_t region_nr;
> > -};
> >   static int
> > -add_memory_region(const struct rte_memseg_list *msl __rte_unused,
> > -		const struct rte_memseg *ms, size_t len, void *arg)
> > +add_memseg_list(const struct rte_memseg_list *msl, void *arg)
> >   {
> > -	struct walk_arg *wa = arg;
> > +	struct vhost_memory_kernel *vm = arg;
> >   	struct vhost_memory_region *mr;
> >   	void *start_addr;
> > +	uint64_t len;
> > -	if (wa->region_nr >= max_regions)
> > +	if (vm->nregions >= max_regions)
> >   		return -1;
> > -	mr = &wa->vm->regions[wa->region_nr++];
> > -	start_addr = ms->addr;
> > +	start_addr = msl->base_va;
> > +	len = msl->page_sz * msl->memseg_arr.len;
> > +
> > +	mr = &vm->regions[vm->nregions++];
> >   	mr->guest_phys_addr = (uint64_t)(uintptr_t)start_addr;
> >   	mr->userspace_addr = (uint64_t)(uintptr_t)start_addr;
> >   	mr->memory_size = len;
> > -	mr->mmap_offset = 0;
> > +	mr->mmap_offset = 0; /* flags_padding */
> > +
> > +	PMD_DRV_LOG(DEBUG, "index=%u addr=%p len=%" PRIu64,
> > +			vm->nregions - 1, start_addr, len);
> >   	return 0;
> >   }
> > -/* By default, vhost kernel module allows 64 regions, but DPDK allows
> > - * 256 segments. As a relief, below function merges those virtually
> > - * adjacent memsegs into one region.
> > +/* By default, vhost kernel module allows 64 regions, but DPDK may
> > + * have much more memory regions. Below function will treat each
> > + * contiguous memory space reserved by DPDK as one region.
> >    */
> >   static struct vhost_memory_kernel *
> >   prepare_vhost_memory_kernel(void)
> >   {
> >   	struct vhost_memory_kernel *vm;
> > -	struct walk_arg wa;
> >   	vm = malloc(sizeof(struct vhost_memory_kernel) +
> >   			max_regions *
> > @@ -112,20 +112,18 @@ prepare_vhost_memory_kernel(void)
> >   	if (!vm)
> >   		return NULL;
> > -	wa.region_nr = 0;
> > -	wa.vm = vm;
> > +	vm->nregions = 0;
> > +	vm->padding = 0;
> >   	/*
> >   	 * The memory lock has already been taken by memory subsystem
> >   	 * or virtio_user_start_device().
> >   	 */
> > -	if (rte_memseg_contig_walk_thread_unsafe(add_memory_region, &wa) < 0) {
> > +	if (rte_memseg_list_walk_thread_unsafe(add_memseg_list, vm) < 0) {
> >   		free(vm);
> >   		return NULL;
> >   	}
> > -	vm->nregions = wa.region_nr;
> > -	vm->padding = 0;
> >   	return vm;
> >   }
> > 
> 
> Doesn't that assume single file segments mode?

This is to find out the VA ranges reserved by the memory subsystem.
Why does it need to assume single file segments mode?


> 
> -- 
> Thanks,
> Anatoly