Date: Wed, 21 Jun 2017 20:16:02 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Sam <batmanustc@gmail.com>
Cc: Pavel Shirshov, dev@dpdk.org, qemu-discuss@nongnu.org, users@dpdk.org, qemu-devel@nongnu.org
Subject: Re: [dpdk-dev] [Qemu-devel] Will huge page have negative effect on guest vm in qemu environment?
Message-ID: <20170621191601.GB2425@work-vm>
List-Id: DPDK patches and discussions

* Sam (batmanustc@gmail.com) wrote:
> Thank you~
>
> 1. We ran a comparison test on a qemu-kvm environment with and without
> huge pages. The qemu start process is much longer in the huge page
> environment. I wrote an email about it titled '[DPDK-memory] how qemu
> waste such long time under dpdk huge page envriment?'; I could resend
> it later.
> 2. Then I ran another test on a qemu-kvm environment with and without
> huge pages, in which I didn't start ovs-dpdk or a vhostuser port
> during the qemu start process. I found the qemu start process is also
> much longer in the huge page environment.
>
> So I think the huge page environment, whose grub2.cfg file is given in
> '[DPDK-memory] how qemu waste such long time under dpdk huge page
> envriment?', really does have a negative effect on the qemu start-up
> process.
>
> That's why we don't like to use ovs-dpdk. Although ovs-dpdk is faster,
> the start-up process of qemu is much longer than with normal ovs, and
> the reason has nothing to do with ovs, only with huge pages. For
> customers, vm start-up time is more important than network speed.

How are you setting up hugepages? What values are you putting in the
various /proc or cmdline options, and how are you specifying them on
QEMU's command line? (A sketch of a typical setup is further down.)

I think one problem is that with hugepages qemu normally allocates them
all at the start; I think there are cases where that means moving a lot
of memory about, especially if you lock it to particular NUMA nodes.

> BTW, the ovs-dpdk start-up process is also longer than normal ovs. But
> I know the reason: it's the dpdk EAL init process allocating big
> contiguous memory and zeroing that memory. For qemu, I don't know why,
> as there is no log to report this.

I suspect it's the mmaping and madvising of those hugepages - you
should be able to see it with an strace of a qemu startup, or perhaps
a 'perf top' on the host as it's in that pause.
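For example, something like this should show where the startup time
goes (the qemu binary and options here are just placeholders for
whatever you normally run):

    # Summarise time spent per syscall across the whole startup
    strace -c -f qemu-system-x86_64 -m 4G -mem-path /dev/hugepages ...

    # Or watch the individual mmap/madvise calls with timestamps
    # and per-call durations
    strace -f -tt -T -e trace=mmap,madvise \
        qemu-system-x86_64 -m 4G -mem-path /dev/hugepages ... 2>strace.log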
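And to be concrete about the question above on cmdline values and QEMU
options, a typical hugepage setup looks something like the following -
the page sizes and counts are purely illustrative, not a recommendation
for your machine:

    # Kernel command line (e.g. in grub2.cfg): reserve 8 x 1G pages at boot
    default_hugepagesz=1G hugepagesz=1G hugepages=8

    # Mount hugetlbfs so qemu can back guest RAM from it
    mount -t hugetlbfs hugetlbfs /dev/hugepages

    # QEMU: back guest RAM with those pages; prealloc=yes touches every
    # page up front, which is where the startup time tends to go
    qemu-system-x86_64 -m 4G \
        -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on,prealloc=yes \
        -numa node,memdev=mem0 \
        ...

share=on is what lets vhost-user (and hence ovs-dpdk) see the guest's
memory; the older '-mem-path /dev/hugepages -mem-prealloc' options
behave similarly at startup, but without the share=on that vhost-user
needs.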
I'm told that hugepages are supposed to be especially useful for IOMMU
performance with cards passed through to the guest, so it might still
be worth doing.

Dave

> 2017-06-21 14:15 GMT+08:00 Pavel Shirshov:
>
> > Hi Sam,
> >
> > Below I'm talking about KVM. I don't have experience with vbox and
> > others.
> > 1. I'd suggest not using dpdk inside of a VM if you want to see the
> > best performance on the box.
> > 2. Huge pages enabled globally will not have any bad effect on the
> > guest OS, except that you have to enable huge pages inside of the VM
> > and provide real huge pages for the VM's huge pages from the host
> > system. Otherwise dpdk will use "huge pages" inside of the VM, but
> > these "huge pages" will not be real ones; they will be constructed
> > from normal pages outside. Also, when you enable huge pages the OS
> > will reserve them from the start and will not be able to use that
> > memory for other things. Also, you can't swap out huge pages, KSM
> > will not work for them, and so on.
> > 3. You can enable huge pages for just one NUMA node. It's impossible
> > to enable them for just one core. Usually you reserve some memory
> > for huge pages when the system starts, and you can't use this memory
> > in normal applications unless the application knows how to use it.
> >
> > Also, why didn't it work inside of docker?
> >
> >
> > On Tue, Jun 20, 2017 at 8:35 PM, Sam <batmanustc@gmail.com> wrote:
> > > BTW, we also thought about using ovs-dpdk in a docker environment,
> > > but the test results said it's not a good idea, and we don't know
> > > why.
> > >
> > > 2017-06-21 11:32 GMT+08:00 Sam <batmanustc@gmail.com>:
> > >
> > >> Hi all,
> > >>
> > >> We plan to use DPDK on an HP host machine with several cores and
> > >> big memory, in a qemu-kvm environment. The host will carry 4 or
> > >> more guest vms and 1 ovs.
> > >>
> > >> Ovs-dpdk is much faster than normal ovs, but to use ovs-dpdk we
> > >> have to enable huge pages globally.
> > >>
> > >> My question is: will huge pages enabled globally have a negative
> > >> effect on guest vm memory operations or something? If so, how can
> > >> we prevent it? Or could I enable huge pages on only some cores,
> > >> or for only a part of memory?
> > >>
> > >> Thank you~
> > >>

--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK