Date: Wed, 21 Jun 2017 11:03:08 +0200
From: Gaëtan Rivet
To: Sam
Cc: Pavel Shirshov, dev@dpdk.org, users@dpdk.org
Subject: Re: [dpdk-dev] Will huge page have negative effect on guest vm in qemu environment?

Hi Sam,

On Wed, Jun 21, 2017 at 03:22:45PM +0800, Sam wrote:
> Thank you~
>
> 1. We ran a comparison test on a qemu-kvm environment with and without
> huge pages. The qemu start process is much longer in the huge page
> environment. I wrote an email about it titled '[DPDK-memory] how qemu
> waste such long time under dpdk huge page envriment?'. I could resend
> it later.
>

Are you using 2M hugepages? Do you see any difference with 1G hugepages?
The smaller ones should not incur such delay.

On a side note, if the VM is properly configured the performance loss
should be negligible, and mostly visible in benchmark contexts with
little or no processing done on the packets.
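For reference, a minimal sketch of how 1G hugepages can be reserved,
checked, and handed to qemu. The page count, mount point and qemu
options below are illustrative assumptions, not taken from your setup:

    # grub2.cfg kernel command line: reserve eight 1G pages at boot
    default_hugepagesz=1G hugepagesz=1G hugepages=8

    # mount a hugetlbfs instance for the 1G pages
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G

    # verify the reservation
    cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
    grep Huge /proc/meminfo

    # back the guest with those pages; -mem-prealloc touches every page
    # up front, which is where the start-up time goes
    qemu-system-x86_64 -m 4096 -mem-path /dev/hugepages1G -mem-prealloc ...

1G page support can be confirmed with the pdpe1gb flag in /proc/cpuinfo
(pse is the 2M equivalent).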
> 2. Then I ran another test on a qemu-kvm environment with and without
> huge pages, in which I did not start ovs-dpdk or a vhostuser port in
> the qemu start process. I found that the qemu start process is also
> much longer in the huge page environment.
>
> So I think the huge page environment, whose grub2.cfg file is given in
> '[DPDK-memory] how qemu waste such long time under dpdk huge page
> envriment?', really does have a negative effect on the qemu start-up
> process.
>
> That's why we don't like to use ovs-dpdk. Although ovs-dpdk is faster,
> the qemu start-up process is much longer than with normal ovs, and the
> reason has nothing to do with ovs, only with huge pages. For customers,
> vm start-up time is more important than network speed.
>
> BTW, the ovs-dpdk start-up process is also longer than that of normal
> ovs. But there I know the reason: it is the dpdk EAL init process,
> which reserves a big contiguous memory area and zeroes it. For qemu, I
> don't know why, as there is no log reporting it.
>
> 2017-06-21 14:15 GMT+08:00 Pavel Shirshov :
>
> > Hi Sam,
> >
> > Below I'm talking about KVM. I don't have experience with vbox and
> > others.
> > 1. I'd suggest not using dpdk inside a VM if you want to see the best
> > performance on the box.
> > 2. Huge pages enabled globally will not have any bad effect on the
> > guest OS, except that you have to enable huge pages inside the VM and
> > back the VM's huge pages with real huge pages from the host system.
> > Otherwise dpdk will use "huge pages" inside the VM, but these "huge
> > pages" will not be real ones; they will be constructed from normal
> > pages outside. Also, when you enable huge pages, the OS reserves them
> > from the start and cannot use that memory for anything else. Also,
> > huge pages cannot be swapped out, KSM will not work for them, and so
> > on.
> > 3. You can enable huge pages for just one NUMA node. It is impossible
> > to enable them for just one core. Usually you reserve some memory for
> > huge pages when the system starts, and normal applications cannot use
> > this memory unless they know how to use huge pages.
> >
> > Also, why didn't it work inside of docker?
> >
> >
> > On Tue, Jun 20, 2017 at 8:35 PM, Sam wrote:
> > > BTW, we also thought about using ovs-dpdk in a docker environment,
> > > but the test results said it was not a good idea, and we don't know
> > > why.
> > >
> > > 2017-06-21 11:32 GMT+08:00 Sam :
> > >
> > >> Hi all,
> > >>
> > >> We plan to use DPDK on an HP host machine with several cores and a
> > >> lot of memory, in a qemu-kvm environment. The host will carry 4 or
> > >> more guest vms and 1 ovs.
> > >>
> > >> Ovs-dpdk is much faster than normal ovs, but to use ovs-dpdk we
> > >> have to enable huge pages globally.
> > >>
> > >> My question is: will huge pages enabled globally have a negative
> > >> effect on guest vm memory operations or something? If so, how can
> > >> we prevent this? Could I enable huge pages only on some cores, or
> > >> only for a part of memory?
> > >>
> > >> Thank you~
> > >>

--
Gaëtan Rivet
6WIND
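On Pavel's point 3 above: a minimal sketch, assuming 2M pages and NUMA
node 0, of how huge pages can be reserved for a single node at runtime
through sysfs (the page count is an example value):

    # reserve 512 2M pages on NUMA node 0 only
    echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

    # check how many were actually allocated
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

As Pavel notes, there is no per-core equivalent: huge pages are a
property of memory, not of cores.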