From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <huawei.xie@intel.com>
Received: from mga03.intel.com (mga03.intel.com [134.134.136.65])
 by dpdk.org (Postfix) with ESMTP id DA177559C
 for <dev@dpdk.org>; Thu, 10 Dec 2015 14:34:02 +0100 (CET)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga103.jf.intel.com with ESMTP; 10 Dec 2015 05:33:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.20,408,1444719600"; d="scan'208";a="868828767"
Received: from fmsmsx108.amr.corp.intel.com ([10.18.124.206])
 by orsmga002.jf.intel.com with ESMTP; 10 Dec 2015 05:33:55 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
 FMSMSX108.amr.corp.intel.com (10.18.124.206) with Microsoft SMTP Server (TLS)
 id 14.3.248.2; Thu, 10 Dec 2015 05:33:55 -0800
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.28]) by
 SHSMSX152.ccr.corp.intel.com ([169.254.6.105]) with mapi id 14.03.0248.002;
 Thu, 10 Dec 2015 21:33:53 +0800
From: "Xie, Huawei" <huawei.xie@intel.com>
To: "Iremonger, Bernard" <bernard.iremonger@intel.com>, Yuanhan Liu
 <yuanhan.liu@linux.intel.com>
Thread-Topic: [dpdk-dev] [PATCH v2 1/1] doc: correct Vhost Sample
 Application guide
Thread-Index: AQHRMxc12dxOorfyRBiDtUr7CU+d+Z7D+jdQgAAmCGD//45QgIAAidNQ
Date: Thu, 10 Dec 2015 13:33:53 +0000
Message-ID: <C37D651A908B024F974696C65296B57B4BB89AEE@SHSMSX103.ccr.corp.intel.com>
References: <1449664541-9546-1-git-send-email-bernard.iremonger@intel.com>
 <1449681518-27656-1-git-send-email-bernard.iremonger@intel.com>
 <20151210065247.GT29571@yliu-dev.sh.intel.com>
 <8CEF83825BEC744B83065625E567D7C219F8EC44@IRSMSX108.ger.corp.intel.com>
 <C37D651A908B024F974696C65296B57B4BB897EB@SHSMSX103.ccr.corp.intel.com>
 <8CEF83825BEC744B83065625E567D7C219F8ED27@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <8CEF83825BEC744B83065625E567D7C219F8ED27@IRSMSX108.ger.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH v2 1/1] doc: correct Vhost Sample Application
 guide
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: patches and discussions about DPDK <dev.dpdk.org>
List-Unsubscribe: <http://dpdk.org/ml/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://dpdk.org/ml/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <http://dpdk.org/ml/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Thu, 10 Dec 2015 13:34:03 -0000



> -----Original Message-----
> From: Iremonger, Bernard
> Sent: Thursday, December 10, 2015 9:20 PM
> To: Xie, Huawei; Yuanhan Liu
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v2 1/1] doc: correct Vhost Sample
> Application guide
>
> Hi Huawei,
>
> <snip>
>
> > > > I don't think that's the right "correction": vhost-switch would
> > > > fail to start:
> > > >
> > > >     EAL: Error - exiting with code: 1
> > > >     Cause: Cannot create mbuf pool
> > > >
> > > > As vhost-switch creates more mbufs than 1024M can hold.
> > > >
> > > > However, I do think that adding this option is necessary; otherwise,
> > > > all hugepages will be allocated to vhost-switch, leaving no memory
> > > > for starting a VM at all. (And this is the kind of information you
> > > > need to put into the commit log.)
> > > >
> > > > And limiting it to "1024M" is reasonable as well, so that we can
> > > > run it on a desktop (or laptop) without too much memory. You just
> > > > need to fix the vhost-switch example to not allocate too many
> > > > mbufs by default.
> > > >
> > > > 	--yliu
> > >
> > > Yes, the --socket-mem <number> option is necessary.
> > > I will add a note that the value of <number> may need to be greater
> > > than 1024.
> > > I will submit a patch to vhost-switch to reduce the number of mbufs.
> > >
> > I recall we have to allocate mbufs for each queue rather than for the
> > used queues only, so memory consumption depends on the queue number.
> > After that issue is fixed, I think 1024MB is enough. For the time
> > being, you could temporarily use 2048M and add an explanation.
> >
>
> I have sent a v3 of this patch which includes a note that the value of
> 1024 may have to be increased.
> I would prefer to keep the value of 1024.
>
For FVL with more queues, I recall 1024MB isn't enough, but it is OK with a
note. :).
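
For reference, a minimal sketch of how the mbuf pool sizing interacts with
--socket-mem; the names and numbers (e.g. MBUFS_PER_QUEUE) are illustrative
assumptions, not the actual vhost-switch values:

    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_mbuf.h>
    #include <rte_debug.h>

    #define MBUFS_PER_QUEUE 32768   /* illustrative, not the real default */
    #define MBUF_CACHE_SIZE 250

    /* Size the pool per queue: total mbuf memory grows with the queue
     * count and has to fit in the hugepage memory reserved on this
     * socket via --socket-mem. */
    static struct rte_mempool *
    create_mbuf_pool(uint32_t nb_queues, int socket_id)
    {
        uint32_t nb_mbufs = nb_queues * MBUFS_PER_QUEUE;
        struct rte_mempool *pool;

        pool = rte_pktmbuf_pool_create("MBUF_POOL", nb_mbufs,
                MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                socket_id);
        if (pool == NULL)
            /* This is the failure quoted above:
             * "Cause: Cannot create mbuf pool" */
            rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
        return pool;
    }

With a per-queue rule like this, the 1024M reserved by --socket-mem can be
exhausted once the queue count grows, which is why the note about increasing
the value is needed.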
> Regards,
>
> Bernard.
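
P.S. A rough, purely illustrative calculation of why 1024 may not be enough
with many queues (the per-queue count and mbuf size are assumptions, not the
vhost-switch defaults):

    32768 mbufs/queue * 16 queues * ~2.2KB per mbuf (incl. overhead)
    ~= 1127MB > the 1024MB reserved by --socket-mem 1024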