From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Sun, Xutao"
To: "De Lara Guarch, Pablo" , "dev@dpdk.org"
Date: Thu, 15 Oct 2015 07:54:36 +0000
Message-ID: <9AC567D38896294095E6F3228F697FC8021C1F96@shsmsx102.ccr.corp.intel.com>
References: <1444721365-1065-1-git-send-email-xutao.sun@intel.com>
Subject: Re: [dpdk-dev] [PATCH] examples/vmdq: Fix the core dump issue when mem_pool is more than 34

> -----Original Message-----
> From: De Lara Guarch, Pablo
> Sent: Tuesday, October 13, 2015 3:59 PM
> To: Sun, Xutao; dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH] examples/vmdq: Fix the core dump issue
> when mem_pool is more than 34
>
> Hi Xutao,
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xutao Sun
> > Sent: Tuesday, October 13, 2015 8:29 AM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] [PATCH] examples/vmdq: Fix the core dump issue
> > when mem_pool is more than 34
> >
> > Macro MAX_QUEUES was defined as 128, which in theory allows only
> > 16 mem_pools.
> > Running the vmdq app with more than 34 mem_pools causes a core dump.
> > Changing MAX_QUEUES to 1024 solves this issue.
> >
> > Signed-off-by: Xutao Sun
> > ---
> >  examples/vmdq/main.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> > index a142d49..b463cfb 100644
> > --- a/examples/vmdq/main.c
> > +++ b/examples/vmdq/main.c
> > @@ -69,7 +69,7 @@
> >  #include
> >  #include
> >
> > -#define MAX_QUEUES 128
> > +#define MAX_QUEUES 1024
> >  /*
> >   * For 10 GbE, 128 queues require roughly
> >   * 128*512 (RX/TX_queue_nb * RX/TX_ring_descriptors_nb) per port.
> > --
> > 1.9.3
>
> Just for clarification, when you say mem_pools, do you mean vmdq pools?
> Also, if you are going to increase MAX_QUEUES, shouldn't you increase
> NUM_MBUFS_PER_PORT as well?
> Looking at the comment below, it looks like there is a calculation of the
> number of mbufs based on the number of queues.
> Plus, I assume 128 is the maximum number of queues per port, and as far
> as I know, only Fortville supports 256 as the maximum.
>
> Thanks,
> Pablo

Hi Pablo,

I mean vmdq pools when I say mem_pools.
And as you say, NUM_MBUFS_PER_PORT should actually be increased as well.
I may use a macro to replace the old expression:

#define NUM_MBUFS_PER_PORT (MAX_QUEUES * max(RTE_TEST_RX_DESC_DEFAULT, RTE_TEST_TX_DESC_DEFAULT))

And this patch is to fix the issue of running VMDQ on Fortville, so the maximum number of queues is larger than 128.

Thank you very much for your advice!

Thanks,
Xutao