From: "De Lara Guarch, Pablo"
To: "Sun, Xutao" , "dev@dpdk.org"
Date: Tue, 13 Oct 2015 07:59:18 +0000
Subject: Re: [dpdk-dev] [PATCH] examples/vmdq: Fix the core dump issue when mem_pool is more than 34
In-Reply-To: <1444721365-1065-1-git-send-email-xutao.sun@intel.com>

Hi Xutao,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Xutao Sun
> Sent: Tuesday, October 13, 2015 8:29 AM
> To: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH] examples/vmdq: Fix the core dump issue when
> mem_pool is more than 34
>
> Macro MAX_QUEUES was defined to 128,
> which allows only 16 mem_pools in theory.
> When running vmdq_app with more than 34 mem_pools,
> it causes a core dump.
> Changing MAX_QUEUES to 1024 solves this issue.
>
> Signed-off-by: Xutao Sun
> ---
>  examples/vmdq/main.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
> index a142d49..b463cfb 100644
> --- a/examples/vmdq/main.c
> +++ b/examples/vmdq/main.c
> @@ -69,7 +69,7 @@
>  #include
>  #include
>
> -#define MAX_QUEUES 128
> +#define MAX_QUEUES 1024
>  /*
>   * For 10 GbE, 128 queues require roughly
>   * 128*512 (RX/TX_queue_nb * RX/TX_ring_descriptors_nb) per port.
> --
> 1.9.3

Just for clarification, when you say mem_pools, do you mean VMDq pools?

Also, if you are going to increase MAX_QUEUES, shouldn't you also increase NUM_MBUFS_PER_PORT?
Looking at the comment below, it looks like the number of mbufs is calculated based on the number of queues.

Plus, I assume 128 is the maximum number of queues per port, and as far as I know, only Fortville supports 256 as the maximum.

Thanks,
Pablo