From: "Ananyev, Konstantin"
To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
CC: Thomas Monjalon, "dev@dpdk.org", "Hunt, David", "olivier.matz@6wind.com", "viktorin@rehivetech.com", "shreyansh.jain@nxp.com"
Date: Tue, 21 Jun 2016 09:28:51 +0000
Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Tuesday, June 21, 2016 4:35 AM
> To: Ananyev, Konstantin
> Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
>
> On Mon, Jun 20, 2016 at 05:56:40PM +0000, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Monday, June 20, 2016 3:22 PM
> > > To: Ananyev, Konstantin
> > > Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > >
> > > On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > > > Sent: Monday, June 20, 2016 2:54 PM
> > > > > To: Jerin Jacob
> > > > > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > > > >
> > > > > 2016-06-20 18:55, Jerin Jacob:
> > > > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > > > the mempool cache doesn't really work - for example, where we have
> > > > > > > one core doing Rx (and alloc) and another core doing Tx (and return).
> > > > > > > In such a case, the mempool ring simply cycles through all the mbufs,
> > > > > > > resulting in an LLC miss on every mbuf allocated when the number of
> > > > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > > > case.
> > > > > > >
> > > > > > > Signed-off-by: David Hunt <david.hunt@intel.com>
> > > > > > > ---
> > > > > > >  lib/librte_mempool/Makefile            |   1 +
> > > > > > >  lib/librte_mempool/rte_mempool_stack.c | 145 ++++++++++++++++++++++++++++
> > > > > >
> > > > > > How about moving the new mempool handlers to drivers/mempool? (or similar).
> > > > > > In future, adding HW-specific handlers in lib/librte_mempool/ may be a bad idea.
> > > > >
> > > > > You're probably right.
> > > > > However, we need to check and understand what a HW mempool handler will be.
> > > > > I imagine the first of them will have to move the handlers into drivers/
> > > >
> > > > Does it mean we'll have to move mbuf into drivers too?
> > > > Again, other libs use mempool too.
> > > > Why not just lib/librte_mempool/arch/?
> > >
> > > I was proposing to move only the new
> > > handler (lib/librte_mempool/rte_mempool_stack.c), not any library or any
> > > other common code.
> > >
> > > Just like the DPDK crypto device: even if it is a software implementation,
> > > it's better to move it into driver/crypto instead of lib/librte_cryptodev.
> > >
> > > "lib/librte_mempool/arch/" is not the correct place, as it is platform
> > > specific, not architecture specific, and a HW mempool device may be a PCIe
> > > or platform device.
> >
> > Ok, but why does rte_mempool_stack.c have to be moved?
>
> Just thought of having all the mempool handlers in one place.
> We can't move all the HW mempool handlers into lib/librte_mempool/

Yep, but from your previous mail I thought we might have specific ones
for specific devices, no?
If so, why put them all in one place, and not in
Drivers/xxx_dev/xxx_mempool.[h,c],
keeping the generic ones in lib/librte_mempool?
Konstantin

>
> Jerin
>
> > I can hardly imagine it is 'platform specific'.
> > From my understanding it is generic code.
> > Konstantin
> >
> > >
> > > Konstantin
> > > > >
> > > > > Jerin, are you volunteering?
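
For readers who only see the diffstat above: the point of the handler under
discussion is that the pool behaves as a LIFO instead of the default FIFO
ring, so the most recently freed buffer is the first one handed back out and
is still warm in the cache. Below is a minimal, illustrative sketch of that
idea in plain C with pthreads - it is not the actual rte_mempool_stack.c and
does not use the real DPDK mempool ops API; all names here are invented for
illustration.

    /* A lock-protected array used as a stack of object pointers.
     * Top of stack is objs[len - 1]. */
    #include <pthread.h>
    #include <stdlib.h>

    struct obj_stack {
            pthread_mutex_t lock;
            unsigned int len;   /* objects currently on the stack */
            unsigned int size;  /* capacity */
            void *objs[];       /* object pointers */
    };

    struct obj_stack *
    stack_create(unsigned int size)
    {
            struct obj_stack *s = calloc(1, sizeof(*s) + size * sizeof(void *));

            if (s == NULL)
                    return NULL;
            pthread_mutex_init(&s->lock, NULL);
            s->size = size;
            return s;
    }

    /* "free" path: push n objects; the freshest buffers end up on top */
    int
    stack_enqueue(struct obj_stack *s, void * const *objs, unsigned int n)
    {
            unsigned int i;

            pthread_mutex_lock(&s->lock);
            if (s->len + n > s->size) {
                    pthread_mutex_unlock(&s->lock);
                    return -1;  /* would overflow */
            }
            for (i = 0; i < n; i++)
                    s->objs[s->len++] = objs[i];
            pthread_mutex_unlock(&s->lock);
            return 0;
    }

    /* "alloc" path: pop n objects, i.e. return the most recently freed
     * (cache-hot) buffers first, instead of cycling through the whole
     * pool the way a FIFO ring does */
    int
    stack_dequeue(struct obj_stack *s, void **objs, unsigned int n)
    {
            unsigned int i;

            pthread_mutex_lock(&s->lock);
            if (s->len < n) {
                    pthread_mutex_unlock(&s->lock);
                    return -1;  /* not enough objects */
            }
            for (i = 0; i < n; i++)
                    objs[i] = s->objs[--s->len];
            pthread_mutex_unlock(&s->lock);
            return 0;
    }

A real handler plugs operations like these into the mempool's alloc, enqueue
and dequeue callbacks; the LLC-miss argument in the quoted commit message is
exactly the FIFO-vs-LIFO difference visible in stack_dequeue above.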