From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Jerin Jacob
CC: Thomas Monjalon, "dev@dpdk.org", "Hunt, David", "olivier.matz@6wind.com", "viktorin@rehivetech.com", "shreyansh.jain@nxp.com"
Date: Mon, 20 Jun 2016 17:56:40 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725836B73B9B@irsmsx105.ger.corp.intel.com>
In-Reply-To: <20160620142205.GA4118@localhost.localdomain>
References: <1463669335-30378-1-git-send-email-david.hunt@intel.com> <1466428091-115821-2-git-send-email-david.hunt@intel.com> <20160620132506.GA3301@localhost.localdomain> <3416153.NDoMD8TpjF@xps13> <2601191342CEEE43887BDE71AB97725836B73750@irsmsx105.ger.corp.intel.com> <20160620142205.GA4118@localhost.localdomain>
Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler

> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, June 20, 2016 3:22 PM
> To: Ananyev, Konstantin
> Cc: Thomas Monjalon; dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
>
> On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > > Sent: Monday, June 20, 2016 2:54 PM
> > > To: Jerin Jacob
> > > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> > >
> > > 2016-06-20 18:55, Jerin Jacob:
> > > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > > This is a mempool handler that is useful for pipelining apps, where
> > > > > the mempool cache doesn't really work - example, where we have one
> > > > > core doing rx (and alloc), and another core doing Tx (and return). In
> > > > > such a case, the mempool ring simply cycles through all the mbufs,
> > > > > resulting in a LLC miss on every mbuf allocated when the number of
> > > > > mbufs is large. A stack recycles buffers more effectively in this
> > > > > case.
> > > > >
> > > > > Signed-off-by: David Hunt
> > > > > ---
> > > > >  lib/librte_mempool/Makefile            |   1 +
> > > > >  lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > > >
> > > > How about moving new mempool handlers to drivers/mempool? (or similar).
> > > > In future, adding HW specific handlers in lib/librte_mempool/ may be a bad idea.
> > >
> > > You're probably right.
> > > However we need to check and understand what a HW mempool handler will be.
> > > I imagine the first of them will have to move handlers in drivers/
> >
> > Does it mean we'll have to move mbuf into drivers too?
> > Again, other libs use mempool too.
> > Why not just lib/librte_mempool/arch/
> > ?
>
> I was proposing to move only the new
> handler (lib/librte_mempool/rte_mempool_stack.c), not any library or any
> other common code.
>
> Just like the DPDK crypto device: even if it is a software implementation, it is
> better to put it in drivers/crypto instead of lib/librte_cryptodev.
>
> "lib/librte_mempool/arch/" is not the correct place, as it is platform specific,
> not architecture specific, and a HW mempool device may be a PCIe or platform
> device.

Ok, but why does rte_mempool_stack.c have to be moved?
I can hardly imagine it is 'platform specific'.
From my understanding it is generic code.
Konstantin

>
> > Konstantin
> >
>
> > > Jerin, are you volunteering?
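
As background for the LLC-miss argument in the quoted commit message, below is a minimal, self-contained sketch of a stack (LIFO) pool. It is not the rte_mempool_stack.c code under discussion; the structure, function names, and the pthread spinlock locking are illustrative assumptions only, and the real handler would plug into DPDK's mempool ops interface instead.

/*
 * Illustrative LIFO object pool (not the actual rte_mempool_stack.c).
 * A ring (FIFO) cycles through every buffer in the pool, while a stack
 * (LIFO) hands out the most recently returned - and therefore most
 * likely cache-resident - buffers first.
 */
#include <pthread.h>
#include <stddef.h>
#include <stdlib.h>

struct lifo_pool {
	pthread_spinlock_t lock;
	size_t top;      /* number of objects currently stored */
	size_t size;     /* capacity of objs[] */
	void *objs[];    /* objs[top - 1] is the most recently freed */
};

struct lifo_pool *
lifo_create(size_t size)
{
	struct lifo_pool *p = calloc(1, sizeof(*p) + size * sizeof(void *));

	if (p == NULL)
		return NULL;
	pthread_spin_init(&p->lock, PTHREAD_PROCESS_PRIVATE);
	p->size = size;
	return p;
}

/* "enqueue" side: a core returning buffers pushes them on top */
int
lifo_put(struct lifo_pool *p, void * const *objs, unsigned int n)
{
	unsigned int i;

	pthread_spin_lock(&p->lock);
	if (p->top + n > p->size) {
		pthread_spin_unlock(&p->lock);
		return -1;	/* no room */
	}
	for (i = 0; i < n; i++)
		p->objs[p->top++] = objs[i];
	pthread_spin_unlock(&p->lock);
	return 0;
}

/* "dequeue" side: a core allocating buffers pops the newest ones */
int
lifo_get(struct lifo_pool *p, void **objs, unsigned int n)
{
	unsigned int i;

	pthread_spin_lock(&p->lock);
	if (p->top < n) {
		pthread_spin_unlock(&p->lock);
		return -1;	/* not enough objects */
	}
	for (i = 0; i < n; i++)
		objs[i] = p->objs[--p->top];
	pthread_spin_unlock(&p->lock);
	return 0;
}

Because lifo_get() pops exactly what lifo_put() pushed most recently, an rx core allocating mbufs keeps re-using the buffers a tx core just freed, instead of walking the whole pool the way a ring does - that is the cache behaviour the commit message refers to.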