Date: Wed, 21 Mar 2018 09:32:46 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: Anatoly Burakov, dev@dpdk.org, keith.wiles@intel.com, jianfeng.tan@intel.com, andras.kovacs@ericsson.com, laszlo.vadkeri@ericsson.com, benjamin.walker@intel.com, bruce.richardson@intel.com, thomas@monjalon.net, konstantin.ananyev@intel.com, kuralamudhan.ramakrishnan@intel.com, louise.m.daly@intel.com, nelio.laranjeiro@6wind.com, yskoh@mellanox.com, pepperjo@japf.ch, jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com
Message-ID: <20180321083246.imqquh2dnq6nj5tz@glumotte.dev.6wind.com>
In-Reply-To: <3ff9d5a9-ca9a-f6f7-b6d3-c75710e02a22@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH v2 23/41] mempool: add support for the new allocation methods

On Wed, Mar 21, 2018 at 10:49:55AM +0300, Andrew Rybchenko wrote:
> On 03/19/2018 08:11 PM, Olivier Matz wrote:
> > > + *
> > > + * if we don't need our mempools to have physically contiguous objects,
> > > + * then just set page shift and page size to 0, because the user has
> > > + * indicated that there's no need to care about anything.
> > > + *
> > > + * if we do need contiguous objects, there is also an option to reserve
> > > + * the entire mempool memory as one contiguous block of memory, in
> > > + * which case the page shift and alignment wouldn't matter as well.
> > > + *
> > > + * if we require contiguous objects, but not necessarily the entire
> > > + * mempool reserved space to be contiguous, then there are two options.
> > > + *
> > > + * if our IO addresses are virtual, not actual physical (IOVA as VA
> > > + * case), then no page shift needed - our memory allocation will give us
> > > + * contiguous physical memory as far as the hardware is concerned, so
> > > + * act as if we're getting contiguous memory.
> > > + *
> > > + * if our IO addresses are physical, we may get memory from bigger
> > > + * pages, or we might get memory from smaller pages, and how much of it
> > > + * we require depends on whether we want bigger or smaller pages.
> > > + * However, requesting each and every memory size is too much work, so
> > > + * what we'll do instead is walk through the page sizes available, pick
> > > + * the smallest one and set up page shift to match that one. We will be
> > > + * wasting some space this way, but it's much nicer than looping around
> > > + * trying to reserve each and every page size.
> > > + */
> > This comment is helpful to understand, thanks.
> >
> > (by the way, reading it makes me think we should rename
> > MEMPOOL_F_*_PHYS_CONTIG as MEMPOOL_F_*_IOVA_CONTIG)
>
> I'll take care of the renaming in my patchset about the mempool_ops API.

Great, thanks! Please also keep the old ones for now; we will remove them later.
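
On keeping the old flags: a minimal sketch of one way to do it, keeping the
old PHYS_CONTIG name as a deprecated alias of the new IOVA_CONTIG one. The
exact names and value below are assumptions for illustration, not the
contents of any particular patch:

/* illustrative only: value and names are assumptions */
#define MEMPOOL_F_NO_IOVA_CONTIG 0x0020
/* deprecated alias, kept for backward compatibility until removal */
#define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG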
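
And for reference, a minimal sketch of the page-size/page-shift selection
logic that the quoted comment describes. Function and parameter names here
are hypothetical, not taken from the patch itself; this is just the decision
flow spelled out in C:

#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the selection logic described in the quoted
 * comment.  Assumes avail_pg_sz holds at least one entry (n >= 1).
 */
void
select_pg_params(bool need_iova_contig_obj, bool reserve_whole_chunk,
		 bool iova_as_va, const size_t *avail_pg_sz, unsigned int n,
		 size_t *pg_sz, unsigned int *pg_shift)
{
	if (!need_iova_contig_obj || reserve_whole_chunk || iova_as_va) {
		/*
		 * Either objects may cross page boundaries, or the whole
		 * mempool is reserved as one contiguous block, or IOVA==VA
		 * makes any allocation contiguous as far as the hardware is
		 * concerned: no per-page constraint is needed.
		 */
		*pg_sz = 0;
		*pg_shift = 0;
		return;
	}

	/*
	 * Physical IOVA: walk the available page sizes, pick the smallest
	 * one and derive the page shift from it, accepting some waste.
	 */
	size_t min_sz = avail_pg_sz[0];
	unsigned int i;

	for (i = 1; i < n; i++)
		if (avail_pg_sz[i] < min_sz)
			min_sz = avail_pg_sz[i];

	*pg_sz = min_sz;
	*pg_shift = 0;
	while (((size_t)1 << *pg_shift) < min_sz)
		(*pg_shift)++;
}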