To: Olivier Matz <olivier.matz@6wind.com>, Anatoly Burakov
 <anatoly.burakov@intel.com>
CC: <dev@dpdk.org>, <keith.wiles@intel.com>, <jianfeng.tan@intel.com>,
 <andras.kovacs@ericsson.com>, <laszlo.vadkeri@ericsson.com>,
 <benjamin.walker@intel.com>, <bruce.richardson@intel.com>,
 <thomas@monjalon.net>, <konstantin.ananyev@intel.com>,
 <kuralamudhan.ramakrishnan@intel.com>, <louise.m.daly@intel.com>,
 <nelio.laranjeiro@6wind.com>, <yskoh@mellanox.com>, <pepperjo@japf.ch>,
 <jerin.jacob@caviumnetworks.com>, <hemant.agrawal@nxp.com>
References: <cover.1520083504.git.anatoly.burakov@intel.com>
 <cover.1520428025.git.anatoly.burakov@intel.com>
 <fd44d0dd7d88613b196c6859a888166c8b1841d9.1520428025.git.anatoly.burakov@intel.com>
 <20180319171131.dnhd752syi6fo67s@platinum>
From: Andrew Rybchenko <arybchenko@solarflare.com>
Message-ID: <3ff9d5a9-ca9a-f6f7-b6d3-c75710e02a22@solarflare.com>
Date: Wed, 21 Mar 2018 10:49:55 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101
 Thunderbird/52.6.0
MIME-Version: 1.0
In-Reply-To: <20180319171131.dnhd752syi6fo67s@platinum>
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
Subject: Re: [dpdk-dev] [PATCH v2 23/41] mempool: add support for the new
 allocation methods

On 03/19/2018 08:11 PM, Olivier Matz wrote:
>> +	 *
>> +	 * if we don't need our mempools to have physically contiguous objects,
>> +	 * then just set page shift and page size to 0, because the user has
>> +	 * indicated that there's no need to care about anything.
>> +	 *
>> +	 * if we do need contiguous objects, there is also an option to reserve
>> +	 * the entire mempool memory as one contiguous block of memory, in
>> +	 * which case the page shift and alignment wouldn't matter either.
>> +	 *
>> +	 * if we require contiguous objects, but not necessarily the entire
>> +	 * mempool reserved space to be contiguous, then there are two options.
>> +	 *
>> +	 * if our IO addresses are virtual, not actual physical (IOVA as VA
>> +	 * case), then no page shift is needed - our memory allocation will give us
>> +	 * contiguous physical memory as far as the hardware is concerned, so
>> +	 * act as if we're getting contiguous memory.
>> +	 *
>> +	 * if our IO addresses are physical, the memory we get may come from
>> +	 * bigger or smaller pages, and how much of it we need to reserve
>> +	 * depends on which page size we end up using.
>> +	 * However, requesting each and every memory size is too much work, so
>> +	 * what we'll do instead is walk through the page sizes available, pick
>> +	 * the smallest one and set up page shift to match that one. We will be
>> +	 * wasting some space this way, but it's much nicer than looping around
>> +	 * trying to reserve each and every page size.
>> +	 */
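
Just to check my own understanding, the decision flow described in the
comment boils down to something like the sketch below. This is only an
illustration of the logic, not the patch code itself: rte_eal_iova_mode()
and rte_bsf32() are existing DPDK helpers, while select_pg_params() and
its parameter names are invented for the example.

#include <stdbool.h>
#include <stddef.h>

#include <rte_common.h>	/* rte_bsf32() */
#include <rte_eal.h>	/* rte_eal_iova_mode() */

static void
select_pg_params(bool need_iova_contig_obj, bool whole_pool_contig,
		 size_t min_page_size, size_t *pg_sz, unsigned int *pg_shift)
{
	if (!need_iova_contig_obj) {
		/* caller does not care about IOVA contiguity at all */
		*pg_sz = 0;
		*pg_shift = 0;
	} else if (whole_pool_contig) {
		/* whole mempool is reserved as one IOVA-contiguous block */
		*pg_sz = 0;
		*pg_shift = 0;
	} else if (rte_eal_iova_mode() == RTE_IOVA_VA) {
		/* IOVA as VA: any allocation looks contiguous to the HW */
		*pg_sz = 0;
		*pg_shift = 0;
	} else {
		/* IOVA as PA: size chunks by the smallest available page */
		*pg_sz = min_page_size;
		*pg_shift = rte_bsf32((uint32_t)*pg_sz);
	}
}

So the page shift only really matters in the last (IOVA as PA,
per-chunk) case.
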
> This comment is helpful to understand, thanks.
>
> (by the way, reading it makes me think we should rename
> MEMPOOL_F_*_PHYS_CONTIG as MEMPOOL_F_*_IOVA_CONTIG)

I'll take care of the renaming in my patchset on the mempool_ops API.
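
If it helps, I'd expect the rename itself to look roughly like the
following (just a sketch of the direction, not the actual change; the
flag value and the backward-compatibility alias are assumptions):

/* rte_mempool.h */
#define MEMPOOL_F_NO_IOVA_CONTIG	0x0020 /**< Don't need IOVA-contiguous objs. */
/* Old name kept as a deprecated alias for existing users. */
#define MEMPOOL_F_NO_PHYS_CONTIG	MEMPOOL_F_NO_IOVA_CONTIG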