From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 758F3A046B
	for <public@inbox.dpdk.org>; Wed, 24 Jul 2019 09:28:13 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 9F4211C123;
	Wed, 24 Jul 2019 09:28:12 +0200 (CEST)
Received: from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com
 [67.231.154.164]) by dpdk.org (Postfix) with ESMTP id 1CF561C121
 for <dev@dpdk.org>; Wed, 24 Jul 2019 09:28:11 +0200 (CEST)
X-Virus-Scanned: Proofpoint Essentials engine
Received: from webmail.solarflare.com (uk.solarflare.com [193.34.186.16])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits))
 (No client certificate requested)
 by mx1-us3.ppe-hosted.com (PPE Hosted ESMTP Server) with ESMTPS id 9B6969C005B;
 Wed, 24 Jul 2019 07:28:09 +0000 (UTC)
Received: from [192.168.1.11] (85.187.13.152) by ukex01.SolarFlarecom.com
 (10.17.10.4) with Microsoft SMTP Server (TLS) id 15.0.1395.4; Wed, 24 Jul
 2019 08:28:03 +0100
To: Vamsi Krishna Attunuru <vattunuru@marvell.com>, "dev@dpdk.org"
 <dev@dpdk.org>
CC: "thomas@monjalon.net" <thomas@monjalon.net>, Jerin Jacob Kollanukkaran
 <jerinj@marvell.com>, "olivier.matz@6wind.com" <olivier.matz@6wind.com>,
 "ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
 "anatoly.burakov@intel.com" <anatoly.burakov@intel.com>, "Kiran Kumar
 Kokkilagadda" <kirankumark@marvell.com>
References: <20190717090408.13717-1-vattunuru@marvell.com>
 <20190723053821.30227-1-vattunuru@marvell.com>
 <20190723053821.30227-2-vattunuru@marvell.com>
 <4b9cec50-348a-3359-04ee-3b567b49aa9f@solarflare.com>
 <CH2PR18MB3381BF1EB386EA03889F8039A6C70@CH2PR18MB3381.namprd18.prod.outlook.com>
 <ebb88b37-b4f2-3e82-b192-ad6cd2b3ceb2@solarflare.com>
 <CH2PR18MB338101FB57C8EB1CCFA2DDE0A6C60@CH2PR18MB3381.namprd18.prod.outlook.com>
From: Andrew Rybchenko <arybchenko@solarflare.com>
Message-ID: <0b77d853-48f7-6c6d-8687-3bd46b91e19d@solarflare.com>
Date: Wed, 24 Jul 2019 10:27:58 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.8.0
MIME-Version: 1.0
In-Reply-To: <CH2PR18MB338101FB57C8EB1CCFA2DDE0A6C60@CH2PR18MB3381.namprd18.prod.outlook.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [85.187.13.152]
X-ClientProxiedBy: ocex03.SolarFlarecom.com (10.20.40.36) To
 ukex01.SolarFlarecom.com (10.17.10.4)
Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with page
 sized chunks of memory
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

On 7/24/19 10:09 AM, Vamsi Krishna Attunuru wrote:
>
>> -----Original Message-----
>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>> Sent: Wednesday, July 24, 2019 1:04 AM
>> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org
>> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
>> olivier.matz@6wind.com; ferruh.yigit@intel.com; anatoly.burakov@intel.com;
>> Kiran Kumar Kokkilagadda <kirankumark@marvell.com>
>> Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with page
>> sized chunks of memory
>>
>> On 7/23/19 3:28 PM, Vamsi Krishna Attunuru wrote:
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <arybchenko@solarflare.com>
>>>> Sent: Tuesday, July 23, 2019 4:38 PM
>>>> To: Vamsi Krishna Attunuru <vattunuru@marvell.com>; dev@dpdk.org
>>>> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran
>>>> <jerinj@marvell.com>; olivier.matz@6wind.com; ferruh.yigit@intel.com;
>>>> anatoly.burakov@intel.com; Kiran Kumar Kokkilagadda
>>>> <kirankumark@marvell.com>
>>>> Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with
>>>> page sized chunks of memory
>>>>
>>>> On 7/23/19 8:38 AM, vattunuru@marvell.com wrote:
>>>>> From: Vamsi Attunuru <vattunuru@marvell.com>
>>>>>
>>>>> Patch adds a routine to populate a mempool from page-aligned and
>>>>> page-sized chunks of memory to ensure memory objects do not fall
>>>>> across page boundaries. It's useful for applications that require
>>>>> physically contiguous mbuf memory while running in IOVA=VA mode.
>>>>>
>>>>> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
>>>>> Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
>>>>> ---
>> <...>
>>
>>>>> +	int ret;
>>>>> +
>>>>> +	ret = mempool_ops_alloc_once(mp);
>>>>> +	if (ret != 0)
>>>>> +		return ret;
>>>>> +
>>>>> +	if (mp->nb_mem_chunks != 0)
>>>>> +		return -EEXIST;
>>>>> +
>>>>> +	pg_sz = get_min_page_size(mp->socket_id);
>>>>> +	pg_shift = rte_bsf32(pg_sz);
>>>>> +
>>>>> +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>>>>> +
>>>>> +		rte_mempool_op_calc_mem_size_default(mp, n, pg_shift,
>>>>> +			     &chunk_size, &align);
>>>> It is incorrect to ignore the mempool ops and enforce the default
>>>> handler. Use rte_mempool_ops_calc_mem_size().
>>>> Also, it is better to treat a negative return value as an error, as the
>>>> default function does.
>>>> (It may be my mistake that this is not mentioned in the return value
>>>> description.)
>>>>
>>> Yes, I thought so, but ops_calc_mem_size() would in turn call the
>>> mempool PMD's calc_mem_size() op, which may or may not return the
>>> required chunk_size and align values in this case. Or else it would be
>>> skipped completely and pg_sz used for both the memzone len and align;
>>> anyway, this page-sized alignment will suit the PMD's specific align
>>> requirements.
>>
>> Anyway, it is incorrect to violate the driver ops; the default is
>> definitely wrong for the bucket driver.
>> min_chunk_size and align are mempool driver requirements. You can harden
>> them, but you should not violate them.
> Fine, I will modify the routine as below: call the PMD's calc_mem_size() op and overwrite min_chunk_size if it does not suit this function's purpose.
>
> +       total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
> +       if (total_elt_sz > pg_sz)
> +               return ret;
>
> +       for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>
> -               rte_mempool_op_calc_mem_size_default(mp, n, pg_shift,
> -                            &chunk_size, &align);
> +               ret = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
> +                               &min_chunk_size, &align);
> +
> +               if (ret < 0)
>                          goto fail;
>
> +               if (min_chunk_size > pg_sz)
> +                       min_chunk_size = pg_sz;
>
> Do the changes look fine?

No, you can't violate the min_chunk_size requirement. If it cannot be
satisfied, an error should be returned. So you should not check
total_elt_sz above separately.