From: Andrew Rybchenko
To: Vamsi Krishna Attunuru, dev@dpdk.org
CC: thomas@monjalon.net, Jerin Jacob Kollanukkaran, olivier.matz@6wind.com,
 ferruh.yigit@intel.com, anatoly.burakov@intel.com, Kiran Kumar Kokkilagadda
Date: Tue, 23 Jul 2019 22:33:34 +0300
Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with page
 sized chunks of memory

On 7/23/19 3:28 PM, Vamsi Krishna Attunuru wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko
>> Sent: Tuesday, July 23, 2019 4:38 PM
>> To: Vamsi Krishna Attunuru; dev@dpdk.org
>> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran;
>> olivier.matz@6wind.com; ferruh.yigit@intel.com;
>> anatoly.burakov@intel.com; Kiran Kumar Kokkilagadda
>> Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with
>> page sized chunks of memory
>>
>> On 7/23/19 8:38 AM, vattunuru@marvell.com wrote:
>>> From: Vamsi Attunuru
>>>
>>> Patch adds a routine to populate a mempool from page-aligned and
>>> page-sized chunks of memory, to ensure that memory objects do not
>>> fall across page boundaries. It is useful for applications that
>>> require physically contiguous mbuf memory while running in IOVA=VA
>>> mode.
>>>
>>> Signed-off-by: Vamsi Attunuru
>>> Signed-off-by: Kiran Kumar K
>>> ---
<...>
>>> +	int ret;
>>> +
>>> +	ret = mempool_ops_alloc_once(mp);
>>> +	if (ret != 0)
>>> +		return ret;
>>> +
>>> +	if (mp->nb_mem_chunks != 0)
>>> +		return -EEXIST;
>>> +
>>> +	pg_sz = get_min_page_size(mp->socket_id);
>>> +	pg_shift = rte_bsf32(pg_sz);
>>> +
>>> +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>>> +
>>> +		rte_mempool_op_calc_mem_size_default(mp, n, pg_shift,
>>> +				&chunk_size, &align);
>>
>> It is incorrect to ignore the mempool ops and enforce the default
>> handler here. Use rte_mempool_ops_calc_mem_size() instead.
>> It is also better to treat a negative return value as an error, as
>> the default function does.
>> (It may be my mistake that this is not mentioned in the return value
>> description.)
>>
> Yes, I thought so, but rte_mempool_ops_calc_mem_size() would in turn
> call the mempool PMD's calc_mem_size() op, which may or may not return
> the required chunk_size and align values in this case. Otherwise it
> would be skipped completely and pg_sz used for both the memzone length
> and alignment; in any case, this page-sized alignment suits the PMD's
> specific alignment requirements.

Anyway, it is incorrect to violate the driver ops; the default is
definitely wrong for the bucket driver, for example. min_chunk_size and
align are mempool driver requirements: you can harden them, but you
should not violate them.
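
To illustrate, a minimal sketch of the loop with the driver ops
honoured (a fragment only: it reuses mp, n, pg_sz, pg_shift, mz_id and
ret from the quoted patch, and assumes the patch has an error-handling
label to jump to):

	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
		ssize_t mem_size;
		size_t min_chunk_size, align;

		/* Ask the mempool driver, not the default handler,
		 * for its size and alignment requirements. */
		mem_size = rte_mempool_ops_calc_mem_size(mp, n, pg_shift,
				&min_chunk_size, &align);
		if (mem_size < 0) {
			/* Treat a negative value as an error, as the
			 * default function does. */
			ret = mem_size;
			goto fail;
		}

		/* Harden rather than violate the driver requirement:
		 * a minimum chunk larger than a page cannot be
		 * satisfied by page-sized chunks. */
		if (min_chunk_size > pg_sz) {
			ret = -EINVAL;
			goto fail;
		}

		/* ... allocate a page-sized, page-aligned memzone and
		 * populate it as the patch already does ... */
	}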