From: Andrew Rybchenko
To: Olivier Matz
CC: dev@dpdk.org, Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda, Stephen Hemminger,
 "Thomas Monjalon", Vamsi Krishna Attunuru
Subject: Re: [dpdk-dev] [PATCH 3/5] mempool: remove optimistic IOVA-contiguous allocation
Date: Wed, 30 Oct 2019 10:36:37 +0300
Message-ID: <79e7adee-32c0-2559-3aec-6d311f3dddfa@solarflare.com>
In-Reply-To: <20191029172018.wvz57dbuhyy4bd4k@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-4-olivier.matz@6wind.com>
 <24990d86-4ef8-4dce-113c-b824fe55e3f5@solarflare.com>
 <20191029172018.wvz57dbuhyy4bd4k@platinum>

On 10/29/19 8:20 PM, Olivier Matz wrote:
> On Tue, Oct 29, 2019 at 01:25:10PM +0300, Andrew Rybchenko wrote:
>> On 10/28/19 5:01 PM, Olivier Matz wrote:
>>> The previous commit reduced the amount of required memory when
>>> populating the mempool with non iova-contiguous memory.
>>>
>>> Since there is no big advantage to have a fully iova-contiguous mempool
>>> if it is not explicitly asked, remove this code, it simplifies the
>>> populate function.
>>>
>>> Signed-off-by: Olivier Matz
>> One comment below, other than that
>> Reviewed-by: Andrew Rybchenko
>>
>>> ---
>>>   lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------------
>>>   1 file changed, 8 insertions(+), 39 deletions(-)
>>>
>>> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
>>> index 3275e48c9..5e1f202dc 100644
>>> --- a/lib/librte_mempool/rte_mempool.c
>>> +++ b/lib/librte_mempool/rte_mempool.c
>> [snip]
>>
>>> @@ -531,36 +521,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>  			goto fail;
>>>  		}
>>> -		flags = mz_flags;
>>> -
>>>  		/* if we're trying to reserve contiguous memory, add appropriate
>>>  		 * memzone flag.
>>>  		 */
>>> -		if (try_iova_contig_mempool)
>>> -			flags |= RTE_MEMZONE_IOVA_CONTIG;
>>> +		if (min_chunk_size == (size_t)mem_size)
>>> +			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
>>>  		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>>> -				mp->socket_id, flags, align);
>>> -
>>> -		/* if we were trying to allocate contiguous memory, failed and
>>> -		 * minimum required contiguous chunk fits minimum page, adjust
>>> -		 * memzone size to the page size, and try again.
>>> -		 */
>>> -		if (mz == NULL && try_iova_contig_mempool &&
>>> -				min_chunk_size <= pg_sz) {
>>> -			try_iova_contig_mempool = false;
>>> -			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
>>> -
>>> -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
>>> -					pg_shift, &min_chunk_size, &align);
>>> -			if (mem_size < 0) {
>>> -				ret = mem_size;
>>> -				goto fail;
>>> -			}
>>> +				mp->socket_id, mz_flags, align);
>>> -			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
>>> -					mp->socket_id, flags, align);
>>> -		}
>>>  		/* don't try reserving with 0 size if we were asked to reserve
>>>  		 * IOVA-contiguous memory.
>>>  		 */
>> [snip]
>>
>>> @@ -587,7 +556,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>>>  		else
>>>  			iova = RTE_BAD_IOVA;
>>> -		if (try_iova_contig_mempool || pg_sz == 0)
>>> +		if (pg_sz == 0)
>> I think (mz_flags & RTE_MEMZONE_IOVA_CONTIG) is lost here.
> Do you mean we should do instead
>
> if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
> 	populate_iova()
> else
> 	populate_virt()
> ?

Yes.

> I would say yes, but it should be removed in patch 5/5.
> At the end, we don't want objects to be across pages, even
> with RTE_MEMZONE_IOVA_CONTIG.
>
>>>  			ret = rte_mempool_populate_iova(mp, mz->addr,
>>>  				iova, mz->len,
>>>  				rte_mempool_memchunk_mz_free,
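
For readers following the thread, here is a minimal sketch (not part of
Olivier's patch, just an illustration of the condition suggested above) of
how the populate branch could look. It assumes the local variables of
rte_mempool_populate_default() as they appear in the quoted diff (mp, mz,
mz_flags, pg_sz, iova), and the trailing arguments follow the existing
calls in rte_mempool.c:

    /* Hypothetical sketch: choose the populate call after reserving the
     * memzone. If pg_sz == 0 (no page-size constraint) or the memzone was
     * reserved with RTE_MEMZONE_IOVA_CONTIG, populate the whole zone as a
     * single IOVA-contiguous chunk; otherwise populate page by page from
     * the virtual area.
     */
    if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
            ret = rte_mempool_populate_iova(mp, mz->addr,
                    iova, mz->len,
                    rte_mempool_memchunk_mz_free,
                    (void *)(uintptr_t)mz);
    else
            ret = rte_mempool_populate_virt(mp, mz->addr,
                    mz->len, pg_sz,
                    rte_mempool_memchunk_mz_free,
                    (void *)(uintptr_t)mz);

Whether the flag check is needed here at all is what the remark about
patch 5/5 addresses: once objects are no longer allowed to cross page
boundaries, the virtual populate path becomes safe in either case.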