From: Andrew Rybchenko
To: Olivier Matz, dev@dpdk.org
CC: Anatoly Burakov, Ferruh Yigit, "Giridharan, Ganesan",
 Jerin Jacob Kollanukkaran, Kiran Kumar Kokkilagadda,
 Stephen Hemminger, Thomas Monjalon, Vamsi Krishna Attunuru
Subject: Re: [dpdk-dev] [PATCH 3/5] mempool: remove optimistic IOVA-contiguous allocation
Date: Tue, 29 Oct 2019 13:25:10 +0300
Message-ID: <24990d86-4ef8-4dce-113c-b824fe55e3f5@solarflare.com>
In-Reply-To: <20191028140122.9592-4-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20191028140122.9592-1-olivier.matz@6wind.com>
 <20191028140122.9592-4-olivier.matz@6wind.com>
List-Id: DPDK patches and discussions

On 10/28/19 5:01 PM, Olivier Matz wrote:
> The previous commit reduced the amount of required memory when
> populating the mempool with non iova-contiguous memory.
>
> Since there is no big advantage to have a fully iova-contiguous mempool
> if it is not explicitly asked, remove this code, it simplifies the
> populate function.
>
> Signed-off-by: Olivier Matz

One comment below, other than that
Reviewed-by: Andrew Rybchenko

> ---
>  lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------
>  1 file changed, 8 insertions(+), 39 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 3275e48c9..5e1f202dc 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c

[snip]

> @@ -531,36 +521,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  			goto fail;
>  		}
>
> -		flags = mz_flags;
> -
>  		/* if we're trying to reserve contiguous memory, add appropriate
>  		 * memzone flag.
>  		 */
> -		if (try_iova_contig_mempool)
> -			flags |= RTE_MEMZONE_IOVA_CONTIG;
> +		if (min_chunk_size == (size_t)mem_size)
> +			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
>
>  		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> -				mp->socket_id, flags, align);
> -
> -		/* if we were trying to allocate contiguous memory, failed and
> -		 * minimum required contiguous chunk fits minimum page, adjust
> -		 * memzone size to the page size, and try again.
> -		 */
> -		if (mz == NULL && try_iova_contig_mempool &&
> -				min_chunk_size <= pg_sz) {
> -			try_iova_contig_mempool = false;
> -			flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> -
> -			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> -					pg_shift, &min_chunk_size, &align);
> -			if (mem_size < 0) {
> -				ret = mem_size;
> -				goto fail;
> -			}
> +				mp->socket_id, mz_flags, align);
>
> -			mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> -					mp->socket_id, flags, align);
> -		}
>  		/* don't try reserving with 0 size if we were asked to reserve
>  		 * IOVA-contiguous memory.
>  		 */

[snip]

> @@ -587,7 +556,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  		else
>  			iova = RTE_BAD_IOVA;
>
> -		if (try_iova_contig_mempool || pg_sz == 0)
> +		if (pg_sz == 0)

I think (mz_flags & RTE_MEMZONE_IOVA_CONTIG) is lost here; see the
sketch at the end of this mail.

>  			ret = rte_mempool_populate_iova(mp, mz->addr,
>  				iova, mz->len,
>  				rte_mempool_memchunk_mz_free,
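
To make the suggestion concrete, here is a rough, untested sketch of
what I have in mind, written against this patch (the
rte_mempool_populate_virt() branch below is reproduced from the
existing code from memory, so it may not match your tree exactly):

	/* Populate in a single pass either when there is no page-size
	 * constraint (pg_sz == 0) or when the memzone above was
	 * reserved as IOVA-contiguous, so that the min_chunk_size
	 * reported by the mempool ops is still honoured.
	 */
	if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
		ret = rte_mempool_populate_iova(mp, mz->addr,
			iova, mz->len,
			rte_mempool_memchunk_mz_free,
			(void *)(uintptr_t)mz);
	else
		ret = rte_mempool_populate_virt(mp, mz->addr,
			mz->len, pg_sz,
			rte_mempool_memchunk_mz_free,
			(void *)(uintptr_t)mz);

If I'm not mistaken, without such a check a memzone reserved with
RTE_MEMZONE_IOVA_CONTIG would still be populated page by page via
rte_mempool_populate_virt(), and the min_chunk_size requirement would
no longer be guaranteed.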