From mboxrd@z Thu Jan 1 00:00:00 1970
To: Olivier Matz , Vamsi Krishna Attunuru ,
CC: Thomas Monjalon , Anatoly Burakov , Jerin Jacob Kollanukkaran , Kokkilagadda , Ferruh Yigit
References: <20190719133845.32432-1-olivier.matz@6wind.com> <20190719133845.32432-4-olivier.matz@6wind.com>
From: Andrew Rybchenko
Message-ID: <37519783-13aa-6854-5ff3-84d21e35fe97@solarflare.com>
Date: Wed, 7 Aug 2019 18:21:41 +0300
In-Reply-To: <20190719133845.32432-4-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [RFC 3/4] mempool: introduce function to get mempool page size
List-Id: DPDK patches and discussions
Sender: "dev"

On 7/19/19 4:38 PM, Olivier Matz wrote:
> In rte_mempool_populate_default(), we determine the page size,
> which is needed for calc_size and allocation of memory.
>
> Move this in a function and export it, it will be used in next
> commit.

The major change here is that page sizes are now taken into account even in the RTE_IOVA_VA case. As I understand it, the initial problem is precisely that page boundaries should be respected even in the RTE_IOVA_VA case. It looks like we can have that even without removing the attempt at contiguous allocation.
> Signed-off-by: Olivier Matz
> ---
>  lib/librte_mempool/rte_mempool.c | 50 +++++++++++++++++++++++++---------------
>  lib/librte_mempool/rte_mempool.h |  6 +++++
>  2 files changed, 37 insertions(+), 19 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 335032dc8..7def0ba68 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -414,6 +414,32 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	return ret;
>  }
>
> +int
> +rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
> +{
> +	bool need_iova_contig_obj;
> +	bool alloc_in_ext_mem;
> +	int ret;
> +
> +	/* check if we can retrieve a valid socket ID */
> +	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
> +	if (ret < 0)
> +		return -EINVAL;
> +	alloc_in_ext_mem = (ret == 1);
> +	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> +
> +	if (!need_iova_contig_obj)
> +		*pg_sz = 0;
> +	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> +		*pg_sz = get_min_page_size(mp->socket_id);
> +	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
> +		*pg_sz = get_min_page_size(mp->socket_id);
> +	else
> +		*pg_sz = getpagesize();
> +
> +	return 0;
> +}
> +
>  /* Default function to populate the mempool: allocate memory in memzones,
>   * and populate them. Return the number of objects added, or a negative
>   * value on error.
> @@ -425,12 +451,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	char mz_name[RTE_MEMZONE_NAMESIZE];
>  	const struct rte_memzone *mz;
>  	ssize_t mem_size;
> -	size_t align, pg_sz, pg_shift;
> +	size_t align, pg_sz, pg_shift = 0;
>  	rte_iova_t iova;
>  	unsigned mz_id, n;
>  	int ret;
>  	bool need_iova_contig_obj;
> -	bool alloc_in_ext_mem;
>
>  	ret = mempool_ops_alloc_once(mp);
>  	if (ret != 0)
> @@ -482,26 +507,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	 * synonymous with IOVA contiguousness will not hold.
>  	 */
>
> -	/* check if we can retrieve a valid socket ID */
> -	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
> -	if (ret < 0)
> -		return -EINVAL;
> -	alloc_in_ext_mem = (ret == 1);
>  	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> +	ret = rte_mempool_get_page_size(mp, &pg_sz);
> +	if (ret < 0)
> +		return ret;
>
> -	if (!need_iova_contig_obj) {
> -		pg_sz = 0;
> -		pg_shift = 0;
> -	} else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA) {
> -		pg_sz = 0;
> -		pg_shift = 0;
> -	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
> -		pg_sz = get_min_page_size(mp->socket_id);
> -		pg_shift = rte_bsf32(pg_sz);
> -	} else {
> -		pg_sz = getpagesize();
> +	if (pg_sz != 0)
>  		pg_shift = rte_bsf32(pg_sz);
> -	}
>
>  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
>  		size_t min_chunk_size;
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 7bc10e699..00b927989 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1692,6 +1692,12 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
>  void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
>  		void *arg);
>
> +/**
> + * @internal Get page size used for mempool object allocation.
> + */
> +int
> +rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
> +
>  #ifdef __cplusplus
>  }
>  #endif
>