From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id A7FE3A04A2;
	Tue,  5 Nov 2019 16:37:28 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id C3A811BFB0;
	Tue,  5 Nov 2019 16:37:22 +0100 (CET)
Received: from proxy.6wind.com (host.76.145.23.62.rev.coltfrance.com [62.23.145.76])
	by dpdk.org (Postfix) with ESMTP id 57DE71BF9F;
	Tue,  5 Nov 2019 16:37:19 +0100 (CET)
Received: from glumotte.dev.6wind.com. (unknown [10.16.0.195])
	by proxy.6wind.com (Postfix) with ESMTP id 3A8A533B0EF;
	Tue,  5 Nov 2019 16:37:19 +0100 (CET)
From: Olivier Matz
To: dev@dpdk.org
Cc: Anatoly Burakov, Andrew Rybchenko, Ferruh Yigit,
	"Giridharan, Ganesan", Jerin Jacob Kollanukkaran,
	Kiran Kumar Kokkilagadda, Stephen Hemminger, Thomas Monjalon,
	Vamsi Krishna Attunuru, Hemant Agrawal, Nipun Gupta,
	David Marchand
Date: Tue, 5 Nov 2019 16:37:01 +0100
Message-Id: <20191105153707.14645-3-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20191105153707.14645-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
	<20191105153707.14645-1-olivier.matz@6wind.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v4 2/7] mempool: reduce wasted space on mempool populate
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

The size returned by rte_mempool_op_calc_mem_size_default() is aligned
to the specified page size. Therefore, with big pages, the returned
size can be much larger than what is actually needed to populate the
mempool. For instance, populating a mempool that requires 1.1GB of
memory with 1GB hugepages can result in allocating 2GB of memory.

This problem is hidden most of the time due to the allocation method of
rte_mempool_populate_default(): when try_iova_contig_mempool=true, it
first tries to allocate an iova-contiguous area, without the alignment
constraint. If that fails, it falls back to an aligned allocation that
does not need to be iova-contiguous. This can in turn fall back to
several smaller aligned allocations.

This commit changes rte_mempool_op_calc_mem_size_default() to relax the
alignment constraint to a cache line and to return a smaller size.

Signed-off-by: Olivier Matz
Reviewed-by: Andrew Rybchenko
Acked-by: Nipun Gupta
---
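Illustrative note (not part of the commit): below is a minimal
standalone C sketch comparing the previous page-aligned size
computation with the relaxed one for the 1.1GB / 1GB-hugepage case
mentioned in the log. The helper names calc_old()/calc_new(), the 2KB
element size and the 560000 object count are made-up example values;
the actual logic lives in rte_mempool_op_calc_mem_size_default().

/* Sketch only: compares the old, page-aligned size computation with
 * the new one. Assumes at least one object fits in a page (the real
 * code has a separate branch for the opposite case).
 */
#include <stdio.h>
#include <stddef.h>

static size_t
calc_old(size_t obj_num, size_t total_elt_sz, unsigned int pg_shift)
{
	size_t pg_sz = (size_t)1 << pg_shift;
	size_t obj_per_page = pg_sz / total_elt_sz;
	size_t pg_num = (obj_num + obj_per_page - 1) / obj_per_page;

	/* rounded up to whole pages */
	return pg_num << pg_shift;
}

static size_t
calc_new(size_t obj_num, size_t total_elt_sz, unsigned int pg_shift)
{
	size_t pg_sz = (size_t)1 << pg_shift;
	size_t obj_per_page = pg_sz / total_elt_sz;
	size_t objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
	size_t mem_size;

	/* only what the last page really needs, plus the full pages */
	mem_size = objs_in_last_page * total_elt_sz;
	mem_size += ((obj_num - objs_in_last_page) / obj_per_page) << pg_shift;
	/* margin for a non page-aligned allocation */
	mem_size += total_elt_sz - 1;
	return mem_size;
}

int main(void)
{
	size_t total_elt_sz = 2048;	/* example object size */
	size_t obj_num = 560000;	/* ~1.1GB of objects */
	unsigned int pg_shift = 30;	/* 1GB hugepages */

	printf("old: %zu MB\n", calc_old(obj_num, total_elt_sz, pg_shift) >> 20);
	printf("new: %zu MB\n", calc_new(obj_num, total_elt_sz, pg_shift) >> 20);
	return 0;
}

With these example numbers the old formula reports about 2 GiB while
the new one reports about 1.1 GiB.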
 lib/librte_mempool/rte_mempool.c             |  7 ++---
 lib/librte_mempool/rte_mempool.h             |  9 +++----
 lib/librte_mempool/rte_mempool_ops.c         |  4 ++-
 lib/librte_mempool/rte_mempool_ops_default.c | 28 +++++++++++++++-----
 4 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 88e49c751..4e0d576f5 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -477,11 +477,8 @@ rte_mempool_populate_default(struct rte_mempool *mp)
 	 * wasting some space this way, but it's much nicer than looping around
 	 * trying to reserve each and every page size.
 	 *
-	 * However, since size calculation will produce page-aligned sizes, it
-	 * makes sense to first try and see if we can reserve the entire memzone
-	 * in one contiguous chunk as well (otherwise we might end up wasting a
-	 * 1G page on a 10MB memzone). If we fail to get enough contiguous
-	 * memory, then we'll go and reserve space page-by-page.
+	 * If we fail to get enough contiguous memory, then we'll go and
+	 * reserve space in smaller chunks.
 	 *
 	 * We also have to take into account the fact that memory that we're
 	 * going to allocate from can belong to an externally allocated memory
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 0fe8aa7b8..78b687bb6 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -458,7 +458,7 @@ typedef unsigned (*rte_mempool_get_count)(const struct rte_mempool *mp);
 * @param[out] align
 *   Location for required memory chunk alignment.
 * @return
- *   Required memory size aligned at page boundary.
+ *   Required memory size.
 */
 typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
		uint32_t obj_num,  uint32_t pg_shift,
@@ -477,11 +477,8 @@ typedef ssize_t (*rte_mempool_calc_mem_size_t)(const struct rte_mempool *mp,
 * that pages are grouped in subsets of physically continuous pages big
 * enough to store at least one object.
 *
- * Minimum size of memory chunk is a maximum of the page size and total
- * element size.
- *
- * Required memory chunk alignment is a maximum of page size and cache
- * line size.
+ * Minimum size of memory chunk is the total element size.
+ * Required memory chunk alignment is the cache line size.
 */
 ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
		uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_ops.c b/lib/librte_mempool/rte_mempool_ops.c
index e02eb702c..22c5251eb 100644
--- a/lib/librte_mempool/rte_mempool_ops.c
+++ b/lib/librte_mempool/rte_mempool_ops.c
@@ -100,7 +100,9 @@ rte_mempool_ops_get_count(const struct rte_mempool *mp)
 	return ops->get_count(mp);
 }

-/* wrapper to notify new memory area to external mempool */
+/* wrapper to calculate the memory size required to store given number
+ * of objects
+ */
 ssize_t
 rte_mempool_ops_calc_mem_size(const struct rte_mempool *mp,
				uint32_t obj_num, uint32_t pg_shift,
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 4e2bfc82d..f6aea7662 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -12,7 +12,7 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
				     size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
-	size_t obj_per_page, pg_num, pg_sz;
+	size_t obj_per_page, pg_sz, objs_in_last_page;
 	size_t mem_size;

 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
@@ -33,14 +33,30 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 			mem_size =
				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
 		} else {
-			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
-			mem_size = pg_num << pg_shift;
+			/* In the best case, the allocator will return a
+			 * page-aligned address. For example, with 5 objs,
+			 * the required space is as below:
+			 *  |     page0     |     page1     |  page2 (last) |
+			 *  |obj0 |obj1 |xxx|obj2 |obj3 |xxx|obj4|
+			 *  <------------- mem_size ------------->
+			 */
+			objs_in_last_page = ((obj_num - 1) % obj_per_page) + 1;
+			/* room required for the last page */
+			mem_size = objs_in_last_page * total_elt_sz;
+			/* room required for other pages */
+			mem_size += ((obj_num - objs_in_last_page) /
+				obj_per_page) << pg_shift;
+
+			/* In the worst case, the allocator returns a
+			 * non-aligned pointer, wasting up to
+			 * total_elt_sz. Add a margin for that.
+			 */
+			mem_size += total_elt_sz - 1;
 		}
 	}

-	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
-
-	*align = RTE_MAX((size_t)RTE_CACHE_LINE_SIZE, (size_t)1 << pg_shift);
+	*min_chunk_size = total_elt_sz;
+	*align = RTE_CACHE_LINE_SIZE;

 	return mem_size;
 }
-- 
2.20.1