From: Olivier Matz
To: dev@dpdk.org
Date: Wed, 9 Mar 2016 17:19:26 +0100
Message-Id: <1457540381-20274-21-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1457540381-20274-1-git-send-email-olivier.matz@6wind.com>
References: <1457540381-20274-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [RFC 20/35] mempool: make page size optional when getting xmem size

Update rte_mempool_xmem_size() so that when the page_shift argument is set
to 0, the memory is assumed to be physically contiguous, allowing page
boundaries to be ignored. This will be used in the next commits.

While at it, rename the variable 'n' to 'obj_per_page' and avoid the
assignment inside the if().

Signed-off-by: Olivier Matz
---
 lib/librte_mempool/rte_mempool.c | 18 +++++++++---------
 lib/librte_mempool/rte_mempool.h |  2 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 5bfe4cb..805ac19 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -254,18 +254,18 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 size_t
 rte_mempool_xmem_size(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
 {
-	size_t n, pg_num, pg_sz, sz;
+	size_t obj_per_page, pg_num, pg_sz;
 
-	pg_sz = (size_t)1 << pg_shift;
+	if (pg_shift == 0)
+		return total_elt_sz * elt_num;
 
-	if ((n = pg_sz / total_elt_sz) > 0) {
-		pg_num = (elt_num + n - 1) / n;
-		sz = pg_num << pg_shift;
-	} else {
-		sz = RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;
-	}
+	pg_sz = (size_t)1 << pg_shift;
+	obj_per_page = pg_sz / total_elt_sz;
+	if (obj_per_page == 0)
+		return RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;
 
-	return sz;
+	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
+	return pg_num << pg_shift;
 }
 
 /*
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index dacdf6c..2cce7ee 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1257,7 +1257,7 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
  *   The size of each element, including header and trailer, as returned
  *   by rte_mempool_calc_obj_size().
  * @param pg_shift
- *   LOG2 of the physical pages size.
+ *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
  * @return
  *   Required memory size aligned at page boundary.
  */
-- 
2.1.4
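
As a quick illustration of the new pg_shift == 0 behaviour, here is a small
standalone sketch that mirrors the patched arithmetic. It is not DPDK code
itself, and the element size and the 2 MB page shift used below are only
illustrative assumptions:

    #include <stdint.h>
    #include <stdio.h>

    /* Standalone mirror of the patched rte_mempool_xmem_size() logic.
     * pg_shift == 0 means the memory is physically contiguous, so the
     * size is simply total_elt_sz * elt_num with no per-page padding. */
    static size_t
    xmem_size_sketch(uint32_t elt_num, size_t total_elt_sz, uint32_t pg_shift)
    {
            size_t obj_per_page, pg_num, pg_sz;

            if (pg_shift == 0)
                    return total_elt_sz * elt_num;

            pg_sz = (size_t)1 << pg_shift;
            obj_per_page = pg_sz / total_elt_sz;
            if (obj_per_page == 0)
                    /* object bigger than a page: round it up to a multiple
                     * of the page size (what RTE_ALIGN_CEIL does) */
                    return ((total_elt_sz + pg_sz - 1) / pg_sz) * pg_sz * elt_num;

            /* objects never cross a page boundary: count full pages */
            pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
            return pg_num << pg_shift;
    }

    int main(void)
    {
            /* 1024 objects of 2176 bytes each (values chosen for illustration) */
            printf("contiguous (pg_shift=0): %zu bytes\n",
                   xmem_size_sketch(1024, 2176, 0));
            printf("2 MB pages (pg_shift=21): %zu bytes\n",
                   xmem_size_sketch(1024, 2176, 21));
            return 0;
    }

With a real mempool the element size would come from
rte_mempool_calc_obj_size() and the shift from the actual hugepage size; the
point is only that the contiguous case needs no padding at page boundaries,
while the paged case rounds up to whole pages.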