From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from dispatch1-us1.ppe-hosted.com (dispatch1-us1.ppe-hosted.com [67.231.154.164])
	by dpdk.org (Postfix) with ESMTP id ECD844C94
	for ; Fri, 27 Jul 2018 15:46:14 +0200 (CEST)
X-Virus-Scanned: Proofpoint Essentials engine
Received: from webmail.solarflare.com (webmail.solarflare.com [12.187.104.26])
	(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mx1-us1.ppe-hosted.com (Proofpoint Essentials ESMTP Server) with ESMTPS id 6FF37780069;
	Fri, 27 Jul 2018 13:46:13 +0000 (UTC)
Received: from ocex03.SolarFlarecom.com (10.20.40.36) by ocex03.SolarFlarecom.com
	(10.20.40.36) with Microsoft SMTP Server (TLS) id 15.0.1044.25;
	Fri, 27 Jul 2018 06:46:10 -0700
Received: from opal.uk.solarflarecom.com (10.17.10.1) by ocex03.SolarFlarecom.com
	(10.20.40.36) with Microsoft SMTP Server (TLS) id 15.0.1044.25
	via Frontend Transport; Fri, 27 Jul 2018 06:46:10 -0700
Received: from ukv-loginhost.uk.solarflarecom.com (ukv-loginhost.uk.solarflarecom.com [10.17.10.39])
	by opal.uk.solarflarecom.com (8.13.8/8.13.8) with ESMTP id w6RDk9WL019846;
	Fri, 27 Jul 2018 14:46:09 +0100
Received: from ukv-loginhost.uk.solarflarecom.com (localhost [127.0.0.1])
	by ukv-loginhost.uk.solarflarecom.com (Postfix) with ESMTP id 447AC1657C0;
	Fri, 27 Jul 2018 14:46:09 +0100 (BST)
From: Andrew Rybchenko
To:
CC: Olivier Matz, Thomas Monjalon
Date: Fri, 27 Jul 2018 14:46:05 +0100
Message-ID: <1532699165-28063-2-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1532699165-28063-1-git-send-email-arybchenko@solarflare.com>
References: <1532699165-28063-1-git-send-email-arybchenko@solarflare.com>
MIME-Version: 1.0
Content-Type: text/plain
X-TM-AS-Product-Ver: SMEX-12.5.0.1300-8.5.1010-23994.005
X-TM-AS-Result: No-5.922700-4.000000-10
X-TMASE-MatchedRID: 3/N5Oa22rDqL06bhI7iKZGivjLE8DPtZvJ9Xvh5CmT5UjspoiX02F3B4
	4IkzjfYyEcE+LOiKuIu8tEB2PzzoWktr2/0YfvSFkDpLRKO9xhSENvZav9mwIbqln+jYe7ZhhdH
	CSx8AAMinjTWnjQxrsaoVYp1T01im1dDZHbsJnLMHTkHUtPYzxWlYsa84w2hTjNnoU1fopou+0z
	XVdrHsciAvAWDRMWYuwstATZ3MImnxr/KVtJp+VGpAeB7X2zZKfS0Ip2eEHnz3IzXlXlpamPoLR
	4+zsDTtviI7BBDiM2KC4HCGy7JRZAI9qQKh9a3XQr810tMNk9f0J3cma6i7lg==
X-TM-AS-User-Approved-Sender: No
X-TM-AS-User-Blocked-Sender: No
X-TMASE-Result: 10--5.922700-4.000000
X-TMASE-Version: SMEX-12.5.0.1300-8.5.1010-23994.005
X-MDID: 1532699174-zx6jRoUGu94c
Subject: [dpdk-dev] [PATCH v2 2/2] mempool: fold memory size calculation helper
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Fri, 27 Jul 2018 13:46:15 -0000

rte_mempool_calc_mem_size_helper() was introduced to avoid code
duplication and used in deprecated rte_mempool_mem_size() and
rte_mempool_op_calc_mem_size_default(). Now the first one is removed
and it is better to fold the helper into the second one to make it
more readable.

Signed-off-by: Andrew Rybchenko
---
 lib/librte_mempool/rte_mempool.c             | 25 --------------------
 lib/librte_mempool/rte_mempool.h             | 22 -----------------
 lib/librte_mempool/rte_mempool_ops_default.c | 25 +++++++++++++++++---
 3 files changed, 22 insertions(+), 50 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index d48e53c7e..52bc97a75 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -225,31 +225,6 @@ rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
 	return sz->total_size;
 }
 
-
-/*
- * Internal function to calculate required memory chunk size.
- */
-size_t
-rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-	uint32_t pg_shift)
-{
-	size_t obj_per_page, pg_num, pg_sz;
-
-	if (total_elt_sz == 0)
-		return 0;
-
-	if (pg_shift == 0)
-		return total_elt_sz * elt_num;
-
-	pg_sz = (size_t)1 << pg_shift;
-	obj_per_page = pg_sz / total_elt_sz;
-	if (obj_per_page == 0)
-		return RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * elt_num;
-
-	pg_num = (elt_num + obj_per_page - 1) / obj_per_page;
-	return pg_num << pg_shift;
-}
-
 /* free a memchunk allocated with rte_memzone_reserve() */
 static void
 rte_mempool_memchunk_mz_free(__rte_unused struct rte_mempool_memhdr *memhdr,
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 5d1602555..7c9cd9a2f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -487,28 +487,6 @@ ssize_t rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 		uint32_t obj_num, uint32_t pg_shift,
 		size_t *min_chunk_size, size_t *align);
 
-/**
- * @internal Helper function to calculate memory size required to store
- * specified number of objects in assumption that the memory buffer will
- * be aligned at page boundary.
- *
- * Note that if object size is bigger than page size, then it assumes
- * that pages are grouped in subsets of physically continuous pages big
- * enough to store at least one object.
- *
- * @param elt_num
- *   Number of elements.
- * @param total_elt_sz
- *   The size of each element, including header and trailer, as returned
- *   by rte_mempool_calc_obj_size().
- * @param pg_shift
- *   LOG2 of the physical pages size. If set to 0, ignore page boundaries.
- * @return
- *   Required memory size aligned at page boundary.
- */
-size_t rte_mempool_calc_mem_size_helper(uint32_t elt_num, size_t total_elt_sz,
-		uint32_t pg_shift);
-
 /**
  * Function to be called for each populated object.
  *
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index fd63ca137..4e2bfc82d 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -12,12 +12,31 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 				     size_t *min_chunk_size, size_t *align)
 {
 	size_t total_elt_sz;
+	size_t obj_per_page, pg_num, pg_sz;
 	size_t mem_size;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
-	mem_size = rte_mempool_calc_mem_size_helper(obj_num, total_elt_sz,
-			pg_shift);
+	if (total_elt_sz == 0) {
+		mem_size = 0;
+	} else if (pg_shift == 0) {
+		mem_size = total_elt_sz * obj_num;
+	} else {
+		pg_sz = (size_t)1 << pg_shift;
+		obj_per_page = pg_sz / total_elt_sz;
+		if (obj_per_page == 0) {
+			/*
+			 * Note that if object size is bigger than page size,
+			 * then it is assumed that pages are grouped in subsets
+			 * of physically continuous pages big enough to store
+			 * at least one object.
+			 */
+			mem_size =
+				RTE_ALIGN_CEIL(total_elt_sz, pg_sz) * obj_num;
+		} else {
+			pg_num = (obj_num + obj_per_page - 1) / obj_per_page;
+			mem_size = pg_num << pg_shift;
+		}
+	}
 
 	*min_chunk_size = RTE_MAX((size_t)1 << pg_shift, total_elt_sz);
-- 
2.17.1