From: Olivier Matz <olivier.matz@6wind.com>
To: Vamsi Krishna Attunuru, dev@dpdk.org
Cc: Andrew Rybchenko, Thomas Monjalon, Anatoly Burakov,
	Jerin Jacob Kollanukkaran, Kokkilagadda, Ferruh Yigit
Date: Fri, 19 Jul 2019 15:38:45 +0200
Message-Id: <20190719133845.32432-5-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20190719133845.32432-1-olivier.matz@6wind.com>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
Subject: [dpdk-dev] [RFC 4/4] mempool: prevent objects from being across pages

When using IOVA-contiguous memory and objects smaller than the page
size, ensure that objects are not located across several pages.

Signed-off-by: Vamsi Krishna Attunuru
Signed-off-by: Olivier Matz
---
 lib/librte_mempool/rte_mempool_ops_default.c | 39 ++++++++++++++++++++++++++--
 1 file changed, 37 insertions(+), 2 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 4e2bfc82d..2bbd67367 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -45,19 +45,54 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 	return mem_size;
 }
 
+/* Returns -1 if object falls on a page boundary, else returns 0 */
+static inline int
+mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
+{
+	uintptr_t page_end, elt_addr = (uintptr_t)obj;
+	uint32_t pg_shift;
+	uint64_t page_mask;
+
+	if (pg_sz == 0)
+		return 0;
+	if (elt_sz > pg_sz)
+		return 0;
+
+	pg_shift = rte_bsf32(pg_sz);
+	page_mask = ~((1ull << pg_shift) - 1);
+	page_end = (elt_addr & page_mask) + pg_sz;
+
+	if (elt_addr + elt_sz > page_end)
+		return -1;
+
+	return 0;
+}
+
 int
 rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
-	size_t total_elt_sz;
+	size_t total_elt_sz, pg_sz;
 	size_t off;
 	unsigned int i;
 	void *obj;
 
+	rte_mempool_get_page_size(mp, &pg_sz);
+
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
-	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+	for (off = 0, i = 0; i < max_objs; i++) {
+		/* align offset to next page start if required */
+		if (mempool_check_obj_bounds((char *)vaddr + off,
+					pg_sz, total_elt_sz) < 0) {
+			off += RTE_PTR_ALIGN_CEIL((char *)vaddr + off, pg_sz) -
+				((char *)vaddr + off);
+		}
+
+		if (off + total_elt_sz > len)
+			break;
+
 		off += mp->header_size;
 		obj = (char *)vaddr + off;
 		obj_cb(mp, obj_cb_arg, obj,
-- 
2.11.0
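
[Editor's note] For readers following the patch, below is a minimal
standalone sketch of the boundary check introduced above. It is not
part of the patch: the helper is re-implemented outside of DPDK (plain
power-of-two mask arithmetic instead of rte_bsf32(), a hypothetical
check_obj_bounds() name, and an assumed 4 KiB page size) so the
arithmetic can be compiled and verified in isolation:

#include <stdint.h>
#include <stdio.h>

/* Same logic as the patch's mempool_check_obj_bounds(): return -1 if
 * the element [elt_addr, elt_addr + elt_sz) crosses a page boundary,
 * else 0. pg_sz must be a power of two, as it is in the mempool code. */
static int
check_obj_bounds(uintptr_t elt_addr, uint64_t pg_sz, size_t elt_sz)
{
	uintptr_t page_end;

	if (pg_sz == 0)
		return 0;
	if (elt_sz > pg_sz)
		return 0;

	page_end = (elt_addr & ~(pg_sz - 1)) + pg_sz;
	if (elt_addr + elt_sz > page_end)
		return -1;

	return 0;
}

int
main(void)
{
	uint64_t pg_sz = 4096; /* assumed page size, for illustration */

	/* 512-byte object at the start of a page: fits, prints 0 */
	printf("%d\n", check_obj_bounds(0x1000, pg_sz, 512));
	/* 512-byte object starting 256 bytes before page end: prints -1 */
	printf("%d\n", check_obj_bounds(0x1f00, pg_sz, 512));
	return 0;
}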
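
Similarly, a sketch of the realignment step in the populate loop, with
RTE_PTR_ALIGN_CEIL mimicked by plain integer arithmetic and a made-up
base address (align_off_to_next_page() is a hypothetical name, not a
DPDK API):

#include <stdint.h>
#include <stdio.h>

/* Mimics the patch's
 *   off += RTE_PTR_ALIGN_CEIL((char *)vaddr + off, pg_sz) -
 *          ((char *)vaddr + off);
 * i.e. advance 'off' so that vaddr + off lands on the next page start
 * (a no-op when it is already page-aligned). */
static size_t
align_off_to_next_page(uintptr_t vaddr, size_t off, uint64_t pg_sz)
{
	uintptr_t cur = vaddr + off;
	uintptr_t aligned = (cur + pg_sz - 1) & ~(pg_sz - 1);

	return off + (aligned - cur);
}

int
main(void)
{
	uintptr_t vaddr = 0x100000; /* made-up page-aligned base */
	uint64_t pg_sz = 4096;

	/* An object at off = 0xf00 would cross into the next page, so
	 * the loop bumps off to 0x1000, the next page start. */
	printf("0x%zx\n", align_off_to_next_page(vaddr, 0xf00, pg_sz));
	return 0;
}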