From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 25 Jun 2019 09:26:57 +0530
Message-ID: <20190625035700.2953-2-vattunuru@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20190625035700.2953-1-vattunuru@marvell.com>
References: <20190422061533.17538-1-kirankumark@marvell.com>
 <20190625035700.2953-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v6 1/4] lib/mempool: skip populating mempool
 objs that fall on page boundaries
List-Id: DPDK patches and discussions

From: Vamsi Attunuru

Patch adds a new mempool flag to avoid scattering mbuf memory across page
boundaries. A mempool created with this flag set is populated only with
mbufs that lie entirely within page boundaries.

Signed-off-by: Vamsi Attunuru
---
 lib/librte_mempool/rte_mempool.c             |  2 +-
 lib/librte_mempool/rte_mempool.h             |  2 ++
 lib/librte_mempool/rte_mempool_ops_default.c | 30 ++++++++++++++++++++++++++++
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 69bd2a6..175a20a 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -338,7 +338,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
 		(char *)vaddr + off, (iova == RTE_BAD_IOVA) ?
 			RTE_BAD_IOVA : (iova + off),
-		len - off, mempool_add_elem, NULL);
+		len - off, mempool_add_elem, opaque);
 
 	/* not enough room to store one object */
 	if (i == 0) {
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..97a1529 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -263,6 +263,8 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET 0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED 0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NO_PAGE_BOUND 0x0040
+/**< Don't create objs on page boundaries. */
 #define MEMPOOL_F_NO_PHYS_CONTIG MEMPOOL_F_NO_IOVA_CONTIG /* deprecated */
 
 /**
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 4e2bfc8..c029e9a 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -45,11 +45,29 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 	return mem_size;
 }
 
+/* Returns -1 if object falls on a page boundary, else returns 0 */
+static inline int
+mempool_check_obj_bounds(void *obj, uint64_t hugepage_sz, size_t elt_sz)
+{
+	uintptr_t page_end, elt_addr = (uintptr_t)obj;
+	uint32_t pg_shift = rte_bsf32(hugepage_sz);
+	uint64_t page_mask;
+
+	page_mask = ~((1ull << pg_shift) - 1);
+	page_end = (elt_addr & page_mask) + hugepage_sz;
+
+	if (elt_addr + elt_sz > page_end)
+		return -1;
+
+	return 0;
+}
+
 int
 rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
+	struct rte_memzone *mz;
 	size_t total_elt_sz;
 	size_t off;
 	unsigned int i;
@@ -58,6 +76,18 @@ rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+
+		if (mp->flags & MEMPOOL_F_NO_PAGE_BOUND) {
+			mz = (struct rte_memzone *)obj_cb_arg;
+			if (mempool_check_obj_bounds((char *)vaddr + off,
+						mz->hugepage_sz,
+						total_elt_sz) < 0) {
+				i--; /* Decrement count & skip this obj */
+				off += total_elt_sz;
+				continue;
+			}
+		}
+
 		off += mp->header_size;
 		obj = (char *)vaddr + off;
 		obj_cb(mp, obj_cb_arg, obj,
-- 
2.8.4