From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id CCDA7A0471
	for <public@inbox.dpdk.org>; Wed, 17 Jul 2019 11:05:01 +0200 (CEST)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id 34263276C;
	Wed, 17 Jul 2019 11:04:57 +0200 (CEST)
Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com
 [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 1B6E61DBE
 for <dev@dpdk.org>; Wed, 17 Jul 2019 11:04:54 +0200 (CEST)
Received: from pps.filterd (m0045851.ppops.net [127.0.0.1])
 by mx0b-0016f401.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 x6H93ZBH001991; Wed, 17 Jul 2019 02:04:54 -0700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com;
 h=from : to : cc :
 subject : date : message-id : in-reply-to : references : mime-version :
 content-type; s=pfpt0818; bh=bhIk2ix3UfThvL5mxEF5ava1Ge5pp9WCNPgjkg/Lo9s=;
 b=qoSZx69RN16YxJpISwmol19LIoQ2P6ZnRdmGAfHkRxusgW/GajXl0wJCu8Roi1in7PUs
 wm9u9y//Zs3O/s2Ilt8fvQVkFxYFFObMVlC9iJTH9jb8P5MNiTx0m/y/7b/l8UlXaCF8
 wABcSchZC9Tr1LjmpsKTVaqbzh1NoBsDKzt7IAks1ae3bn6rRIkk2JOg/tl+Ca4K1lU0
 cu/DhWLzPPt3tl7GnDDZh5hEXIL0bVREkukifTcT9JhdJyEK+qVEg56QUEgyesjkiSNA
 f1lbiw4G9ivO/14mDYeUn3TXuQ7lBu+Tlts1B2i/FnytsRKmJC1Qq00C2cYgmGLLu0+N 9g== 
Received: from sc-exch01.marvell.com ([199.233.58.181])
 by mx0b-0016f401.pphosted.com with ESMTP id 2ts0a26x0k-2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT);
 Wed, 17 Jul 2019 02:04:53 -0700
Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com
 (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Wed, 17 Jul
 2019 02:04:52 -0700
Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com
 (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend
 Transport; Wed, 17 Jul 2019 02:04:52 -0700
Received: from hyd1vattunuru-dt.caveonetworks.com (unknown [10.29.52.72])
 by maili.marvell.com (Postfix) with ESMTP id D82A43F7040;
 Wed, 17 Jul 2019 02:04:49 -0700 (PDT)
From: <vattunuru@marvell.com>
To: <dev@dpdk.org>
CC: <thomas@monjalon.net>, <jerinj@marvell.com>, <olivier.matz@6wind.com>,
 <ferruh.yigit@intel.com>, <anatoly.burakov@intel.com>,
 <kirankumark@marvell.com>, Vamsi Attunuru <vattunuru@marvell.com>
Date: Wed, 17 Jul 2019 14:34:05 +0530
Message-ID: <20190717090408.13717-2-vattunuru@marvell.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20190717090408.13717-1-vattunuru@marvell.com>
References: <20190625035700.2953-1-vattunuru@marvell.com>
 <20190717090408.13717-1-vattunuru@marvell.com>
MIME-Version: 1.0
Content-Type: text/plain
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:5.22.84,1.0.8
 definitions=2019-07-17_03:2019-07-16,2019-07-17 signatures=0
Subject: [dpdk-dev] [PATCH v7 1/4] mempool: modify mempool populate() to
	skip objects from page boundaries
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

From: Vamsi Attunuru <vattunuru@marvell.com>

Currently, the physical addresses backing a mempool object populated by
the default mempool populate() routine may not be contiguous within that
mbuf's range, since an object can span a page boundary.

This patch ensures that each object's physical address range is
contiguous by modifying the default behaviour of mempool populate() to
prevent objects from spanning two pages, except when the object size is
bigger than the page size.

Since the memory overhead of this change is minimal given hugepage sizes
of 512 MB and 1 GB, the default behaviour is changed for all cases except
objects bigger than the page size.
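
For illustration, below is a minimal standalone sketch of the
page-crossing check this patch introduces. The helper name
obj_crosses_page and the sample sizes are illustrative only and not part
of the patch; the mask is derived directly from the page size here
instead of via rte_bsf32() as in the patch.

#include <stdint.h>
#include <stdio.h>

/* Illustrative standalone example, not part of the patch below.
 * Return -1 if an element of elt_sz bytes starting at obj_addr would
 * cross a boundary of a page_sz-sized page (power of two), else 0.
 */
static int
obj_crosses_page(uintptr_t obj_addr, uint64_t page_sz, size_t elt_sz)
{
	uint64_t page_mask = ~(page_sz - 1);
	uintptr_t page_end = (obj_addr & page_mask) + page_sz;

	return (obj_addr + elt_sz > page_end) ? -1 : 0;
}

int
main(void)
{
	uint64_t page_sz = 2ULL << 20;   /* 2 MB hugepage */
	uintptr_t addr = page_sz - 1024; /* element starts 1 KB before the boundary */

	/* A 2304-byte element at this address straddles the boundary: prints -1 */
	printf("%d\n", obj_crosses_page(addr, page_sz, 2304));
	return 0;
}

The populate hunk below applies the same check per object and skips any
element that would straddle a page boundary.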

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
---
 lib/librte_mempool/rte_mempool.c             |  2 +-
 lib/librte_mempool/rte_mempool_ops_default.c | 33 ++++++++++++++++++++++++++--
 2 files changed, 32 insertions(+), 3 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..1c48325 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -339,7 +339,7 @@ rte_mempool_populate_iova(struct rte_mempool *mp, char *vaddr,
 	i = rte_mempool_ops_populate(mp, mp->size - mp->populated_size,
 		(char *)vaddr + off,
 		(iova == RTE_BAD_IOVA) ? RTE_BAD_IOVA : (iova + off),
-		len - off, mempool_add_elem, NULL);
+		len - off, mempool_add_elem, opaque);
 
 	/* not enough room to store one object */
 	if (i == 0) {
diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
index 4e2bfc8..85da264 100644
--- a/lib/librte_mempool/rte_mempool_ops_default.c
+++ b/lib/librte_mempool/rte_mempool_ops_default.c
@@ -45,19 +45,48 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
 	return mem_size;
 }
 
+/* Returns -1 if the object would cross a page boundary, else returns 0 */
+static inline int
+mempool_check_obj_bounds(void *obj, uint64_t hugepage_sz, size_t elt_sz)
+{
+	uintptr_t page_end, elt_addr = (uintptr_t)obj;
+	uint32_t pg_shift = rte_bsf32(hugepage_sz);
+	uint64_t page_mask;
+
+	page_mask =  ~((1ull << pg_shift) - 1);
+	page_end = (elt_addr & page_mask) + hugepage_sz;
+
+	if (elt_addr + elt_sz > page_end)
+		return -1;
+
+	return 0;
+}
+
 int
 rte_mempool_op_populate_default(struct rte_mempool *mp, unsigned int max_objs,
 		void *vaddr, rte_iova_t iova, size_t len,
 		rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
 {
-	size_t total_elt_sz;
-	size_t off;
+	struct rte_memzone *mz = obj_cb_arg;
+	size_t total_elt_sz, off;
 	unsigned int i;
 	void *obj;
 
 	total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
 
 	for (off = 0, i = 0; off + total_elt_sz <= len && i < max_objs; i++) {
+
+		/* Skip page boundary check if element is bigger than page */
+		if (mz->hugepage_sz >= total_elt_sz) {
+			if (mempool_check_obj_bounds((char *)vaddr + off,
+						    mz->hugepage_sz,
+						    total_elt_sz) < 0) {
+				i--; /* Decrement count & skip this obj */
+				off += total_elt_sz;
+				continue;
+			}
+		}
+
 		off += mp->header_size;
 		obj = (char *)vaddr + off;
 		obj_cb(mp, obj_cb_arg, obj,
-- 
2.8.4