From: Vamsi Attunuru <vattunuru@marvell.com>
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v9 1/5] mempool: populate mempool with page sized chunks of memory
Date: Mon, 29 Jul 2019 17:43:09 +0530
Message-ID: <20190729121313.30639-2-vattunuru@marvell.com>
In-Reply-To: <20190729121313.30639-1-vattunuru@marvell.com>
References: <20190723053821.30227-1-vattunuru@marvell.com>
 <20190729121313.30639-1-vattunuru@marvell.com>

From: Vamsi Attunuru <vattunuru@marvell.com>

Patch adds a routine to populate a mempool from page aligned and
page sized chunks of memory, ensuring that objects do not span
page boundaries. This is useful for applications that require
physically contiguous mbuf memory while running in IOVA=VA mode.
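
For context, a minimal usage sketch (illustrative only, not part of
the diff below): an application creates an empty mempool, selects a
mempool ops backend, then fills the pool with the new routine. The
pool name, object count/size and the "ring_mp_mc" ops choice are
example assumptions, not taken from this patch.

    /*
     * Sketch: build a pool whose objects never cross a page boundary
     * by populating it from page aligned, page sized memzone chunks.
     * Error handling is minimal; sizes are arbitrary example values.
     */
    #include <rte_mempool.h>

    static struct rte_mempool *
    example_create_pool(void)
    {
            struct rte_mempool *mp;

            /* 4096 objects of 2048 bytes, no cache, no private data */
            mp = rte_mempool_create_empty("example_pool", 4096, 2048,
                                          0, 0, SOCKET_ID_ANY, 0);
            if (mp == NULL)
                    return NULL;

            /* select a ring based ops backend before populating */
            if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) != 0)
                    goto fail;

            /* new routine: page aligned, page sized memzone chunks */
            if (rte_mempool_populate_from_pg_sz_chunks(mp) < 0)
                    goto fail;

            return mp;
    fail:
            rte_mempool_free(mp);
            return NULL;
    }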
Signed-off-by: Vamsi Attunuru
Signed-off-by: Kiran Kumar K
---
 lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
 lib/librte_mempool/rte_mempool.h           | 17 ++++++++
 lib/librte_mempool/rte_mempool_version.map |  1 +
 3 files changed, 82 insertions(+)

diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
index 7260ce0..00619bd 100644
--- a/lib/librte_mempool/rte_mempool.c
+++ b/lib/librte_mempool/rte_mempool.c
@@ -414,6 +414,70 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	return ret;
 }
 
+/* Function to populate mempool from page sized mem chunks, allocate page size
+ * of memory in memzone and populate them. Return the number of objects added,
+ * or a negative value on error.
+ */
+int
+rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+	size_t align, pg_sz, pg_shift;
+	const struct rte_memzone *mz;
+	unsigned int mz_id, n;
+	size_t min_chunk_size;
+	int ret;
+
+	ret = mempool_ops_alloc_once(mp);
+	if (ret != 0)
+		return ret;
+
+	if (mp->nb_mem_chunks != 0)
+		return -EEXIST;
+
+	pg_sz = get_min_page_size(mp->socket_id);
+	pg_shift = rte_bsf32(pg_sz);
+
+	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
+
+		ret = rte_mempool_ops_calc_mem_size(mp, n,
+				pg_shift, &min_chunk_size, &align);
+
+		if (ret < 0 || min_chunk_size > pg_sz)
+			goto fail;
+
+		ret = snprintf(mz_name, sizeof(mz_name),
+			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
+		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+			ret = -ENAMETOOLONG;
+			goto fail;
+		}
+
+		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
+				mp->socket_id, 0, align);
+
+		if (mz == NULL) {
+			ret = -rte_errno;
+			goto fail;
+		}
+
+		ret = rte_mempool_populate_iova(mp, mz->addr,
+				mz->iova, mz->len,
+				rte_mempool_memchunk_mz_free,
+				(void *)(uintptr_t)mz);
+		if (ret < 0) {
+			rte_memzone_free(mz);
+			goto fail;
+		}
+	}
+
+	return mp->size;
+
+fail:
+	rte_mempool_free_memchunks(mp);
+	return ret;
+}
+
 /* Default function to populate the mempool: allocate memory in memzones,
  * and populate them. Return the number of objects added, or a negative
  * value on error.
diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a..3046e4f 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1062,6 +1062,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
 	void *opaque);
 
 /**
+ * Add memory from page sized memzones for objects in the pool at init
+ *
+ * This is the function used to populate the mempool with page aligned and
+ * page sized memzone memory to avoid spreading object memory across two pages
+ * and to ensure all mempool objects reside on the page memory.
+ *
+ * @param mp
+ *   A pointer to the mempool structure.
+ * @return
+ *   The number of objects added on success.
+ *   On error, the chunk is not added in the memory list of the
+ *   mempool and a negative errno is returned.
+ */
+__rte_experimental
+int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
+
+/**
  * Add memory for objects in the pool at init
  *
  * This is the default function used by rte_mempool_create() to populate
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4..9a6fe65 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -57,4 +57,5 @@ EXPERIMENTAL {
 	global:
 
 	rte_mempool_ops_get_info;
+	rte_mempool_populate_from_pg_sz_chunks;
 };
-- 
2.8.4