From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Rybchenko
Date: Mon, 29 Jul 2019 15:41:48 +0300
Message-ID: <6d61077c-ff43-eab5-5907-fadf1d9b444b@solarflare.com>
In-Reply-To: <20190729121313.30639-2-vattunuru@marvell.com>
References: <20190723053821.30227-1-vattunuru@marvell.com>
 <20190729121313.30639-1-vattunuru@marvell.com>
 <20190729121313.30639-2-vattunuru@marvell.com>
Subject: Re: [dpdk-dev] [PATCH v9 1/5] mempool: populate mempool with the
 page sized chunks of memory
List-Id: DPDK patches and discussions

On 7/29/19 3:13 PM, vattunuru@marvell.com wrote:
> From: Vamsi Attunuru
>
> Patch adds a routine to populate a mempool from page-aligned and
> page-sized chunks of memory, to ensure memory objects do not fall
> across page boundaries. It's useful for applications that require
> physically contiguous mbuf memory while running in IOVA=VA mode.
>
> Signed-off-by: Vamsi Attunuru
> Signed-off-by: Kiran Kumar K

With the two issues below fixed:
Acked-by: Andrew Rybchenko

As I understand it, this is likely to be a temporary solution until
the problem is fixed in a generic way.
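To help readers of the archive, below is a minimal sketch of how I
would expect an application to use the new call to build an mbuf pool
whose objects never cross a page boundary. The helper name and the
pool parameters are illustrative assumptions on my side, not something
defined by this patch; the steps otherwise mirror what
rte_pktmbuf_pool_create() does, with only the populate stage swapped:

    #include <string.h>

    #include <rte_mbuf.h>
    #include <rte_mbuf_pool_ops.h>
    #include <rte_mempool.h>

    /* Illustrative helper (not part of the patch): create an mbuf pool
     * populated from page-aligned, page-sized memzones, so that no mbuf
     * straddles a page boundary (useful in IOVA=VA mode).
     */
    static struct rte_mempool *
    pktmbuf_pool_create_pg_bounded(const char *name, unsigned int n,
                                   int socket_id)
    {
            struct rte_pktmbuf_pool_private priv;
            struct rte_mempool *mp;

            memset(&priv, 0, sizeof(priv));
            priv.mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE;

            mp = rte_mempool_create_empty(name, n,
                    sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
                    0 /* cache_size */, sizeof(priv), socket_id, 0);
            if (mp == NULL)
                    return NULL;

            if (rte_mempool_set_ops_byname(mp, rte_mbuf_best_mempool_ops(),
                                           NULL) != 0)
                    goto fail;

            rte_pktmbuf_pool_init(mp, &priv);

            /* The populate step added by this patch: one page-aligned,
             * page-sized memzone per chunk. */
            if (rte_mempool_populate_from_pg_sz_chunks(mp) < 0)
                    goto fail;

            rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
            return mp;

    fail:
            rte_mempool_free(mp);
            return NULL;
    }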
> ---
>  lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool.h           | 17 ++++++++
>  lib/librte_mempool/rte_mempool_version.map |  1 +
>  3 files changed, 82 insertions(+)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7260ce0..00619bd 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -414,6 +414,70 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	return ret;
>  }
>
> +/* Function to populate mempool from page sized mem chunks, allocate page size
> + * of memory in memzone and populate them. Return the number of objects added,
> + * or a negative value on error.
> + */
> +int
> +rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
> +{
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	size_t align, pg_sz, pg_shift;
> +	const struct rte_memzone *mz;
> +	unsigned int mz_id, n;
> +	size_t min_chunk_size;
> +	int ret;
> +
> +	ret = mempool_ops_alloc_once(mp);
> +	if (ret != 0)
> +		return ret;
> +
> +	if (mp->nb_mem_chunks != 0)
> +		return -EEXIST;
> +
> +	pg_sz = get_min_page_size(mp->socket_id);
> +	pg_shift = rte_bsf32(pg_sz);
> +
> +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> +
> +		ret = rte_mempool_ops_calc_mem_size(mp, n,
> +			pg_shift, &min_chunk_size, &align);
> +
> +		if (ret < 0 || min_chunk_size > pg_sz)

If min_chunk_size is greater than pg_sz, ret is non-negative here and
the function returns success.
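The failure path needs its own error code in that case; a minimal
sketch of one possible fix (the choice of -EINVAL is my suggestion,
not something from the patch):

		ret = rte_mempool_ops_calc_mem_size(mp, n,
			pg_shift, &min_chunk_size, &align);
		if (ret < 0)
			goto fail;

		if (min_chunk_size > pg_sz) {
			/* Chunk cannot fit into a single page: report
			 * a real error instead of a non-negative ret. */
			ret = -EINVAL;
			goto fail;
		}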
> +			goto fail;
> +
> +		ret = snprintf(mz_name, sizeof(mz_name),
> +			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> +		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> +			ret = -ENAMETOOLONG;
> +			goto fail;
> +		}
> +
> +		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
> +			mp->socket_id, 0, align);
> +
> +		if (mz == NULL) {
> +			ret = -rte_errno;
> +			goto fail;
> +		}
> +
> +		ret = rte_mempool_populate_iova(mp, mz->addr,
> +			mz->iova, mz->len,
> +			rte_mempool_memchunk_mz_free,
> +			(void *)(uintptr_t)mz);
> +		if (ret < 0) {
> +			rte_memzone_free(mz);
> +			goto fail;
> +		}
> +	}
> +
> +	return mp->size;
> +
> +fail:
> +	rte_mempool_free_memchunks(mp);
> +	return ret;
> +}
> +
>  /* Default function to populate the mempool: allocate memory in memzones,
>   * and populate them. Return the number of objects added, or a negative
>   * value on error.
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 8053f7a..3046e4f 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1062,6 +1062,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	void *opaque);
>  
>  /**

  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.

is missing.

> + * Add memory from page sized memzones for objects in the pool at init
> + *
> + * This is the function used to populate the mempool with page aligned and
> + * page sized memzone memory to avoid spreading object memory across two pages
> + * and to ensure all mempool objects reside on the page memory.
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @return
> + *   The number of objects added on success.
> + *   On error, the chunk is not added in the memory list of the
> + *   mempool and a negative errno is returned.
> + */
> +__rte_experimental
> +int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
> +
> +/**
>   * Add memory for objects in the pool at init
>   *
>   * This is the default function used by rte_mempool_create() to populate
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 17cbca4..9a6fe65 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -57,4 +57,5 @@ EXPERIMENTAL {
>  	global:
>  
>  	rte_mempool_ops_get_info;
> +	rte_mempool_populate_from_pg_sz_chunks;
>  };
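For completeness, with the experimental banner added as requested
above, the new declaration in rte_mempool.h would read (this is just
the patch's own doc text plus the two missing lines):

    /**
     * @warning
     * @b EXPERIMENTAL: this API may change without prior notice.
     *
     * Add memory from page sized memzones for objects in the pool at init
     *
     * This is the function used to populate the mempool with page aligned and
     * page sized memzone memory to avoid spreading object memory across two pages
     * and to ensure all mempool objects reside on the page memory.
     *
     * @param mp
     *   A pointer to the mempool structure.
     * @return
     *   The number of objects added on success.
     *   On error, the chunk is not added in the memory list of the
     *   mempool and a negative errno is returned.
     */
    __rte_experimental
    int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);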