From: Andrew Rybchenko
Subject: Re: [dpdk-dev] [PATCH v8 1/5] mempool: populate mempool with page sized chunks of memory
Date: Tue, 23 Jul 2019 14:08:11 +0300
Message-ID: <4b9cec50-348a-3359-04ee-3b567b49aa9f@solarflare.com>
In-Reply-To: <20190723053821.30227-2-vattunuru@marvell.com>
References: <20190717090408.13717-1-vattunuru@marvell.com>
 <20190723053821.30227-1-vattunuru@marvell.com>
 <20190723053821.30227-2-vattunuru@marvell.com>
List-Id: DPDK patches and discussions

On 7/23/19 8:38 AM, vattunuru@marvell.com wrote:
> From: Vamsi Attunuru
>
> Patch adds a routine to populate the mempool from page-aligned and
> page-sized chunks of memory, to ensure memory objects do not fall
> across page boundaries. It is useful for applications that require
> physically contiguous mbuf memory while running in IOVA=VA mode.
>
> Signed-off-by: Vamsi Attunuru
> Signed-off-by: Kiran Kumar K
> ---
>  lib/librte_mempool/rte_mempool.c           | 59 ++++++++++++++++++++++++++++++
>  lib/librte_mempool/rte_mempool.h           | 17 +++++++++
>  lib/librte_mempool/rte_mempool_version.map |  1 +
>  3 files changed, 77 insertions(+)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 7260ce0..5312c8f 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -414,6 +414,65 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  	return ret;
>  }
>
> +/* Function to populate the mempool from page-sized memory chunks: allocate
> + * page-sized memzones and populate them. Return the number of objects added,
> + * or a negative value on error.
> + */
> +int
> +rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
> +{
> +	char mz_name[RTE_MEMZONE_NAMESIZE];
> +	size_t align, pg_sz, pg_shift;
> +	const struct rte_memzone *mz;
> +	unsigned int mz_id, n;
> +	size_t chunk_size;

I think it would be better to keep the min_chunk_size name here;
it would make the code easier to read and understand.

> +	int ret;
> +
> +	ret = mempool_ops_alloc_once(mp);
> +	if (ret != 0)
> +		return ret;
> +
> +	if (mp->nb_mem_chunks != 0)
> +		return -EEXIST;
> +
> +	pg_sz = get_min_page_size(mp->socket_id);
> +	pg_shift = rte_bsf32(pg_sz);
> +
> +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> +
> +		rte_mempool_op_calc_mem_size_default(mp, n, pg_shift,
> +				&chunk_size, &align);

It is incorrect to ignore the mempool's ops and enforce the default
handler here; use rte_mempool_ops_calc_mem_size() instead. It is also
better to treat a negative return value as an error, as the default
function does. (It may be my mistake that the return value description
does not mention this.)

> +
> +		if (chunk_size > pg_sz)
> +			goto fail;
> +
> +		ret = snprintf(mz_name, sizeof(mz_name),
> +			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> +		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> +			ret = -ENAMETOOLONG;
> +			goto fail;
> +		}
> +
> +		mz = rte_memzone_reserve_aligned(mz_name, chunk_size,
> +			mp->socket_id, 0, align);

A NULL return value must be handled here.

> +
> +		ret = rte_mempool_populate_iova(mp, mz->addr,
> +				mz->iova, mz->len,
> +				rte_mempool_memchunk_mz_free,
> +				(void *)(uintptr_t)mz);
> +		if (ret < 0) {
> +			rte_memzone_free(mz);
> +			goto fail;
> +		}
> +	}
> +
> +	return mp->size;
> +
> +fail:
> +	rte_mempool_free_memchunks(mp);
> +	return ret;
> +}
> +
>  /* Default function to populate the mempool: allocate memory in memzones,
>   * and populate them. Return the number of objects added, or a negative
>   * value on error.
> diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> index 8053f7a..73d6ada 100644
> --- a/lib/librte_mempool/rte_mempool.h
> +++ b/lib/librte_mempool/rte_mempool.h
> @@ -1064,6 +1064,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
>  /**
>   * Add memory for objects in the pool at init

The difference from the default must be highlighted in the summary.

>   *
> + * This is the function used to populate the mempool with page-aligned memzone
> + * memory. It ensures all mempool objects stay within a page by allocating
> + * page-sized memzones.
> + *
> + * @param mp
> + *   A pointer to the mempool structure.
> + * @return
> + *   The number of objects added on success.
> + *   On error, the chunk is not added to the memory list of the
> + *   mempool and a negative errno is returned.
> + */
> +__rte_experimental
> +int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
> +
> +/**
> + * Add memory for objects in the pool at init
> + *
>   * This is the default function used by rte_mempool_create() to populate
>   * the mempool. It adds memory allocated using rte_memzone_reserve().
>   *
> diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
> index 17cbca4..9a6fe65 100644
> --- a/lib/librte_mempool/rte_mempool_version.map
> +++ b/lib/librte_mempool/rte_mempool_version.map
> @@ -57,4 +57,5 @@ EXPERIMENTAL {
>  	global:
>
>  	rte_mempool_ops_get_info;
> +	rte_mempool_populate_from_pg_sz_chunks;
>  };
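
To illustrate the points above, here is a minimal sketch (in the context
of rte_mempool.c, so the internal mempool_ops_alloc_once() and
get_min_page_size() helpers are visible) of how the function could look
with the comments applied: the pool's ops used for the size calculation,
a negative return from it treated as an error, and the NULL memzone
return handled. The mem_size variable and the -EINVAL / -rte_errno error
codes are only illustrative suggestions, not something the patch defines.

	int
	rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
	{
		char mz_name[RTE_MEMZONE_NAMESIZE];
		size_t min_chunk_size, align, pg_sz, pg_shift;
		const struct rte_memzone *mz;
		unsigned int mz_id, n;
		ssize_t mem_size;
		int ret;

		ret = mempool_ops_alloc_once(mp);
		if (ret != 0)
			return ret;

		if (mp->nb_mem_chunks != 0)
			return -EEXIST;

		pg_sz = get_min_page_size(mp->socket_id);
		pg_shift = rte_bsf32(pg_sz);

		for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
			/* Use the pool's ops instead of enforcing the
			 * default handler, and treat a negative return
			 * value as an error.
			 */
			mem_size = rte_mempool_ops_calc_mem_size(mp, n,
					pg_shift, &min_chunk_size, &align);
			if (mem_size < 0) {
				ret = mem_size;
				goto fail;
			}

			/* Objects must not span a page boundary. */
			if (min_chunk_size > pg_sz) {
				ret = -EINVAL; /* illustrative errno choice */
				goto fail;
			}

			ret = snprintf(mz_name, sizeof(mz_name),
				RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
			if (ret < 0 || ret >= (int)sizeof(mz_name)) {
				ret = -ENAMETOOLONG;
				goto fail;
			}

			mz = rte_memzone_reserve_aligned(mz_name,
					min_chunk_size, mp->socket_id,
					0, align);
			/* Reservation may fail; never dereference NULL. */
			if (mz == NULL) {
				ret = -rte_errno;
				goto fail;
			}

			ret = rte_mempool_populate_iova(mp, mz->addr,
					mz->iova, mz->len,
					rte_mempool_memchunk_mz_free,
					(void *)(uintptr_t)mz);
			if (ret < 0) {
				rte_memzone_free(mz);
				goto fail;
			}
		}

		return mp->size;

	fail:
		rte_mempool_free_memchunks(mp);
		return ret;
	}

Keeping the min_chunk_size name here also matches the other populate
helpers, which addresses the naming comment above.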