Date: Mon, 28 Oct 2019 15:06:49 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: Vamsi Krishna Attunuru, dev@dpdk.org, Thomas Monjalon,
 Anatoly Burakov, Jerin Jacob Kollanukkaran, Kokkilagadda,
 Ferruh Yigit
Message-ID: <20191028140649.lrvixolaarjqfuv6@platinum>
References: <20190719133845.32432-1-olivier.matz@6wind.com>
 <20190719133845.32432-4-olivier.matz@6wind.com>
 <37519783-13aa-6854-5ff3-84d21e35fe97@solarflare.com>
In-Reply-To: <37519783-13aa-6854-5ff3-84d21e35fe97@solarflare.com>
Subject: Re: [dpdk-dev] [RFC 3/4] mempool: introduce function to get mempool page size

On Wed, Aug 07, 2019 at 06:21:41PM +0300, Andrew Rybchenko wrote:
> On 7/19/19 4:38 PM, Olivier Matz wrote:
> > In rte_mempool_populate_default(), we determine the page size,
> > which is needed for calc_size and allocation of memory.
> >
> > Move this in a function and export it, it will be used in next
> > commit.
>
> The major change here is taking page sizes into account even in the
> case of RTE_IOVA_VA.

The patch should not change any functional thing. I've fixed it in
the new version.
> As I understand it is the main problem initially
> that we should respect page boundaries even in RTE_IOVA_VA case.
> It looks like we can have it even without removal of contiguous
> allocation attempt.

That's true, but what would be the advantage? Removing it makes the
default populate function simpler, and helps to shorten the ~50 lines
of explanations in the comments :)

> > Signed-off-by: Olivier Matz
> > ---
> >  lib/librte_mempool/rte_mempool.c | 50 +++++++++++++++++++++++++---------------
> >  lib/librte_mempool/rte_mempool.h |  6 +++++
> >  2 files changed, 37 insertions(+), 19 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index 335032dc8..7def0ba68 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -414,6 +414,32 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> >  	return ret;
> >  }
> >
> > +int
> > +rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz)
> > +{
> > +	bool need_iova_contig_obj;
> > +	bool alloc_in_ext_mem;
> > +	int ret;
> > +
> > +	/* check if we can retrieve a valid socket ID */
> > +	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
> > +	if (ret < 0)
> > +		return -EINVAL;
> > +	alloc_in_ext_mem = (ret == 1);
> > +	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > +
> > +	if (!need_iova_contig_obj)
> > +		*pg_sz = 0;
> > +	else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA)
> > +		*pg_sz = get_min_page_size(mp->socket_id);
> > +	else if (rte_eal_has_hugepages() || alloc_in_ext_mem)
> > +		*pg_sz = get_min_page_size(mp->socket_id);
> > +	else
> > +		*pg_sz = getpagesize();
> > +
> > +	return 0;
> > +}
> > +
> >  /* Default function to populate the mempool: allocate memory in memzones,
> >   * and populate them. Return the number of objects added, or a negative
> >   * value on error.
> > @@ -425,12 +451,11 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	char mz_name[RTE_MEMZONE_NAMESIZE];
> >  	const struct rte_memzone *mz;
> >  	ssize_t mem_size;
> > -	size_t align, pg_sz, pg_shift;
> > +	size_t align, pg_sz, pg_shift = 0;
> >  	rte_iova_t iova;
> >  	unsigned mz_id, n;
> >  	int ret;
> >  	bool need_iova_contig_obj;
> > -	bool alloc_in_ext_mem;
> >
> >  	ret = mempool_ops_alloc_once(mp);
> >  	if (ret != 0)
> > @@ -482,26 +507,13 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	 * synonymous with IOVA contiguousness will not hold.
> >  	 */
> >
> > -	/* check if we can retrieve a valid socket ID */
> > -	ret = rte_malloc_heap_socket_is_external(mp->socket_id);
> > -	if (ret < 0)
> > -		return -EINVAL;
> > -	alloc_in_ext_mem = (ret == 1);
> >  	need_iova_contig_obj = !(mp->flags & MEMPOOL_F_NO_IOVA_CONTIG);
> > +	ret = rte_mempool_get_page_size(mp, &pg_sz);
> > +	if (ret < 0)
> > +		return ret;
> >
> > -	if (!need_iova_contig_obj) {
> > -		pg_sz = 0;
> > -		pg_shift = 0;
> > -	} else if (!alloc_in_ext_mem && rte_eal_iova_mode() == RTE_IOVA_VA) {
> > -		pg_sz = 0;
> > -		pg_shift = 0;
> > -	} else if (rte_eal_has_hugepages() || alloc_in_ext_mem) {
> > -		pg_sz = get_min_page_size(mp->socket_id);
> > -		pg_shift = rte_bsf32(pg_sz);
> > -	} else {
> > -		pg_sz = getpagesize();
> > +	if (pg_sz != 0)
> >  		pg_shift = rte_bsf32(pg_sz);
> > -	}
> >
> >  	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> >  		size_t min_chunk_size;
> > diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
> > index 7bc10e699..00b927989 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1692,6 +1692,12 @@ uint32_t rte_mempool_calc_obj_size(uint32_t elt_size, uint32_t flags,
> >  void rte_mempool_walk(void (*func)(struct rte_mempool *, void *arg),
> >  		      void *arg);
> >
> > +/**
> > + * @internal Get page size used for mempool object allocation.
> > + */
> > +int
> > +rte_mempool_get_page_size(struct rte_mempool *mp, size_t *pg_sz);
> > +
> >  #ifdef __cplusplus
> >  }
> >  #endif
> >