From: Andrew Rybchenko <arybchenko@solarflare.com>
To: Olivier Matz <olivier.matz@6wind.com>, <dev@dpdk.org>
Cc: Anatoly Burakov <anatoly.burakov@intel.com>, <stable@dpdk.org>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] mempool: fix slow allocation of large mempools
Date: Fri, 10 Jan 2020 12:53:24 +0300
Message-ID: <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>
In-Reply-To: <20200109132742.15828-1-olivier.matz@6wind.com>

On 1/9/20 4:27 PM, Olivier Matz wrote:
> When allocating a mempool which is larger than the largest
> available area, it can take a lot of time:
>
> a- the mempool calculates the required memory size and tries
>    to allocate it; this fails
> b- it then tries to allocate the largest available area (this
>    does not request new huge pages)
> c- this zone is added to the mempool, which triggers the allocation
>    of a mem hdr, which in turn requests a new huge page
> d- back to a- until the mempool is populated or there is no
>    more memory
>
> This can take a lot of time before it finally fails (several minutes):
> in step a- it grabs all available hugepages on the system, then
> releases them after the failure.
>
> The problem appeared with commit eba11e364614 ("mempool: reduce wasted
> space on populate"), because smaller chunks are now allowed. Previously,
> a chunk had to be at least one page in size, which is not the case in
> step b-.
>
> To fix this, implement our own way to allocate the largest available
> area instead of using the feature from memzone: if an allocation fails,
> try to divide the size by 2 and retry. When the requested size falls
> below min_chunk_size, stop and return an error.
>
> Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>

LGTM except for the already mentioned bug: the mz == NULL check is
missing from the retry loop condition. Plus one minor question below.

> ---
>  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
>  1 file changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index bda361ce6..03c8d984c 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -481,6 +481,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>          unsigned mz_id, n;
>          int ret;
>          bool need_iova_contig_obj;
> +        size_t max_alloc_size = SIZE_MAX;
>
>          ret = mempool_ops_alloc_once(mp);
>          if (ret != 0)
> @@ -560,30 +561,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>                  if (min_chunk_size == (size_t)mem_size)
>                          mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
>
> -                mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> +                /* Allocate a memzone, retrying with a smaller area on ENOMEM */
> +                do {
> +                        mz = rte_memzone_reserve_aligned(mz_name,
> +                                RTE_MIN((size_t)mem_size, max_alloc_size),
>                                  mp->socket_id, mz_flags, align);
>
> -                /* don't try reserving with 0 size if we were asked to reserve
> -                 * IOVA-contiguous memory.
> -                 */
> -                if (min_chunk_size < (size_t)mem_size && mz == NULL) {
> -                        /* not enough memory, retry with the biggest zone we
> -                         * have
> -                         */
> -                        mz = rte_memzone_reserve_aligned(mz_name, 0,
> -                                mp->socket_id, mz_flags, align);
> -                }
> +                        if (mz == NULL && rte_errno != ENOMEM)
> +                                break;
> +
> +                        max_alloc_size = RTE_MIN(max_alloc_size,
> +                                        (size_t)mem_size) / 2;

Does it make sense to make max_alloc_size a multiple of min_chunk_size
here? I think it could help to waste less memory space. (A rough sketch
of the rounding I have in mind follows the quoted hunk below.)
> +                } while (max_alloc_size >= min_chunk_size);
> +
>                  if (mz == NULL) {
>                          ret = -rte_errno;
>                          goto fail;
>                  }
>
> -                if (mz->len < min_chunk_size) {
> -                        rte_memzone_free(mz);
> -                        ret = -ENOMEM;
> -                        goto fail;
> -                }
> -
>                  if (need_iova_contig_obj)
>                          iova = mz->iova;
>                  else
>
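A rough sketch of the rounding I have in mind, purely illustrative (the
next_alloc_size() helper and its name are hypothetical, not something in
the patch). Plain modulo arithmetic is used because min_chunk_size is
not necessarily a power of two:

#include <stddef.h>

#include <rte_common.h>        /* RTE_MIN() */

/*
 * Hypothetical helper: compute the next, smaller reservation size.
 * It halves the previous size exactly as the patch does, then rounds
 * the result down to a whole number of min_chunk_size chunks so that
 * less of the reserved zone is left unusable.
 */
static size_t
next_alloc_size(size_t max_alloc_size, size_t mem_size,
                size_t min_chunk_size)
{
        size_t sz = RTE_MIN(max_alloc_size, mem_size) / 2;

        /* round down to a multiple of min_chunk_size */
        sz -= sz % min_chunk_size;

        /*
         * Once sz drops below min_chunk_size this yields 0, so the
         * existing "while (... >= min_chunk_size)" loop condition
         * still terminates the retries.
         */
        return sz;
}

In rte_mempool_populate_default() the halving step would then become
something like:

        max_alloc_size = next_alloc_size(max_alloc_size,
                (size_t)mem_size, min_chunk_size);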
Thread overview: 11+ messages

2020-01-09 13:27 [dpdk-stable] " Olivier Matz
2020-01-09 13:57 ` Burakov, Anatoly
2020-01-09 16:06 ` [dpdk-stable] [dpdk-dev] " Ali Alnubani
2020-01-09 17:27 ` Olivier Matz
2020-01-10  9:53 ` Andrew Rybchenko [this message]
2020-01-17  8:45 ` Olivier Matz
2020-01-17  9:51 ` [dpdk-stable] [PATCH v2] " Olivier Matz
2020-01-17 10:01 ` [dpdk-stable] [dpdk-dev] " Olivier Matz
2020-01-17 10:09 ` Andrew Rybchenko
2020-01-20 10:12 ` Thomas Monjalon
2020-01-19 12:29 ` [dpdk-stable] " Ali Alnubani