DPDK patches and discussions
From: Olivier Matz <olivier.matz@6wind.com>
To: Andrew Rybchenko <arybchenko@solarflare.com>
Cc: dev@dpdk.org, Anatoly Burakov <anatoly.burakov@intel.com>,
	stable@dpdk.org
Subject: Re: [dpdk-dev] [PATCH] mempool: fix slow allocation of large mempools
Date: Fri, 17 Jan 2020 09:45:45 +0100	[thread overview]
Message-ID: <20200117084545.GW22738@platinum> (raw)
In-Reply-To: <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>

Hi,

On Fri, Jan 10, 2020 at 12:53:24PM +0300, Andrew Rybchenko wrote:
> On 1/9/20 4:27 PM, Olivier Matz wrote:
> > When allocating a mempool which is larger than the largest
> > available area, it can take a lot of time:
> > 
> > a- the mempool calculates the required memory size and tries
> >    to allocate it; it fails
> > b- then it tries to allocate the largest available area (this
> >    does not request new huge pages)
> > c- add this zone to the mempool; this triggers the allocation
> >    of a mem hdr, which requests a new huge page
> > d- back to a- until the mempool is populated or until there is
> >    no more memory
> > 
> > This can take a long time before it finally fails (several minutes):
> > in step a- it takes all available hugepages on the system, then
> > releases them after the failure.
> > 
> > The problem appeared with commit eba11e364614 ("mempool: reduce wasted
> > space on populate"), because smaller chunks are now allowed. Previously,
> > a chunk had to be at least one page in size, which is not the case in
> > step b-.
> > 
> > To fix this, implement our own way to allocate the largest available
> > area instead of using the feature from memzone: if an allocation fails,
> > try to divide the size by 2 and retry. When the requested size falls
> > below min_chunk_size, stop and return an error.
> > 
> > Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> LGTM except for the already mentioned bug with the missing mz == NULL
> check in the retry loop.
> Plus one minor question below.
> 
> > ---
> >  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
> >  1 file changed, 12 insertions(+), 17 deletions(-)
> > 
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index bda361ce6..03c8d984c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -481,6 +481,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	unsigned mz_id, n;
> >  	int ret;
> >  	bool need_iova_contig_obj;
> > +	size_t max_alloc_size = SIZE_MAX;
> >  
> >  	ret = mempool_ops_alloc_once(mp);
> >  	if (ret != 0)
> > @@ -560,30 +561,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  		if (min_chunk_size == (size_t)mem_size)
> >  			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
> >  
> > -		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > +		/* Allocate a memzone, retrying with a smaller area on ENOMEM */
> > +		do {
> > +			mz = rte_memzone_reserve_aligned(mz_name,
> > +				RTE_MIN((size_t)mem_size, max_alloc_size),
> >  				mp->socket_id, mz_flags, align);
> >  
> > -		/* don't try reserving with 0 size if we were asked to reserve
> > -		 * IOVA-contiguous memory.
> > -		 */
> > -		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
> > -			/* not enough memory, retry with the biggest zone we
> > -			 * have
> > -			 */
> > -			mz = rte_memzone_reserve_aligned(mz_name, 0,
> > -					mp->socket_id, mz_flags, align);
> > -		}
> > +			if (mz == NULL && rte_errno != ENOMEM)
> > +				break;
> > +
> > +			max_alloc_size = RTE_MIN(max_alloc_size,
> > +						(size_t)mem_size) / 2;
> 
> Does it make sense to make max_alloc_size a multiple of
> min_chunk_size here? I think it could help waste less
> memory.

I don't think it's worth doing: I agree it could avoid wasting space,
but the waste is only significant if max_alloc_size is on the same
order of magnitude as min_chunk_size, and that would only happen when
we are already running out of memory.

Also, since populate_virt() skips page boundaries, keeping
a multiple of min_chunk_size may not make sense in that case.


> 
> > +		} while (max_alloc_size >= min_chunk_size);
> > +
> >  		if (mz == NULL) {
> >  			ret = -rte_errno;
> >  			goto fail;
> >  		}
> >  
> > -		if (mz->len < min_chunk_size) {
> > -			rte_memzone_free(mz);
> > -			ret = -ENOMEM;
> > -			goto fail;
> > -		}
> > -
> >  		if (need_iova_contig_obj)
> >  			iova = mz->iova;
> >  		else
> > 
> 

Thread overview: 11+ messages
2020-01-09 13:27 Olivier Matz
2020-01-09 13:57 ` Burakov, Anatoly
2020-01-09 16:06 ` Ali Alnubani
2020-01-09 17:27   ` Olivier Matz
2020-01-10  9:53 ` Andrew Rybchenko
2020-01-17  8:45   ` Olivier Matz [this message]
2020-01-17  9:51 ` [dpdk-dev] [PATCH v2] " Olivier Matz
2020-01-17 10:01   ` Olivier Matz
2020-01-17 10:09     ` Andrew Rybchenko
2020-01-20 10:12       ` Thomas Monjalon
2020-01-19 12:29   ` Ali Alnubani
