From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <stable-bounces@dpdk.org>
Date: Fri, 17 Jan 2020 09:45:45 +0100
From: Olivier Matz <olivier.matz@6wind.com>
To: Andrew Rybchenko <arybchenko@solarflare.com>
Cc: dev@dpdk.org, Anatoly Burakov <anatoly.burakov@intel.com>, stable@dpdk.org
Message-ID: <20200117084545.GW22738@platinum>
References: <20200109132742.15828-1-olivier.matz@6wind.com>
 <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH] mempool: fix slow allocation
	of large mempools

Hi,

On Fri, Jan 10, 2020 at 12:53:24PM +0300, Andrew Rybchenko wrote:
> On 1/9/20 4:27 PM, Olivier Matz wrote:
> > When allocating a mempool which is larger than the largest
> > available area, it can take a lot of time:
> > 
> > a- the mempool calculates the required memory size and tries
> >    to allocate it; it fails
> > b- then it tries to allocate the largest available area (this
> >    does not request new huge pages)
> > c- this zone is added to the mempool, which triggers the
> >    allocation of a mem hdr, which requests a new huge page
> > d- back to a- until the mempool is populated or there is no
> >    more memory
> > 
> > This can take a lot of time to finally fail (several minutes): in step
> > a-, it takes all the available hugepages on the system, then releases
> > them after it fails.
> > 
> > The problem appeared with commit eba11e364614 ("mempool: reduce wasted
> > space on populate"), because smaller chunks are now allowed. Previously,
> > a chunk had to be at least one page in size, which is not the case in
> > step b-.
> > 
> > To fix this, implement our own way to allocate the largest available
> > area instead of using the feature from memzone: if an allocation fails,
> > try to divide the size by 2 and retry. When the requested size falls
> > below min_chunk_size, stop and return an error.
> > 
> > Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
> > Cc: stable@dpdk.org
> > 
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> 
> LGTM except for the already mentioned bug with the missing mz == NULL
> check in the retry loop. Plus one minor question below.
> 
> > ---
> >  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
> >  1 file changed, 12 insertions(+), 17 deletions(-)
> > 
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index bda361ce6..03c8d984c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -481,6 +481,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	unsigned mz_id, n;
> >  	int ret;
> >  	bool need_iova_contig_obj;
> > +	size_t max_alloc_size = SIZE_MAX;
> >  
> >  	ret = mempool_ops_alloc_once(mp);
> >  	if (ret != 0)
> > @@ -560,30 +561,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  		if (min_chunk_size == (size_t)mem_size)
> >  			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
> >  
> > -		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > +		/* Allocate a memzone, retrying with a smaller area on ENOMEM */
> > +		do {
> > +			mz = rte_memzone_reserve_aligned(mz_name,
> > +				RTE_MIN((size_t)mem_size, max_alloc_size),
> >  				mp->socket_id, mz_flags, align);
> >  
> > -		/* don't try reserving with 0 size if we were asked to reserve
> > -		 * IOVA-contiguous memory.
> > -		 */
> > -		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
> > -			/* not enough memory, retry with the biggest zone we
> > -			 * have
> > -			 */
> > -			mz = rte_memzone_reserve_aligned(mz_name, 0,
> > -					mp->socket_id, mz_flags, align);
> > -		}
> > +			if (mz == NULL && rte_errno != ENOMEM)
> > +				break;
> > +
> > +			max_alloc_size = RTE_MIN(max_alloc_size,
> > +						(size_t)mem_size) / 2;
> 
> Does it make sense to make max_alloc_size multiple of
> min_chunk_size here? I think it could help to waste less
> memory space.

I don't think it's worth it: I agree it could avoid wasting space, but
that is only significant if max_alloc_size is of the same order of
magnitude as min_chunk_size, and this would only happen when we are
running out of memory.

Also, as populate_virt() will skip page boundaries, keeping
a multiple of min_chunk_size may not make sense in that case.


> 
> > +		} while (max_alloc_size >= min_chunk_size);
> > +
> >  		if (mz == NULL) {
> >  			ret = -rte_errno;
> >  			goto fail;
> >  		}
> >  
> > -		if (mz->len < min_chunk_size) {
> > -			rte_memzone_free(mz);
> > -			ret = -ENOMEM;
> > -			goto fail;
> > -		}
> > -
> >  		if (need_iova_contig_obj)
> >  			iova = mz->iova;
> >  		else
> > 
>