From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 17 Jan 2020 09:45:45 +0100
From: Olivier Matz
To: Andrew Rybchenko
Cc: dev@dpdk.org, Anatoly Burakov, stable@dpdk.org
Message-ID: <20200117084545.GW22738@platinum>
References: <20200109132742.15828-1-olivier.matz@6wind.com>
 <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>
In-Reply-To: <634e9e97-68d1-ce5f-9825-a400fce8c185@solarflare.com>
Subject: Re: [dpdk-dev] [PATCH] mempool: fix slow allocation of large mempools
List-Id: DPDK patches and discussions

Hi,

On Fri, Jan 10, 2020 at 12:53:24PM +0300, Andrew Rybchenko wrote:
> On 1/9/20 4:27 PM, Olivier Matz wrote:
> > When allocating a mempool which is larger than the largest
> > available area, it can take a lot of time:
> >
> > a- the mempool calculates the required memory size and tries
> >    to allocate it; it fails
> > b- then it tries to allocate the largest available area (this
> >    does not request new huge pages)
> > c- add this zone to the mempool; this triggers the allocation
> >    of a mem hdr, which requests a new huge page
> > d- back to a- until the mempool is populated or until there is
> >    no more memory
> >
> > This can take a lot of time to finally fail (several minutes): in
> > step a- it takes all
> > available hugepages on the system, then releases them
> > after it fails.
> >
> > The problem appeared with commit eba11e364614 ("mempool: reduce wasted
> > space on populate"), because smaller chunks are now allowed. Previously,
> > it had to be at least one page size, which is not the case in step b-.
> >
> > To fix this, implement our own way to allocate the largest available
> > area instead of using the feature from memzone: if an allocation fails,
> > try to divide the size by 2 and retry. When the requested size falls
> > below min_chunk_size, stop and return an error.
> >
> > Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Olivier Matz
>
> LGTM except for the already mentioned bug: the missing mz == NULL in
> the retry loop condition. Plus one minor question below.
>
> > ---
> >  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
> >  1 file changed, 12 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > index bda361ce6..03c8d984c 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -481,6 +481,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  	unsigned mz_id, n;
> >  	int ret;
> >  	bool need_iova_contig_obj;
> > +	size_t max_alloc_size = SIZE_MAX;
> >
> >  	ret = mempool_ops_alloc_once(mp);
> >  	if (ret != 0)
> > @@ -560,30 +561,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> >  		if (min_chunk_size == (size_t)mem_size)
> >  			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
> >
> > -		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > +		/* Allocate a memzone, retrying with a smaller area on ENOMEM */
> > +		do {
> > +			mz = rte_memzone_reserve_aligned(mz_name,
> > +				RTE_MIN((size_t)mem_size, max_alloc_size),
> >  				mp->socket_id, mz_flags, align);
> >
> > -		/* don't try reserving with 0 size if we were asked to reserve
> > -		 * IOVA-contiguous memory.
> > -		 */
> > -		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
> > -			/* not enough memory, retry with the biggest zone we
> > -			 * have
> > -			 */
> > -			mz = rte_memzone_reserve_aligned(mz_name, 0,
> > -				mp->socket_id, mz_flags, align);
> > -		}
> > +			if (mz == NULL && rte_errno != ENOMEM)
> > +				break;
> > +
> > +			max_alloc_size = RTE_MIN(max_alloc_size,
> > +				(size_t)mem_size) / 2;
>
> Does it make sense to make max_alloc_size a multiple of
> min_chunk_size here? I think it could help to waste less
> memory space.

I don't think it's worth doing: I agree it could avoid wasting space,
but it is only significant if max_alloc_size is of the same order of
magnitude as min_chunk_size, and that would only happen when we are
running out of memory. Also, as populate_virt() will skip page
boundaries, keeping a multiple of min_chunk_size may not make sense in
that case.

> > +		} while (max_alloc_size >= min_chunk_size);
> > +
> >  		if (mz == NULL) {
> >  			ret = -rte_errno;
> >  			goto fail;
> >  		}
> >
> > -		if (mz->len < min_chunk_size) {
> > -			rte_memzone_free(mz);
> > -			ret = -ENOMEM;
> > -			goto fail;
> > -		}
> > -
> >  		if (need_iova_contig_obj)
> >  			iova = mz->iova;
> >  		else
> >