Date: Fri, 17 Jan 2020 11:01:17 +0100
From: Olivier Matz
To: dev@dpdk.org
Cc: Ali Alnubani, Anatoly Burakov, Andrew Rybchenko, Raslan Darawsheh,
 stable@dpdk.org, Thomas Monjalon
Message-ID: <20200117100117.GX22738@platinum>
References: <20200109132742.15828-1-olivier.matz@6wind.com>
 <20200117095149.23107-1-olivier.matz@6wind.com>
In-Reply-To: <20200117095149.23107-1-olivier.matz@6wind.com>
Subject: Re: [dpdk-stable] [dpdk-dev] [PATCH v2] mempool: fix slow allocation
 of large mempools

On Fri, Jan 17, 2020 at 10:51:49AM +0100, Olivier Matz wrote:
> When allocating a mempool which is larger than the largest
> available area, it can take a lot of time:
>
> a- the mempool calculates the required memory size and tries
>    to allocate it; it fails
> b- then it tries to allocate the largest available area (this
>    does not request new huge pages)
> c- add this zone to the mempool; this triggers the allocation
>    of a mem hdr, which requests a new huge page
> d- back to a- until the mempool is populated or until there is no
>    more memory
>
> This can take a lot of time to finally fail (several minutes): in step
> a- it takes all available hugepages on the system, then releases them
> after it fails.
>
> The problem appeared with commit eba11e364614 ("mempool: reduce wasted
> space on populate"), because smaller chunks are now allowed. Previously,
> a chunk had to be at least one page size, which is not the case in step b-.
>
> To fix this, implement our own way to allocate the largest available
> area instead of using the feature from memzone: if an allocation fails,
> try to divide the size by 2 and retry. When the requested size falls
> below min_chunk_size, stop and return an error.
>
> Fixes: eba11e364614 ("mempool: reduce wasted space on populate")
> Cc: stable@dpdk.org
>
> Signed-off-by: Olivier Matz

Sorry, I forgot to report Anatoly's ack on v1:
http://patchwork.dpdk.org/patch/64370/

Acked-by: Anatoly Burakov

> ---
>
> v2:
> * fix missing check on mz == NULL condition
>
>  lib/librte_mempool/rte_mempool.c | 29 ++++++++++++-----------------
>  1 file changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> index 1eae10e27..a68a69040 100644
> --- a/lib/librte_mempool/rte_mempool.c
> +++ b/lib/librte_mempool/rte_mempool.c
> @@ -499,6 +499,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  	unsigned mz_id, n;
>  	int ret;
>  	bool need_iova_contig_obj;
> +	size_t max_alloc_size = SIZE_MAX;
>
>  	ret = mempool_ops_alloc_once(mp);
>  	if (ret != 0)
> @@ -578,30 +579,24 @@ rte_mempool_populate_default(struct rte_mempool *mp)
>  		if (min_chunk_size == (size_t)mem_size)
>  			mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
>
> -		mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> +		/* Allocate a memzone, retrying with a smaller area on ENOMEM */
> +		do {
> +			mz = rte_memzone_reserve_aligned(mz_name,
> +				RTE_MIN((size_t)mem_size, max_alloc_size),
>  				mp->socket_id, mz_flags, align);
>
> -		/* don't try reserving with 0 size if we were asked to reserve
> -		 * IOVA-contiguous memory.
> -		 */
> -		if (min_chunk_size < (size_t)mem_size && mz == NULL) {
> -			/* not enough memory, retry with the biggest zone we
> -			 * have
> -			 */
> -			mz = rte_memzone_reserve_aligned(mz_name, 0,
> -				mp->socket_id, mz_flags, align);
> -		}
> +			if (mz == NULL && rte_errno != ENOMEM)
> +				break;
> +
> +			max_alloc_size = RTE_MIN(max_alloc_size,
> +					(size_t)mem_size) / 2;
> +		} while (mz == NULL && max_alloc_size >= min_chunk_size);
> +
>  		if (mz == NULL) {
>  			ret = -rte_errno;
>  			goto fail;
>  		}
>
> -		if (mz->len < min_chunk_size) {
> -			rte_memzone_free(mz);
> -			ret = -ENOMEM;
> -			goto fail;
> -		}
> -
>  		if (need_iova_contig_obj)
>  			iova = mz->iova;
>  		else
> --
> 2.20.1
>