From: Thomas Monjalon
To: Jiany Wu, "Burakov, Anatoly"
Cc: "users@dpdk.org", "Richardson, Bruce", "olivier.matz@6wind.com",
 "dmitry.kozliuk@gmail.com", "stephen@networkplumber.org", "Mcnamara, John"
Subject: Re: [dpdk-users] can we reserve hugepage and not release
Date: Wed, 29 Sep 2021 13:11:02 +0200
Message-ID: <7391021.896Q4A1nnV@thomas>
References: <3927535.NYFkNA2spg@thomas>
List-Id: DPDK usage discussions

29/09/2021 12:16, Burakov, Anatoly:
> From: Thomas Monjalon
> > 18/09/2021 04:37, Jiany Wu:
> > > Hello,
> > >
> > > I have a scenario where I need to start and stop a container many
> > > times, each time reserving hugepages. After several start/stop
> > > cycles, the hugepages can no longer be reserved.
> > > The hugepage size is 2MB, and the hardware only supports 2MB, not
> > > 1GB. Is there any way to make sure the hugepages stay contiguous?
> > > Thanks indeed.
> >
> > Interesting question.
> > I think we need to address it in the DPDK documentation.
> >
> > Anatoly, Stephen, Bruce, any advice please?
>
> Hi,
>
> From the description, I don't quite understand what the issue is here.
> Is the problem the contiguousness of memory, or the inability to
> reserve more hugepages?

I think the issue is that sometimes some pages are not properly released,
so we cannot reserve them again.
That's something I experienced myself.
Any trick to reset the hugepages state?

> How are hugepages assigned to your container?
> Have you tried using --in-memory mode?
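For readers hitting the same "pages not properly released" state, it can
usually be inspected and reclaimed from the shell. This is only a sketch,
not a documented DPDK procedure: it assumes a hugetlbfs mount at
/dev/hugepages and DPDK's default "rtemap_" file name prefix; adjust both
for your container setup.

```shell
# Sketch: inspect and reclaim leaked 2MB hugepages after a container exits.
# Assumed paths: hugetlbfs at /dev/hugepages, DPDK file prefix "rtemap_".

# 1. Compare total vs. free pages. A gap that persists after every DPDK
#    process has stopped suggests pages still pinned by leftover mappings.
grep -E 'HugePages_(Total|Free)' /proc/meminfo

# 2. Leaked pages are typically backed by files left behind on the
#    hugetlbfs mount; removing them (only when no DPDK process is
#    running!) returns the pages to the free pool:
#      rm -f /dev/hugepages/rtemap_*

# 3. Re-request the pool size (example: 1024 x 2MB pages). Note this does
#    not guarantee physical contiguity on a long-running, fragmented host:
#      echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
```

The --in-memory EAL option suggested above sidesteps the leftover-file
problem entirely, since EAL then keeps its memory anonymous and leaves no
persistent files on the hugetlbfs mount when the process dies.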