Date: Thu, 20 Jan 2022 09:05:59 +0000
From: Bruce Richardson
To: Dmitry Kozlyuk
Cc: "dev@dpdk.org", Anatoly Burakov, Slava Ovsiienko, David Marchand,
 "NBU-Contact-Thomas Monjalon (EXTERNAL)", Lior Margalit
Subject: Re: [PATCH v1 0/6] Fast restart with many hugepages
References: <20211230143744.3550098-1-dkozlyuk@nvidia.com>
 <20220117080801.481568-1-dkozlyuk@nvidia.com>

On Wed, Jan 19, 2022 at 09:12:27PM +0000, Dmitry Kozlyuk wrote:
> Hi Bruce,
>
> > From: Bruce Richardson
> > [...]
> > this seems really interesting, but in the absence of TB of memory being
> > used, is it easily possible to see the benefits of this work? I've been
> > playing with adding large memory allocations to the helloworld example
> > and checking the runtime. Allocating 1GB using malloc per thread seems
> > to show a small (<0.5 second at most) benefit, and using a fixed 10GB
> > allocation using memzone_reserve at startup shows runtimes within the
> > margin of error when run with --huge-unlink=existing vs
> > --huge-unlink=never. At what size of memory footprint is it expected
> > to make a clear improvement?
>
> Sorry, there was a bug in v1 that completely broke the testing.
> I should've double-checked after what I considered a quick rebase
> before sending.
> Version 2 can be simply tested even without modifying the code:
>
> time sh -c 'echo quit | sudo ../_build/dpdk/app/test/dpdk-test --huge-unlink=never -m 8192 --single-file-segments --no-pci 2>/dev/null >/dev/null'
>
> With --huge-unlink=existing:
> real    0m1.450s
> user    0m0.574s
> sys     0m0.706s    (1)
>
> With --huge-unlink=never, first run (no hugepage files to reuse):
> real    0m0.892s
> user    0m0.002s
> sys     0m0.718s    (2)
>
> With --huge-unlink=never, second run (hugepage files left):
> real    0m0.210s
> user    0m0.010s
> sys     0m0.021s    (3)
>
> Notice that (1) and (2) are close since there is no reuse,
> but (2) and (3) differ by 0.7 seconds for 8 GB,
> which correlates with the 14 GB/sec memset() speed on this machine.
> Results without --single-file-segments are nearly identical.

Thanks, glad to hear it wasn't just me! I'll check the v2 again when I get
the chance.
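
[Editor's note: for anyone who wants to reproduce the memzone-based check
described above, the following is a minimal sketch. It is an assumption of
what such a test program might look like (the exact helloworld changes are
not shown in the thread); the program name and sizes are illustrative. The
DPDK calls used are rte_eal_init(), rte_memzone_reserve() and
rte_eal_cleanup().]

    /* Hypothetical standalone test (sketch, not the exact helloworld
     * modification from the thread): reserve a fixed 10GB memzone right
     * after EAL init and exit, so that `time` over the whole run mostly
     * measures EAL hugepage setup. */
    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_memzone.h>

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0) {
                    fprintf(stderr, "EAL init failed\n");
                    return 1;
            }

            /* fixed 10GB reservation, as in the test described above */
            const struct rte_memzone *mz = rte_memzone_reserve("bench",
                            10ULL << 30, SOCKET_ID_ANY, 0);
            if (mz == NULL)
                    fprintf(stderr, "memzone_reserve failed\n");
            else
                    printf("reserved %zu bytes at %p\n",
                                    (size_t)mz->len, mz->addr);

            rte_eal_cleanup();
            return 0;
    }

[The comparison would then be timed with both unlink modes, for example
(flag values as used elsewhere in this thread, binary name illustrative):

    time sudo ./memzone-test -m 10240 --huge-unlink=existing --no-pci
    time sudo ./memzone-test -m 10240 --huge-unlink=never --no-pci

where a second run with --huge-unlink=never is the case expected to benefit
from reusing the hugepage files left behind by the first run.]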