From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Fri, 7 Jan 2022 11:15:57 +0000
From: Bruce Richardson
To: Morten Brørup
Cc: Dharmik Thakkar, dev@dpdk.org, nd@arm.com, honnappa.nagarahalli@arm.com,
 ruifeng.wang@arm.com
Subject: Re: [PATCH 0/1] mempool: implement index-based per core cache
References:
 <20210930172735.2675627-1-dharmik.thakkar@arm.com>
 <20211224225923.806498-1-dharmik.thakkar@arm.com>
 <98CBD80474FA8B44BF855DF32C47DC35D86DAD@smartserver.smartshare.dk>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D86DAD@smartserver.smartshare.dk>
List-Id: DPDK patches and discussions

On Sat, Dec 25, 2021 at 01:16:03AM +0100, Morten Brørup wrote:
> > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > Sent: Friday, 24 December 2021 23.59
> >
> > The current mempool per-core cache implementation stores pointers to
> > mbufs. On 64-bit architectures, each pointer consumes 8 bytes. This
> > patch replaces it with an index-based implementation, wherein each
> > buffer is addressed by (pool base address + index). This reduces the
> > amount of memory/cache required for the per-core cache.
> >
> > L3Fwd performance testing reveals minor improvements in cache
> > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > throughput.
> >
> > Micro-benchmarking the patch using mempool_perf_test shows significant
> > improvement in the majority of the test cases.
> >
> > Number of cores = 1:
> > n_get_bulk=1  n_put_bulk=1  n_keep=32   %_change_with_patch=18.01
> > n_get_bulk=1  n_put_bulk=1  n_keep=128  %_change_with_patch=19.91
> > n_get_bulk=1  n_put_bulk=4  n_keep=32   %_change_with_patch=-20.37 (regression)
> > n_get_bulk=1  n_put_bulk=4  n_keep=128  %_change_with_patch=-17.01 (regression)
> > n_get_bulk=1  n_put_bulk=32 n_keep=32   %_change_with_patch=-25.06 (regression)
> > n_get_bulk=1  n_put_bulk=32 n_keep=128  %_change_with_patch=-23.81 (regression)
> > n_get_bulk=4  n_put_bulk=1  n_keep=32   %_change_with_patch=53.93
> > n_get_bulk=4  n_put_bulk=1  n_keep=128  %_change_with_patch=60.90
> > n_get_bulk=4
> >   n_put_bulk=4  n_keep=32   %_change_with_patch=1.64
> > n_get_bulk=4  n_put_bulk=4  n_keep=128  %_change_with_patch=8.76
> > n_get_bulk=4  n_put_bulk=32 n_keep=32   %_change_with_patch=-4.71 (regression)
> > n_get_bulk=4  n_put_bulk=32 n_keep=128  %_change_with_patch=-3.19 (regression)
> > n_get_bulk=32 n_put_bulk=1  n_keep=32   %_change_with_patch=65.63
> > n_get_bulk=32 n_put_bulk=1  n_keep=128  %_change_with_patch=75.19
> > n_get_bulk=32 n_put_bulk=4  n_keep=32   %_change_with_patch=11.75
> > n_get_bulk=32 n_put_bulk=4  n_keep=128  %_change_with_patch=15.52
> > n_get_bulk=32 n_put_bulk=32 n_keep=32   %_change_with_patch=13.45
> > n_get_bulk=32 n_put_bulk=32 n_keep=128  %_change_with_patch=11.58
> >
> > Number of cores = 2:
> > n_get_bulk=1  n_put_bulk=1  n_keep=32   %_change_with_patch=18.21
> > n_get_bulk=1  n_put_bulk=1  n_keep=128  %_change_with_patch=21.89
> > n_get_bulk=1  n_put_bulk=4  n_keep=32   %_change_with_patch=-21.21 (regression)
> > n_get_bulk=1  n_put_bulk=4  n_keep=128  %_change_with_patch=-17.05 (regression)
> > n_get_bulk=1  n_put_bulk=32 n_keep=32   %_change_with_patch=-26.09 (regression)
> > n_get_bulk=1  n_put_bulk=32 n_keep=128  %_change_with_patch=-23.49 (regression)
> > n_get_bulk=4  n_put_bulk=1  n_keep=32   %_change_with_patch=56.28
> > n_get_bulk=4  n_put_bulk=1  n_keep=128  %_change_with_patch=67.69
> > n_get_bulk=4  n_put_bulk=4  n_keep=32   %_change_with_patch=1.45
> > n_get_bulk=4  n_put_bulk=4  n_keep=128  %_change_with_patch=8.84
> > n_get_bulk=4  n_put_bulk=32 n_keep=32   %_change_with_patch=-5.27 (regression)
> > n_get_bulk=4  n_put_bulk=32 n_keep=128  %_change_with_patch=-3.09 (regression)
> > n_get_bulk=32 n_put_bulk=1  n_keep=32   %_change_with_patch=76.11
> > n_get_bulk=32 n_put_bulk=1  n_keep=128  %_change_with_patch=86.06
> > n_get_bulk=32 n_put_bulk=4  n_keep=32   %_change_with_patch=11.86
> > n_get_bulk=32 n_put_bulk=4  n_keep=128  %_change_with_patch=16.55
> > n_get_bulk=32 n_put_bulk=32 n_keep=32   %_change_with_patch=13.01
> > n_get_bulk=32 n_put_bulk=32 n_keep=128
> >   %_change_with_patch=11.51
> >
> > From analyzing the results, it is clear that for n_get_bulk and
> > n_put_bulk sizes of 32 there is no performance regression. IMO, the
> > other sizes are not practical from a performance perspective, and the
> > regression in those cases can be safely ignored.
> >
> > Dharmik Thakkar (1):
> >   mempool: implement index-based per core cache
> >
> >  lib/mempool/rte_mempool.h             | 114 +++++++++++++++++++++++++-
> >  lib/mempool/rte_mempool_ops_default.c |   7 ++
> >  2 files changed, 119 insertions(+), 2 deletions(-)
> >
> > --
> > 2.25.1
> >
>
> I still think this is very interesting. And your performance numbers are
> looking good.
>
> However, it limits the size of a mempool to 4 GB. As previously
> discussed, the max mempool size can be increased by multiplying the
> index with a constant.
>
> I would suggest using sizeof(uintptr_t) as the constant multiplier, so
> the mempool can hold objects of any size divisible by sizeof(uintptr_t).
> And it would be silly to use a mempool to hold objects smaller than
> sizeof(uintptr_t).
>
> How does the performance look if you multiply the index by
> sizeof(uintptr_t)?
>

Each mempool entry is cache aligned, so we can use that if we want a
bigger multiplier.
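For readers following the thread: the scheme the cover letter describes can be sketched roughly as below. This is an illustrative sketch only, not the patch code; the struct and function names are made up, and it assumes the per-core cache stores 32-bit byte offsets from the pool's base address in place of full 64-bit pointers.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-core cache holding 32-bit offsets instead of
 * pointers: 4 B per cached object rather than 8 B on 64-bit targets,
 * which is the memory/cache saving the cover letter describes. */
struct idx_cache {
	uint32_t len;        /* number of objects currently cached */
	uint32_t objs[512];  /* offsets from the pool base address */
};

/* Convert a cached offset back to an object pointer. With a plain
 * byte offset, a 32-bit index limits the pool to 4 GB, which is the
 * limitation raised later in the thread. */
static inline void *
idx_to_obj(void *pool_base, uint32_t idx)
{
	return (char *)pool_base + idx;
}

/* Convert an object pointer to its offset for storing in the cache. */
static inline uint32_t
obj_to_idx(void *pool_base, void *obj)
{
	return (uint32_t)((char *)obj - (char *)pool_base);
}
```

A get from the cache then costs one extra add over the pointer-based scheme, which is why the throughput impact is small while the cache footprint halves.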
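The multiplier idea discussed above can be sketched as follows. Again a hypothetical sketch, not the patch: it assumes a 64-bit target where sizeof(uintptr_t) is 8, and the function name is made up.

```c
#include <stdint.h>

/* Scale the stored 32-bit index by a constant so it can address more
 * than 4 GB of pool memory. Example multipliers:
 *   1                    -> 2^32 bytes =   4 GB (byte offset, no scaling)
 *   sizeof(uintptr_t)=8  -> 2^35 bytes =  32 GB (Morten's suggestion)
 *   cache line size = 64 -> 2^38 bytes = 256 GB (usable because, as noted
 *                           above, each mempool entry is cache aligned)
 * The trade-off: objects must be laid out at multiples of the
 * multiplier for every object to be addressable exactly. */
#define MULTIPLIER sizeof(uintptr_t)

static inline void *
idx_to_obj_scaled(void *pool_base, uint32_t idx)
{
	/* Widen before multiplying so large indices do not overflow. */
	return (char *)pool_base + (uint64_t)idx * MULTIPLIER;
}
```

The performance question in the thread is whether the extra multiply (a shift, for power-of-two multipliers) on every index-to-pointer conversion is measurable.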