From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 7 Jan 2022 13:50:34 +0000
From: Bruce Richardson
To: Morten Brørup
Cc: Dharmik Thakkar, dev@dpdk.org, nd@arm.com, honnappa.nagarahalli@arm.com,
 ruifeng.wang@arm.com
Subject: Re: [PATCH 0/1] mempool: implement index-based per core cache
Message-ID:
References:
 <20210930172735.2675627-1-dharmik.thakkar@arm.com>
 <20211224225923.806498-1-dharmik.thakkar@arm.com>
 <98CBD80474FA8B44BF855DF32C47DC35D86DAD@smartserver.smartshare.dk>
 <98CBD80474FA8B44BF855DF32C47DC35D86DEA@smartserver.smartshare.dk>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D86DEA@smartserver.smartshare.dk>

On Fri, Jan 07, 2022 at 12:29:23PM +0100, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Friday, 7 January 2022 12.16
> >
> > On Sat, Dec 25, 2021 at 01:16:03AM +0100, Morten Brørup wrote:
> > > > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > > > Sent: Friday, 24 December 2021 23.59
> > > >
> > > > The current mempool per-core cache implementation stores pointers
> > > > to mbufs; on 64-bit architectures, each pointer consumes 8B. This
> > > > patch replaces it with an index-based implementation, wherein each
> > > > buffer is addressed by (pool base address + index). It reduces the
> > > > amount of memory/cache required for the per-core cache.
> > > >
> > > > L3Fwd performance testing reveals minor improvements in cache
> > > > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > > > throughput.
> > > >
> > > > Micro-benchmarking the patch using mempool_perf_test shows
> > > > significant improvement in the majority of the test cases:
> > > >
> > > > Number of cores = 1:
> > > > n_get_bulk=1  n_put_bulk=1  n_keep=32   %_change_with_patch=18.01
> > > > n_get_bulk=1  n_put_bulk=1  n_keep=128  %_change_with_patch=19.91
> > > > n_get_bulk=1  n_put_bulk=4  n_keep=32   %_change_with_patch=-20.37 (regression)
> > > > n_get_bulk=1  n_put_bulk=4  n_keep=128  %_change_with_patch=-17.01 (regression)
> > > > n_get_bulk=1  n_put_bulk=32 n_keep=32   %_change_with_patch=-25.06 (regression)
> > > > n_get_bulk=1  n_put_bulk=32 n_keep=128  %_change_with_patch=-23.81 (regression)
> > > > n_get_bulk=4  n_put_bulk=1  n_keep=32   %_change_with_patch=53.93
> > > > n_get_bulk=4  n_put_bulk=1  n_keep=128  %_change_with_patch=60.90
> > > > n_get_bulk=4  n_put_bulk=4  n_keep=32   %_change_with_patch=1.64
> > > > n_get_bulk=4  n_put_bulk=4  n_keep=128  %_change_with_patch=8.76
> > > > n_get_bulk=4  n_put_bulk=32 n_keep=32   %_change_with_patch=-4.71 (regression)
> > > > n_get_bulk=4  n_put_bulk=32 n_keep=128  %_change_with_patch=-3.19 (regression)
> > > > n_get_bulk=32 n_put_bulk=1  n_keep=32   %_change_with_patch=65.63
> > > > n_get_bulk=32 n_put_bulk=1  n_keep=128  %_change_with_patch=75.19
> > > > n_get_bulk=32 n_put_bulk=4  n_keep=32   %_change_with_patch=11.75
> > > > n_get_bulk=32 n_put_bulk=4  n_keep=128  %_change_with_patch=15.52
> > > > n_get_bulk=32 n_put_bulk=32 n_keep=32   %_change_with_patch=13.45
> > > > n_get_bulk=32 n_put_bulk=32 n_keep=128  %_change_with_patch=11.58
> > > >
> > > > Number of cores = 2:
> > > > n_get_bulk=1  n_put_bulk=1  n_keep=32   %_change_with_patch=18.21
> > > > n_get_bulk=1  n_put_bulk=1  n_keep=128  %_change_with_patch=21.89
> > > > n_get_bulk=1  n_put_bulk=4  n_keep=32   %_change_with_patch=-21.21 (regression)
> > > > n_get_bulk=1  n_put_bulk=4  n_keep=128  %_change_with_patch=-17.05 (regression)
> > > > n_get_bulk=1  n_put_bulk=32 n_keep=32   %_change_with_patch=-26.09 (regression)
> > > > n_get_bulk=1  n_put_bulk=32 n_keep=128  %_change_with_patch=-23.49 (regression)
> > > > n_get_bulk=4  n_put_bulk=1  n_keep=32   %_change_with_patch=56.28
> > > > n_get_bulk=4  n_put_bulk=1  n_keep=128  %_change_with_patch=67.69
> > > > n_get_bulk=4  n_put_bulk=4  n_keep=32   %_change_with_patch=1.45
> > > > n_get_bulk=4  n_put_bulk=4  n_keep=128  %_change_with_patch=8.84
> > > > n_get_bulk=4  n_put_bulk=32 n_keep=32   %_change_with_patch=-5.27 (regression)
> > > > n_get_bulk=4  n_put_bulk=32 n_keep=128  %_change_with_patch=-3.09 (regression)
> > > > n_get_bulk=32 n_put_bulk=1  n_keep=32   %_change_with_patch=76.11
> > > > n_get_bulk=32 n_put_bulk=1  n_keep=128  %_change_with_patch=86.06
> > > > n_get_bulk=32 n_put_bulk=4  n_keep=32   %_change_with_patch=11.86
> > > > n_get_bulk=32 n_put_bulk=4  n_keep=128  %_change_with_patch=16.55
> > > > n_get_bulk=32 n_put_bulk=32 n_keep=32   %_change_with_patch=13.01
> > > > n_get_bulk=32 n_put_bulk=32 n_keep=128  %_change_with_patch=11.51
> > > >
> > > > From analyzing the results, it is clear that for n_get_bulk and
> > > > n_put_bulk sizes of 32 there is no performance regression. IMO, the
> > > > other sizes are not practical from a performance perspective, and
> > > > the regression in those cases can be safely ignored.
> > > >
> > > > Dharmik Thakkar (1):
> > > >   mempool: implement index-based per core cache
> > > >
> > > >  lib/mempool/rte_mempool.h             | 114 +++++++++++++++++++++++++-
> > > >  lib/mempool/rte_mempool_ops_default.c |   7 ++
> > > >  2 files changed, 119 insertions(+), 2 deletions(-)
> > > >
> > > > --
> > > > 2.25.1
> > > >
> > >
> > > I still think this is very interesting. And your performance numbers
> > > are looking good.
> > >
> > > However, it limits the size of a mempool to 4 GB. As previously
> > > discussed, the max mempool size can be increased by multiplying the
> > > index with a constant.
> > >
> > > I would suggest using sizeof(uintptr_t) as the constant multiplier,
> > > so the mempool can hold objects of any size divisible by
> > > sizeof(uintptr_t). And it would be silly to use a mempool to hold
> > > objects smaller than sizeof(uintptr_t).
> > >
> > > How does the performance look if you multiply the index by
> > > sizeof(uintptr_t)?
> > >
> >
> > Each mempool entry is cache aligned, so we can use that if we want a
> > bigger multiplier.
>
> Thanks for chiming in, Bruce.
>
> Please also read this discussion about the multiplier:
> http://inbox.dpdk.org/dev/CALBAE1PrQYyOG96f6ECeW1vPF3TOh1h7MZZULiY95z9xjbRuyA@mail.gmail.com/
>

I actually wondered, after I had sent the email, whether we did indeed
have an option to disable the cache alignment or not! Thanks for pointing
out that we do. This brings a couple of additional thoughts:

* Using indexes for the cache should probably be a runtime flag rather
  than a build-time one.
* It would seem reasonable to me to disallow use of the indexed-cache
  flag and the non-cache-aligned flag simultaneously.
* On the off chance that that restriction is unacceptable, we can make
  things a little more complicated by doing a runtime computation of the
  "index shift-width" to use.

Overall, I think defaulting to the cacheline shift-width and disallowing
index-based addressing when using unaligned buffers is simplest and
easiest, unless we can come up with a valid use case for needing more
than that.

/Bruce