From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 18 Apr 2023 09:01:59 -0700
From: Tyler Retzlaff
To: Morten Brørup
Cc: Bruce Richardson, olivier.matz@6wind.com, andrew.rybchenko@oktetlabs.ru, dev@dpdk.org
Subject: Re: [PATCH] mempool: optimize get objects with constant n
Message-ID: <20230418160159.GB4574@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
References: <20230411064845.37713-1-mb@smartsharesystems.com> <20230418151509.GB32568@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net> <98CBD80474FA8B44BF855DF32C47DC35D87891@smartserver.smartshare.dk> <20230418154435.GA4574@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net> <98CBD80474FA8B44BF855DF32C47DC35D87892@smartserver.smartshare.dk>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87892@smartserver.smartshare.dk>
List-Id: DPDK patches and discussions

On Tue, Apr 18, 2023 at 05:50:56PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Tuesday, 18 April 2023 17.45
> >
> > On Tue, Apr 18, 2023 at 05:30:27PM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Tuesday, 18 April 2023 17.15
> > > >
> > > > On Tue, Apr 18, 2023 at 12:06:42PM +0100, Bruce Richardson wrote:
> > > > > On Tue, Apr 11, 2023 at 08:48:45AM +0200, Morten Brørup wrote:
> > > > > > When getting objects from the mempool, the number of objects to
> > > > > > get is often constant at build time.
> > > > > >
> > > > > > This patch adds another code path for this case, so the compiler
> > > > > > can optimize more, e.g. unroll the copy loop when the entire
> > > > > > request is satisfied from the cache.
> > > > > >
> > > > > > On an Intel(R) Xeon(R) E5-2620 v4 CPU, and compiled with gcc
> > > > > > 9.4.0, mempool_perf_test with constant n shows an increase in
> > > > > > rate_persec by an average of 17 %, minimum 9.5 %, maximum 24 %.
> > > > > >
> > > > > > The code path where the number of objects to get is unknown at
> > > > > > build time remains essentially unchanged.
> > > > > >
> > > > > > Signed-off-by: Morten Brørup
> > > > >
> > > > > Change looks like a good idea. Some suggestions inline below,
> > > > > which you may want to take on board for any future version. I'd
> > > > > strongly suggest adding some extra clarifying code comments, as I
> > > > > suggest below.
> > > > > With those extra code comments:
> > > > >
> > > > > Acked-by: Bruce Richardson
> > > > >
> > > > > > ---
> > > > > >  lib/mempool/rte_mempool.h | 24 +++++++++++++++++++++---
> > > > > >  1 file changed, 21 insertions(+), 3 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > > > > index 9f530db24b..ade0100ec7 100644
> > > > > > --- a/lib/mempool/rte_mempool.h
> > > > > > +++ b/lib/mempool/rte_mempool.h
> > > > > > @@ -1500,15 +1500,33 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> > > > > >  	if (unlikely(cache == NULL))
> > > > > >  		goto driver_dequeue;
> > > > > >
> > > > > > -	/* Use the cache as much as we have to return hot objects first */
> > > > > > -	len = RTE_MIN(remaining, cache->len);
> > > > > >  	cache_objs = &cache->objs[cache->len];
> > > > > > +
> > > > > > +	if (__extension__(__builtin_constant_p(n)) && n <= cache->len) {
> > > >
> > > > don't take direct dependency on compiler builtins. define a macro so
> > > > we don't have to play shotgun surgery later.
> > > >
> > > > also what is the purpose of using __extension__ here? are you
> > > > annotating the use of __builtin_constant_p() or is there more?
> > > > because if that's the only reason, i see no need to use
> > > > __extension__ when already using a compiler-specific builtin like
> > > > this; that it is not standard is implied and enforced by a compile
> > > > break.
> > >
> > > ARM 32-bit memcpy() [1] does it this way, so I did the same.
> > >
> > > [1]: https://elixir.bootlin.com/dpdk/v23.03/source/lib/eal/arm/include/rte_memcpy_32.h#L122
> >
> > i see, thanks.
> >
> > > While I agree that a macro for __builtin_constant_p() would be good,
> > > it belongs in a patch to fix portability, not in this patch.
> >
> > i agree it isn't part of this change.
> >
> > would you mind introducing it as a separate patch and depending on it,
> > or do you feel that would delay this patch too much? i wouldn't mind
> > doing it myself, but there is a long merge time on my patches, which
> > means i end up having to carry the adaptations locally for weeks at a
> > time.
>
> I would rather not.
>
> Introducing global macros in rte_common.h usually triggers a lot of
> discussion and pushback, and I don't want it to hold back this patch.

yeah, no kidding. i wish the process was a bit more friendly being on the
receiving end. it's unfortunate, because it discourages improvements.

i'll bring a patch for it then.