From: Bruce Richardson <bruce.richardson@intel.com>
To: "Morten Brørup" <mb@smartsharesystems.com>
Cc: <olivier.matz@6wind.com>, <andrew.rybchenko@oktetlabs.ru>,
<dev@dpdk.org>
Subject: Re: [PATCH] mempool: optimize get objects with constant n
Date: Tue, 18 Apr 2023 13:55:26 +0100 [thread overview]
Message-ID: <ZD6TPs0EvuOx9p2i@bricha3-MOBL.ger.corp.intel.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D8788D@smartserver.smartshare.dk>
On Tue, Apr 18, 2023 at 01:29:49PM +0200, Morten Brørup wrote:
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Tuesday, 18 April 2023 13.07
> >
> > On Tue, Apr 11, 2023 at 08:48:45AM +0200, Morten Brørup wrote:
> > > When getting objects from the mempool, the number of objects to get is
> > > often constant at build time.
> > >
> > > This patch adds another code path for this case, so the compiler can
> > > optimize more, e.g. unroll the copy loop when the entire request is
> > > satisfied from the cache.
> > >
> > > On an Intel(R) Xeon(R) E5-2620 v4 CPU, and compiled with gcc 9.4.0,
> > > mempool_perf_test with constant n shows an increase in rate_persec by an
> > > average of 17 %, minimum 9.5 %, maximum 24 %.
> > >
> > > The code path where the number of objects to get is unknown at build time
> > > remains essentially unchanged.
> > >
> > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> >
> > Change looks like a good idea. Some suggestions inline below, which you
> > may want to take on board for any future version. I'd strongly suggest
> > adding some extra clarifying code comments, as I suggest below.
> > With those extra code comments:
> >
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> >
> > > ---
> > > lib/mempool/rte_mempool.h | 24 +++++++++++++++++++++---
> > > 1 file changed, 21 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 9f530db24b..ade0100ec7 100644
> > > --- a/lib/mempool/rte_mempool.h
> > > +++ b/lib/mempool/rte_mempool.h
> > > @@ -1500,15 +1500,33 @@ rte_mempool_do_generic_get(struct rte_mempool *mp,
> > void **obj_table,
> > > if (unlikely(cache == NULL))
> > > goto driver_dequeue;
> > >
> > > - /* Use the cache as much as we have to return hot objects first */
> > > - len = RTE_MIN(remaining, cache->len);
> > > cache_objs = &cache->objs[cache->len];
> > > +
> > > + if (__extension__(__builtin_constant_p(n)) && n <= cache->len) {
> > > + /*
> > > + * The request size is known at build time, and
> > > + * the entire request can be satisfied from the cache,
> > > + * so let the compiler unroll the fixed length copy loop.
> > > + */
> > > + cache->len -= n;
> > > + for (index = 0; index < n; index++)
> > > + *obj_table++ = *--cache_objs;
> > > +
> >
> > This loop looks a little awkward to me. Would it be clearer (and perhaps
> > easier for compilers to unroll efficiently) if it were rewritten as:
> >
> > cache->len -= n;
> > cache_objs = &cache->objs[cache->len];
> > for (index = 0; index < n; index++)
> > obj_table[index] = cache_objs[index];
>
> The mempool cache is a stack, so the copy loop needs to get the objects in decrementing order. I.e. the source index decrements and the destination index increments.
>
BTW: Please add this as a comment in the code too, above the loop to avoid
future developers (or even future me), asking this question again!
/Bruce
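To illustrate the point discussed above, here is a minimal standalone sketch (not the actual DPDK code; `toy_cache` and `toy_cache_get` are hypothetical names) of the stack-ordered copy, with the kind of clarifying comment Bruce is asking for:

```c
#include <assert.h>
#include <stddef.h>

#define TOY_CACHE_SIZE 8

/* Toy model of the mempool cache: a stack where objs[len - 1]
 * holds the most recently returned ("hottest") object. */
struct toy_cache {
	unsigned int len;
	void *objs[TOY_CACHE_SIZE];
};

/* Copy n objects from the cache to obj_table.
 *
 * The cache is a stack, so the copy must run in decrementing source
 * order (hottest object first) while the destination index increments.
 * In the patch under review, this loop sits behind a
 * __builtin_constant_p(n) && n <= cache->len check, so that when n is
 * known at build time the compiler can fully unroll it. */
static int
toy_cache_get(struct toy_cache *c, void **obj_table, unsigned int n)
{
	if (n > c->len)
		return -1; /* real code would fall back to the backend dequeue */

	void **cache_objs = &c->objs[c->len];

	c->len -= n;
	for (unsigned int i = 0; i < n; i++)
		*obj_table++ = *--cache_objs;
	return 0;
}
```

Note that the alternative loop suggested above (incrementing both indices) would return the objects in the wrong order: the caller would receive the coldest cached objects first, defeating the purpose of the cache as a hot-object stack.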
Thread overview: 19+ messages in thread
2023-04-11 6:48 Morten Brørup
2023-04-18 11:06 ` Bruce Richardson
2023-04-18 11:29 ` Morten Brørup
2023-04-18 12:54 ` Bruce Richardson
2023-04-18 12:55 ` Bruce Richardson [this message]
2023-06-07 7:51 ` Thomas Monjalon
2023-06-07 8:03 ` Morten Brørup
2023-06-07 8:10 ` Thomas Monjalon
2023-06-07 8:33 ` Morten Brørup
2023-06-07 8:41 ` Morten Brørup
2023-04-18 15:15 ` Tyler Retzlaff
2023-04-18 15:30 ` Morten Brørup
2023-04-18 15:44 ` Tyler Retzlaff
2023-04-18 15:50 ` Morten Brørup
2023-04-18 16:01 ` Tyler Retzlaff
2023-04-18 16:05 ` Morten Brørup
2023-04-18 19:51 ` [PATCH v3] " Morten Brørup
2023-04-18 20:09 ` [PATCH v4] " Morten Brørup
2023-06-07 9:12 ` Thomas Monjalon