DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Thomas Monjalon" <thomas@monjalon.net>,
	<andrew.rybchenko@oktetlabs.ru>, <olivier.matz@6wind.com>
Cc: <david.marchand@redhat.com>, <dev@dpdk.org>,
	<hofors@lysator.liu.se>,
	"Konstantin Ananyev" <konstantin.ananyev@huawei.com>,
	<mattias.ronnblom@ericsson.com>, <stephen@networkplumber.org>,
	<jerinj@marvell.com>, <bruce.richardson@intel.com>
Subject: RE: FW: [PATCH v4 3/3] mempool: use cache for frequently updated stats
Date: Tue, 8 Nov 2022 15:30:07 +0100	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D874A3@smartserver.smartshare.dk> (raw)
In-Reply-To: <6110999.17fYzF0512@thomas>

> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, 8 November 2022 14.32
> 
> 08/11/2022 12:25, Morten Brørup:
> > From: Morten Brørup
> > Sent: Tuesday, 8 November 2022 12.22
> >
> > > From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> > > Sent: Tuesday, 8 November 2022 10.20
> > >
> > > > When built with stats enabled (RTE_LIBRTE_MEMPOOL_STATS defined), the
> > > > performance of mempools with caches is improved as follows.
> > > >
> > > > When accessing objects in the mempool, either the put_bulk and put_objs
> > > > or the get_success_bulk and get_success_objs statistics counters are
> > > > likely to be incremented.
> > > >
> > > > By adding an alternative set of these counters to the mempool cache
> > > > structure, accessing the dedicated statistics structure is avoided in
> > > > the likely cases where these counters are incremented.
> > > >
> > > > The trick here is that the cache line holding the mempool cache
> > > > structure is accessed anyway, in order to access the 'len' or
> > > > 'flushthresh' fields. Updating some statistics counters in the same
> > > > cache line has lower performance cost than accessing the statistics
> > > > counters in the dedicated statistics structure, which resides in
> > > > another cache line.
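
To illustrate the idea (a condensed sketch, not the actual layout in the
patch; the field and counter names follow the description above):

#include <stdint.h>

/*
 * Sketch: the alternative counters live in the same cache line as the
 * 'len' and 'flushthresh' fields that the put/get fast path reads anyway,
 * so e.g. the put path can do RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1)
 * without touching the dedicated statistics structure.
 */
struct rte_mempool_cache_sketch {
	uint32_t size;        /* cache size */
	uint32_t flushthresh; /* flush threshold, read on every put */
	uint32_t len;         /* number of cached objects, read on every put/get */
#ifdef RTE_LIBRTE_MEMPOOL_STATS
	struct {
		uint64_t put_bulk;
		uint64_t put_objs;
		uint64_t get_success_bulk;
		uint64_t get_success_objs;
	} stats;              /* the "likely" counters described above */
#endif
	/* ... the object table follows in the real structure ... */
};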
> > > >
> > > > mempool_perf_autotest with this patch shows the following improvements
> > > > in rate_persec.
> > > >
> > > > The cost of enabling mempool stats (without debug) after this patch:
> > > > -6.8 % and -6.7 %, respectively without and with cache.
> > > >
> > > > v4:
> > > > * Fix checkpatch warnings:
> > > >   A couple of typos in the patch description.
> > > >   The macro to add to a mempool cache stat variable should not use
> > > >   do {} while (0). Personally, I would tend to disagree with this,
> > > >   but whatever keeps the CI happy.
> > > > v3:
> > > > * Don't update the description of the RTE_MEMPOOL_STAT_ADD macro.
> > > >   This change belongs in the first patch of the series.
> > > > v2:
> > > > * Move the statistics counters into a stats structure.
> > > >
> > > > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > > > ---
> >
> > [...]
> >
> > > > +/**
> > > > + * @internal When stats is enabled, store some statistics.
> > > > + *
> > > > + * @param cache
> > > > + *   Pointer to the memory pool cache.
> > > > + * @param name
> > > > + *   Name of the statistics field to increment in the memory pool
> > > > + *   cache.
> > > > + * @param n
> > > > + *   Number to add to the statistics.
> > > > + */
> > > > +#ifdef RTE_LIBRTE_MEMPOOL_STATS
> > > > +#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) (cache)->stats.name += n
> > >
> > > As Andrew already pointed, it needs to be: ((cache)->stats.name += (n))
> > > Apart from that, LGTM.
> > > Series-Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> >
> > @Thomas, this series should be ready to apply... it now has been:
> > Reviewed-by: (mempool maintainer) Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Reviewed-By: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> > Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
> 
> Being acked does not mean it is good to apply in -rc3.

I understand that the RFC/v1 of this series was formally too late to make it into 22.11, so I will not complain loudly if you choose to omit it from 22.11.

With two independent reviews, including one from a mempool maintainer, I still have some hope, also considering the risk assessment below. ;-)

> Please tell what is the benefit for 22.11 (before/after and condition).

Short version: With this series, mempool statistics can be used in production. Without it, the performance cost (mempool_perf_autotest: -74 %) is prohibitive!

Long version:

The patch series provides significantly higher performance for mempool statistics, which are readable through rte_mempool_dump(FILE *f, struct rte_mempool *mp).

Without this series, you have to set RTE_LIBRTE_MEMPOOL_DEBUG at build time to get mempool statistics. RTE_LIBRTE_MEMPOOL_DEBUG also enables protective cookies before and after each mempool object, which are all verified on get/put from the mempool. According to mempool_perf_autotest, the performance cost of mempool statistics (by setting RTE_LIBRTE_MEMPOOL_DEBUG) is a 74 % decrease in rate_persec for mempools with cache (i.e. mbuf pools). Prohibitive for use in production!

With this series, the performance cost of mempool statistics (by setting RTE_LIBRTE_MEMPOOL_STATS) in mempool_perf_autotest is only a 6.7 % decrease in rate_persec for mempools with cache, so mempool statistics can be used in production.
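
For reference, a minimal sketch of how this would be consumed in production,
assuming the application is built with RTE_LIBRTE_MEMPOOL_STATS defined (e.g.
via the compiler flags of the DPDK build, such as meson's -Dc_args; the exact
build mechanism is outside this series):

#include <stdio.h>
#include <rte_mempool.h>

/* Dump the pool state, including the statistics counters, e.g. from a
 * telemetry or debug command. 'mp' is assumed to be the application's
 * existing mbuf pool. */
static void
dump_mbuf_pool_stats(struct rte_mempool *mp)
{
	rte_mempool_dump(stdout, mp);
}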

> Note there is a real risk doing such change that late.

Risk assessment:

The patch series has zero effect unless either RTE_LIBRTE_MEMPOOL_DEBUG or RTE_LIBRTE_MEMPOOL_STATS is set when building. Neither is set in the default build.
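
To make the zero-effect claim concrete: without either of those defines, the
per-access statistics macros reduce to empty statements, so the fast path
compiles to the same code as before. A condensed view, following the #else
branch quoted below:

/* Default build (neither option defined): the stats macro compiles away. */
#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do {} while (0)

/* RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1)  ->  do {} while (0)  */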

> 
> > Please fix the RTE_MEMPOOL_CACHE_STAT_ADD macro while merging, to satisfy
> > checkpatch. ;-)
> >
> > It should be:
> >
> > +#ifdef RTE_LIBRTE_MEMPOOL_STATS
> > +#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) ((cache)->stats.name += (n))
> > +#else
> > +#define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do {} while (0)
> > +#endif
> 
> Would be easier if you fix it.

I will send a v5 of the series.
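
For the record, a small illustration (not from the patch; the names and the
usage below are hypothetical) of why checkpatch insists on the extra
parentheses around the macro body and around 'n':

#define STAT_ADD_UNWRAPPED(cache, name, n) (cache)->stats.name += n
#define STAT_ADD_WRAPPED(cache, name, n)   ((cache)->stats.name += (n))

/*
 * If the expansion ever ends up inside a larger expression, a neighbouring
 * operator binds to 'n' instead of to the whole assignment:
 *
 *   STAT_ADD_UNWRAPPED(c, put_bulk, 4) & 1
 *     -> (c)->stats.put_bulk += (4 & 1)     adds 0
 *
 *   STAT_ADD_WRAPPED(c, put_bulk, 4) & 1
 *     -> ((c)->stats.put_bulk += (4)) & 1   adds 4, as intended
 *
 * In the series the macro is only used as a standalone statement, so the
 * parentheses are purely defensive, but they are cheap to add.
 */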

> 
> > @Thomas/@David: I changed the state of this patch series to Awaiting
> > Upstream in patchwork. Is that helpful, or should I change them to some
> > other state?
> 
> You should keep it as "New".

OK. Thank you.



Thread overview: 10+ messages
2022-11-08 11:25 Morten Brørup
2022-11-08 13:32 ` Thomas Monjalon
2022-11-08 14:30   ` Morten Brørup [this message]
2022-11-08 15:51     ` Thomas Monjalon
2022-11-08 15:59       ` Bruce Richardson
2022-11-08 17:38         ` Konstantin Ananyev
2022-11-09  5:03           ` Morten Brørup
2022-11-09  8:21             ` Mattias Rönnblom
2022-11-09 10:19               ` Konstantin Ananyev
2022-11-09 11:42                 ` Morten Brørup
