From: "Morten Brørup" <mb@smartsharesystems.com>
To: <olivier.matz@6wind.com>, <andrew.rybchenko@oktetlabs.ru>,
"Thomas Monjalon" <thomas@monjalon.net>
Cc: <jerinj@marvell.com>, <bruce.richardson@intel.com>, <dev@dpdk.org>
Subject: RE: [PATCH v4 2/2] mempool: optimized debug statistics
Date: Sun, 30 Oct 2022 10:09:55 +0100 [thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87462@smartserver.smartshare.dk> (raw)
In-Reply-To: <20221028064152.98341-2-mb@smartsharesystems.com>
> From: Morten Brørup [mailto:mb@smartsharesystems.com]
> Sent: Friday, 28 October 2022 08.42
>
> When built with debug enabled (RTE_LIBRTE_MEMPOOL_DEBUG defined), the
> performance of mempools with caches is improved as follows.
>
> Accessing objects in the mempool is likely to increment either the
> put_bulk and put_objs or the get_success_bulk and get_success_objs
> debug statistics counters.
>
> By adding an alternative set of these counters to the mempool cache
> structure, accessing the dedicated debug statistics structure is
> avoided in the likely cases where these counters are incremented.
>
> The trick here is that the cache line holding the mempool cache
> structure is accessed anyway, in order to update the "len" field.
> Updating some debug statistics counters in the same cache line has
> lower performance cost than accessing the debug statistics counters
> in the dedicated debug statistics structure, i.e. in another cache line.
>
> Running mempool_perf_autotest on a VMware virtual server shows an avg.
> increase of 6.4 % in rate_persec for the tests with cache. (Only when
> built with debug enabled, obviously!)
>
> For the tests without cache, the avg. increase in rate_persec is
> 0.8 %. I assume this is noise from the test environment.
>
> v4:
> * Fix spelling and repeated word in commit message, caught by
> checkpatch.
> v3:
> * Try to fix git reference by making part of a series.
> * Add --in-reply-to v1 when sending email.
> v2:
> * Fix spelling and repeated word in commit message, caught by
> checkpatch.
>
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> lib/mempool/rte_mempool.c | 7 +++++
> lib/mempool/rte_mempool.h | 55 +++++++++++++++++++++++++++++++--------
> 2 files changed, 51 insertions(+), 11 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index 21c94a2b9f..7b8c00a022 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c
> @@ -1285,6 +1285,13 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
> sum.get_fail_objs += mp->stats[lcore_id].get_fail_objs;
> sum.get_success_blks += mp->stats[lcore_id].get_success_blks;
> sum.get_fail_blks += mp->stats[lcore_id].get_fail_blks;
> + /* Add the fast access statistics, if local caches exist */
> + if (mp->cache_size != 0) {
> + sum.put_bulk += mp->local_cache[lcore_id].put_bulk;
> + sum.put_objs += mp->local_cache[lcore_id].put_objs;
> + sum.get_success_bulk += mp->local_cache[lcore_id].get_success_bulk;
> + sum.get_success_objs += mp->local_cache[lcore_id].get_success_objs;
> + }
> }
> fprintf(f, " stats:\n");
> fprintf(f, " put_bulk=%"PRIu64"\n", sum.put_bulk);
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 3725a72951..d84087bc92 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -86,6 +86,14 @@ struct rte_mempool_cache {
> uint32_t size; /**< Size of the cache */
> uint32_t flushthresh; /**< Threshold before we flush excess elements */
> uint32_t len; /**< Current cache count */
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + uint32_t unused;
> + /* Fast access statistics, only for likely events */
> + uint64_t put_bulk; /**< Number of puts. */
> + uint64_t put_objs; /**< Number of objects successfully put. */
> + uint64_t get_success_bulk; /**< Successful allocation number. */
> + uint64_t get_success_objs; /**< Objects successfully allocated. */
> +#endif
> /**
> * Cache objects
> *
> @@ -1327,13 +1335,19 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
> {
> void **cache_objs;
>
> + /* No cache provided */
> + if (unlikely(cache == NULL))
> + goto driver_enqueue;
> +
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> /* increment stat now, adding in mempool always success */
> - RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> - RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> + cache->put_bulk += 1;
> + cache->put_objs += n;
> +#endif
>
> - /* No cache provided or the request itself is too big for the cache */
> - if (unlikely(cache == NULL || n > cache->flushthresh))
> - goto driver_enqueue;
> + /* The request is too big for the cache */
> + if (unlikely(n > cache->flushthresh))
> + goto driver_enqueue_stats_incremented;
>
> /*
> * The cache follows the following algorithm:
> @@ -1358,6 +1372,12 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>
> driver_enqueue:
>
> + /* increment stat now, adding in mempool always success */
> + RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> + RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> +
> +driver_enqueue_stats_incremented:
> +
> /* push objects to the backend */
> rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
> }
> @@ -1464,8 +1484,10 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> if (remaining == 0) {
> /* The entire request is satisfied from the cache. */
>
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + cache->get_success_bulk += 1;
> + cache->get_success_objs += n;
> +#endif
>
> return 0;
> }
> @@ -1494,8 +1516,10 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>
> cache->len = cache->size;
>
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + cache->get_success_bulk += 1;
> + cache->get_success_objs += n;
> +#endif
>
> return 0;
>
> @@ -1517,8 +1541,17 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
> RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
> } else {
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> - RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + if (likely(cache != NULL)) {
> + cache->get_success_bulk += 1;
> + cache->get_success_objs += n;
> + } else {
> +#endif
> + RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> + RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + }
> +#endif
> }
>
> return ret;
> --
> 2.17.1
I am retracting this second part of the patch series, and reopening the original patch instead. This second part is probably not going to make it to 22.11 anyway.
Instead, after 22.11 I will provide another patch series to split the current RTE_LIBRTE_MEMPOOL_DEBUG define in two: RTE_LIBRTE_MEMPOOL_STATS for statistics, and RTE_LIBRTE_MEMPOOL_DEBUG for debugging. This patch can then be resubmitted under RTE_LIBRTE_MEMPOOL_STATS; a rough sketch follows below.
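For illustration only, this is roughly how the split could look in rte_mempool.h. The RTE_MEMPOOL_CACHE_STAT_ADD macro name and the exact guard placement are my assumptions for the future series, not part of this patch or of the current API:

    #ifdef RTE_LIBRTE_MEMPOOL_STATS
    /* Per-lcore counters in the dedicated stats structure (unlikely events). */
    #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {                  \
                    unsigned int __lcore_id = rte_lcore_id();       \
                    if (__lcore_id < RTE_MAX_LCORE)                 \
                            (mp)->stats[__lcore_id].name += (n);    \
            } while (0)
    /* Counters embedded in the mempool cache structure (likely events). */
    #define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) ((cache)->name += (n))
    #else
    #define RTE_MEMPOOL_STAT_ADD(mp, name, n) do {} while (0)
    #define RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) do {} while (0)
    #endif

With such macros, the #ifdef RTE_LIBRTE_MEMPOOL_DEBUG blocks added in rte_mempool_do_generic_put() and rte_mempool_do_generic_get() above would collapse into single RTE_MEMPOOL_CACHE_STAT_ADD() calls.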
-Morten
Thread overview: 85+ messages
2021-12-26 15:34 [RFC] mempool: rte_mempool_do_generic_get optimizations Morten Brørup
2022-01-06 12:23 ` [PATCH] mempool: optimize incomplete cache handling Morten Brørup
2022-01-06 16:55 ` Jerin Jacob
2022-01-07 8:46 ` Morten Brørup
2022-01-10 7:26 ` Jerin Jacob
2022-01-10 10:55 ` Morten Brørup
2022-01-14 16:36 ` [PATCH] mempool: fix get objects from mempool with cache Morten Brørup
2022-01-17 17:35 ` Bruce Richardson
2022-01-18 8:25 ` Morten Brørup
2022-01-18 9:07 ` Bruce Richardson
2022-01-24 15:38 ` Olivier Matz
2022-01-24 16:11 ` Olivier Matz
2022-01-28 10:22 ` Morten Brørup
2022-01-17 11:52 ` [PATCH] mempool: optimize put objects to " Morten Brørup
2022-01-19 14:52 ` [PATCH v2] mempool: fix " Morten Brørup
2022-01-19 15:03 ` [PATCH v3] " Morten Brørup
2022-01-24 15:39 ` Olivier Matz
2022-01-28 9:37 ` Morten Brørup
2022-02-02 8:14 ` [PATCH v2] mempool: fix get objects from " Morten Brørup
2022-06-15 21:18 ` Morten Brørup
2022-09-29 10:52 ` Morten Brørup
2022-10-04 12:57 ` Andrew Rybchenko
2022-10-04 15:13 ` Morten Brørup
2022-10-04 15:58 ` Andrew Rybchenko
2022-10-04 18:09 ` Morten Brørup
2022-10-06 13:43 ` Aaron Conole
2022-10-04 16:03 ` Morten Brørup
2022-10-04 16:36 ` Morten Brørup
2022-10-04 16:39 ` Morten Brørup
2022-02-02 10:33 ` [PATCH v4] mempool: fix mempool cache flushing algorithm Morten Brørup
2022-04-07 9:04 ` Morten Brørup
2022-04-07 9:14 ` Bruce Richardson
2022-04-07 9:26 ` Morten Brørup
2022-04-07 10:32 ` Bruce Richardson
2022-04-07 10:43 ` Bruce Richardson
2022-04-07 11:36 ` Morten Brørup
2022-10-04 20:01 ` Morten Brørup
2022-10-09 11:11 ` [PATCH 1/2] mempool: check driver enqueue result in one place Andrew Rybchenko
2022-10-09 11:11 ` [PATCH 2/2] mempool: avoid usage of term ring on put Andrew Rybchenko
2022-10-09 13:08 ` Morten Brørup
2022-10-09 13:14 ` Andrew Rybchenko
2022-10-09 13:01 ` [PATCH 1/2] mempool: check driver enqueue result in one place Morten Brørup
2022-10-09 13:19 ` [PATCH v4] mempool: fix mempool cache flushing algorithm Andrew Rybchenko
2022-10-04 12:53 ` [PATCH v3] mempool: fix get objects from mempool with cache Andrew Rybchenko
2022-10-04 14:42 ` Morten Brørup
2022-10-07 10:44 ` [PATCH v4] " Andrew Rybchenko
2022-10-08 20:56 ` Thomas Monjalon
2022-10-11 20:30 ` Copy-pasted code should be updated Morten Brørup
2022-10-11 21:47 ` Honnappa Nagarahalli
2022-10-30 8:44 ` Morten Brørup
2022-10-30 22:50 ` Honnappa Nagarahalli
2022-10-14 14:01 ` [PATCH v4] mempool: fix get objects from mempool with cache Olivier Matz
2022-10-09 13:37 ` [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm Andrew Rybchenko
2022-10-09 13:37 ` [PATCH v6 1/4] mempool: check driver enqueue result in one place Andrew Rybchenko
2022-10-09 13:37 ` [PATCH v6 2/4] mempool: avoid usage of term ring on put Andrew Rybchenko
2022-10-09 13:37 ` [PATCH v6 3/4] mempool: fix cache flushing algorithm Andrew Rybchenko
2022-10-09 14:31 ` Morten Brørup
2022-10-09 14:51 ` Andrew Rybchenko
2022-10-09 15:08 ` Morten Brørup
2022-10-14 14:01 ` Olivier Matz
2022-10-14 15:57 ` Morten Brørup
2022-10-14 19:50 ` Olivier Matz
2022-10-15 6:57 ` Morten Brørup
2022-10-18 16:32 ` Jerin Jacob
2022-10-09 13:37 ` [PATCH v6 4/4] mempool: flush cache completely on overflow Andrew Rybchenko
2022-10-09 14:44 ` Morten Brørup
2022-10-14 14:01 ` Olivier Matz
2022-10-10 15:21 ` [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm Thomas Monjalon
2022-10-11 19:26 ` Morten Brørup
2022-10-26 14:09 ` Thomas Monjalon
2022-10-26 14:26 ` Morten Brørup
2022-10-26 14:44 ` [PATCH] mempool: cache align mempool cache objects Morten Brørup
2022-10-26 19:44 ` Andrew Rybchenko
2022-10-27 8:34 ` Olivier Matz
2022-10-27 9:22 ` Morten Brørup
2022-10-27 11:42 ` Olivier Matz
2022-10-27 12:11 ` Morten Brørup
2022-10-27 15:20 ` Olivier Matz
2022-10-28 6:35 ` [PATCH v3 1/2] " Morten Brørup
2022-10-28 6:35 ` [PATCH v3 2/2] mempool: optimized debug statistics Morten Brørup
2022-10-28 6:41 ` [PATCH v4 1/2] mempool: cache align mempool cache objects Morten Brørup
2022-10-28 6:41 ` [PATCH v4 2/2] mempool: optimized debug statistics Morten Brørup
2022-10-30 9:09 ` Morten Brørup [this message]
2022-10-30 9:16 ` Thomas Monjalon
2022-10-30 9:17 ` [PATCH v4 1/2] mempool: cache align mempool cache objects Thomas Monjalon