From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
To: "Morten Brørup" <mb@smartsharesystems.com>,
"Konstantin Ananyev" <konstantin.v.ananyev@yandex.ru>,
"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
"honnappa.nagarahalli@arm.com" <honnappa.nagarahalli@arm.com>,
"kamalakshitha.aligeri@arm.com" <kamalakshitha.aligeri@arm.com>,
"bruce.richardson@intel.com" <bruce.richardson@intel.com>,
"dev@dpdk.org" <dev@dpdk.org>
Cc: "nd@arm.com" <nd@arm.com>
Subject: RE: [PATCH v5] mempool cache: add zero-copy get and put functions
Date: Mon, 23 Jan 2023 12:52:24 +0000 [thread overview]
Message-ID: <b47466ca9ac547f0b01e63dec50d3280@huawei.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D876A9@smartserver.smartshare.dk>
> > > > > @@ -1364,32 +1556,25 @@ rte_mempool_do_generic_put(struct
> > > > > rte_mempool *mp, void * const *obj_table,
> > > > > {
> > > > > void **cache_objs;
> > > > >
> > > > > - /* No cache provided */
> > > > > - if (unlikely(cache == NULL))
> > > > > - goto driver_enqueue;
> > > > > + /* No cache provided? */
> > > > > + if (unlikely(cache == NULL)) {
> > > > > + /* Increment stats now, adding in mempool always succeeds. */
> > > > > + RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
> > > > > + RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
> > > > >
> > > > > - /* increment stat now, adding in mempool always success */
> > > > > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > - RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > > > + goto driver_enqueue;
> > > > > + }
> > > > >
> > > > > - /* The request itself is too big for the cache */
> > > > > - if (unlikely(n > cache->flushthresh))
> > > > > - goto driver_enqueue_stats_incremented;
> > > > > + /* Prepare to add the objects to the cache. */
> > > > > + cache_objs = __rte_mempool_cache_zc_put_bulk(cache, mp, n);
> > > > >
> > > > > - /*
> > > > > -  * The cache follows the following algorithm:
> > > > > -  *   1. If the objects cannot be added to the cache without
> > > > > -  *      crossing the flush threshold, flush the cache to the backend.
> > > > > -  *   2. Add the objects to the cache.
> > > > > -  */
> > > > > + /* The request itself is too big for the cache? */
> > > > > + if (unlikely(cache_objs == NULL)) {
> > > > > + /* Increment stats now, adding in mempool always succeeds. */
> > > > > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
> > > > > + RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> > > >
> > > > Shouldn't it be RTE_MEMPOOL_STAT_ADD() here?
> > >
> > > I can see why you are wondering, but the answer is no. The statistics
> > > in the mempool cache are not related to the cache, they are related
> > > to the mempool; they are there to provide faster per-lcore update
> > > access [1].
> > >
> > > [1]:
> > > https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L94
> >
> > But the condition above:
> > if (unlikely(cache_objs == NULL))
> > means that we can't put these objects to the cache and have to put the
> > objects straight to the pool (skipping the cache completely), right?
>
> Correct.
>
> > If so, then why to update cache stats instead of pool stats?
>
> Because updating the stats in the cache structure is faster than updating
> the stats in the pool structure. Refer to the two macros:
> RTE_MEMPOOL_STAT_ADD() [2] is effectively five lines of code, but
> RTE_MEMPOOL_CACHE_STAT_ADD(cache, name, n) [3] is a one-liner:
> ((cache)->stats.name += (n)).
>
> [2]: https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L325
> [3]: https://elixir.bootlin.com/dpdk/v22.11.1/source/lib/mempool/rte_mempool.h#L348
>
> And to reiterate that this is the correct behavior here, I will rephrase
> my previous response: the stats kept in the cache are part of the pool
> stats; they are not stats for the cache itself.
Ah, OK - that's the same as the current behavior.
It still looks a bit strange to me that we increment the cache (not pool) stats here.
But that's another story, so no extra comments from me for that case.
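For anyone else following along, the difference between the two stat paths boils down to something like the sketch below. The structures are reduced to the fields needed for illustration and the function names are made up; this is not the real DPDK code, only the shape of the two update paths:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-ins for the DPDK structures under discussion. */
struct mempool_cache_stats {
	unsigned long put_bulk;
	unsigned long put_objs;
};

struct mempool_cache {
	struct mempool_cache_stats stats; /* per-lcore, no sharing */
};

#define MAX_LCORES 4

struct mempool {
	/* One stats slot per lcore kept in the pool itself. */
	struct mempool_cache_stats stats[MAX_LCORES];
};

/* Pool-level update (the RTE_MEMPOOL_STAT_ADD shape): the current
 * lcore's slot must be resolved on every call. */
static void pool_stat_add(struct mempool *mp, unsigned int lcore,
			  unsigned long n_bulk, unsigned long n_objs)
{
	mp->stats[lcore].put_bulk += n_bulk;
	mp->stats[lcore].put_objs += n_objs;
}

/* Cache-level update (the RTE_MEMPOOL_CACHE_STAT_ADD shape): the caller
 * already holds the per-lcore cache, so it is a plain increment -- the
 * "one-liner" mentioned above. */
static void cache_stat_add(struct mempool_cache *cache,
			   unsigned long n_bulk, unsigned long n_objs)
{
	cache->stats.put_bulk += n_bulk;
	cache->stats.put_objs += n_objs;
}
```

Both paths count toward the same logical pool statistics; the cache variant is just the cheaper way to get there when a cache pointer is in hand.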
>
> > > > >
> > > > > - if (cache->len + n <= cache->flushthresh) {
> > > > > - cache_objs = &cache->objs[cache->len];
> > > > > - cache->len += n;
> > > > > - } else {
> > > > > - cache_objs = &cache->objs[0];
> > > > > - rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> > > > > - cache->len = n;
> > > > > + goto driver_enqueue;
> > > > > }
> > > > >
> > > > > /* Add the objects to the cache. */
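For readers skimming the diff above, the helper's control flow can be sketched as follows. The name __rte_mempool_cache_zc_put_bulk comes from the quoted patch, but the body below is illustrative only (the backend flush is elided, and the sizes are arbitrary), not the actual DPDK implementation:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_SIZE	8
#define FLUSH_THRESH	6	/* illustrative threshold */

struct cache {
	unsigned int len;
	unsigned int flushthresh;
	void *objs[CACHE_SIZE];
};

/* Returns a pointer into the cache where n objects may be stored, or
 * NULL when the request exceeds the flush threshold -- in which case
 * the caller takes the "driver_enqueue" path straight to the backend. */
static void **zc_put_bulk(struct cache *c, unsigned int n)
{
	void **slot;

	if (n > c->flushthresh)
		return NULL;		/* request too big for the cache */

	if (c->len + n > c->flushthresh) {
		/* The real code enqueues c->objs[0..len) to the backend
		 * here; this sketch just resets the cache. */
		c->len = 0;
	}

	slot = &c->objs[c->len];
	c->len += n;
	return slot;
}
```

This matches the algorithm the removed comment described: flush to the backend only when appending n objects would cross the flush threshold, then hand back a slot inside the cache for a zero-copy write.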
Thread overview: 36+ messages
2022-11-05 13:19 [RFC]: mempool: zero-copy cache get bulk Morten Brørup
2022-11-07 9:19 ` Bruce Richardson
2022-11-07 14:32 ` Morten Brørup
2022-11-15 16:18 ` [PATCH] mempool cache: add zero-copy get and put functions Morten Brørup
2022-11-16 18:04 ` [PATCH v2] " Morten Brørup
2022-11-29 20:54 ` Kamalakshitha Aligeri
2022-11-30 10:21 ` Morten Brørup
2022-12-22 15:57 ` Konstantin Ananyev
2022-12-22 17:55 ` Morten Brørup
2022-12-23 16:58 ` Konstantin Ananyev
2022-12-24 12:17 ` Morten Brørup
2022-12-24 11:49 ` [PATCH v3] " Morten Brørup
2022-12-24 11:55 ` [PATCH v4] " Morten Brørup
2022-12-27 9:24 ` Andrew Rybchenko
2022-12-27 10:31 ` Morten Brørup
2022-12-27 15:17 ` [PATCH v5] " Morten Brørup
2023-01-22 20:34 ` Konstantin Ananyev
2023-01-22 21:17 ` Morten Brørup
2023-01-23 11:53 ` Konstantin Ananyev
2023-01-23 12:23 ` Morten Brørup
2023-01-23 12:52 ` Konstantin Ananyev [this message]
2023-01-23 14:30 ` Bruce Richardson
2023-01-24 1:53 ` Kamalakshitha Aligeri
2023-02-09 14:39 ` [PATCH v6] " Morten Brørup
2023-02-09 14:52 ` [PATCH v7] " Morten Brørup
2023-02-09 14:58 ` [PATCH v8] " Morten Brørup
2023-02-10 8:35 ` fengchengwen
2023-02-12 19:56 ` Honnappa Nagarahalli
2023-02-12 23:15 ` Morten Brørup
2023-02-13 4:29 ` Honnappa Nagarahalli
2023-02-13 9:30 ` Morten Brørup
2023-02-13 9:37 ` Olivier Matz
2023-02-13 10:25 ` Morten Brørup
2023-02-14 14:16 ` Andrew Rybchenko
2023-02-13 12:24 ` [PATCH v9] " Morten Brørup
2023-02-13 14:33 ` Kamalakshitha Aligeri