DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Olivier Matz" <olivier.matz@6wind.com>
Cc: <andrew.rybchenko@oktetlabs.ru>, <bruce.richardson@intel.com>,
	<jerinjacobk@gmail.com>, <dev@dpdk.org>
Subject: RE: [PATCH] mempool: fix get objects from mempool with cache
Date: Fri, 28 Jan 2022 11:22:58 +0100	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D86E56@smartserver.smartshare.dk> (raw)
In-Reply-To: <Ye7IEJGmdAclYMeZ@platinum>

Olivier, thank you for the detailed feedback on my mempool RFC and patches.

We might disagree on some points, but that is the point of having a discussion. :-)

> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, 24 January 2022 16.39
> 
> Hi Morten,
> 
> Few comments below.
> 
> On Fri, Jan 14, 2022 at 05:36:50PM +0100, Morten Brørup wrote:
> > A flush threshold for the mempool cache was introduced in DPDK version
> > 1.3, but rte_mempool_do_generic_get() was not completely updated back
> > then, and some inefficiencies were introduced.
> >
> > This patch fixes the following in rte_mempool_do_generic_get():
> >
> > 1. The code that initially screens the cache request was not updated
> > with the change in DPDK version 1.3.
> > The initial screening compared the request length to the cache size,
> > which was correct before, but became irrelevant with the introduction
> > of the flush threshold. E.g. the cache can hold up to flushthresh
> > objects, which is more than its size, so some requests were not served
> > from the cache, even though they could be.
> > The initial screening has now been corrected to match the initial
> > screening in rte_mempool_do_generic_put(), which verifies that a cache
> > is present, and that the length of the request does not overflow the
> > memory allocated for the cache.

This bug will cause a major performance degradation in a scenario where the application's burst length is the same as the cache size. In that case, the objects are never fetched from the mempool cache.

This scenario occurs if an application has configured a mempool cache size matching the application's burst size. Do any such applications exist? I don't know.
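
To illustrate the screening change, here is a minimal standalone sketch
(not DPDK code; the helper names are invented and the value of
RTE_MEMPOOL_CACHE_MAX_SIZE is copied in only for illustration) of the old
versus the new initial screening decision. With a burst of 32 objects and
a cache size of 32, the old check sends the request to the ring, while
the new check serves it from the cache:

  #include <stdbool.h>
  #include <stdio.h>

  /* Value copied from rte_mempool.h for illustration only. */
  #define RTE_MEMPOOL_CACHE_MAX_SIZE 512

  /* Old screening: the cache is bypassed whenever n >= the cache size. */
  static bool old_served_from_cache(unsigned int n, unsigned int cache_size)
  {
          return n < cache_size;
  }

  /* New screening: bypass only if n cannot fit in the cache memory at all. */
  static bool new_served_from_cache(unsigned int n, unsigned int cache_size)
  {
          (void)cache_size; /* the configured cache size no longer matters */
          return n <= RTE_MEMPOOL_CACHE_MAX_SIZE;
  }

  int main(void)
  {
          unsigned int cache_size = 32, burst = 32;

          printf("burst=%u cache_size=%u: old=%s, new=%s\n",
                 burst, cache_size,
                 old_served_from_cache(burst, cache_size) ? "cache" : "ring",
                 new_served_from_cache(burst, cache_size) ? "cache" : "ring");
          return 0;
  }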

> >
> > 2. The function is a helper for rte_mempool_generic_get(), so it must
> > behave according to the description of that function.
> > Specifically, objects must first be returned from the cache,
> > subsequently from the ring.
> > After the change in DPDK version 1.3, this was not the behavior when
> > the request was partially satisfied from the cache; instead, the
> > objects from the ring were returned ahead of the objects from the
> > cache. This is bad for CPUs with a small L1 cache, which benefit from
> > having the hot objects first in the returned array. (This is also the
> > reason why the function returns the objects in reverse order.)
> > Now, all code paths first return objects from the cache, subsequently
> > from the ring.

Formally, the function is buggy when it doesn't do what it is supposed to do.

But yes, it only has a performance impact, and perhaps mostly on low-end hardware.
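
As an illustration of the cache-first, LIFO ordering that all code paths
now follow, here is a minimal standalone sketch (not DPDK code; the
object values and array sizes are made up): the most recently cached,
i.e. hottest, objects land first in obj_table.

  #include <stdio.h>

  int main(void)
  {
          /* Objects in the order they were put into the cache;
           * index 3 is the most recently freed, i.e. hottest, object. */
          int objs[4] = {10, 11, 12, 13};
          void *cache_mem[4] = {&objs[0], &objs[1], &objs[2], &objs[3]};
          unsigned int cache_len = 4;

          void *obj_table[4];
          void **dst = obj_table;
          void **cache_objs = &cache_mem[cache_len];
          unsigned int n = 4, index;

          /* Same loop shape as in the patch: newest cached object first. */
          for (index = 0; index < n; index++)
                  *dst++ = *--cache_objs;

          for (index = 0; index < n; index++)
                  printf("obj_table[%u] = %d\n", index,
                         *(int *)obj_table[index]);
          /* Prints 13, 12, 11, 10: the hottest objects come out first. */
          return 0;
  }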

> >
> > 3. If the cache could not be backfilled, the function would attempt
> > to get all the requested objects from the ring (instead of only the
> > number of requested objects minus the objects available in the
> > cache), and the function would fail if that failed.
> > Now, the first part of the request is always satisfied from the
> > cache, and if the subsequent backfilling of the cache from the ring
> > fails, only the remaining requested objects are retrieved from the
> > ring.
> 
> This is the only point I'd consider to be a fix. The problem, from the
> user perspective, is that a get() can fail even though there are enough
> objects in cache + common pool.
> 
> To be honest, I feel a bit uncomfortable having such a list of
> problems solved in one commit, even if I understand that they are part
> of the same code rework.
> 
> Ideally, this fix should be a separate commit. What do you think of
> having this simple patch for this fix, and then doing the
> optimizations/rework in another commit?
> 
>   --- a/lib/mempool/rte_mempool.h
>   +++ b/lib/mempool/rte_mempool.h
>   @@ -1484,7 +1484,22 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
>                            * the ring directly. If that fails, we are truly out of
>                            * buffers.
>                            */
>   -                       goto ring_dequeue;
>   +                       req = n - cache->len;
>   +                       ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, req);
>   +                       if (ret < 0) {
>   +                               RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
>   +                               RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
>   +                               return ret;
>   +                       }
>   +                       obj_table += req;
>   +                       len = cache->len;
>   +                       while (len > 0)
>   +                               *obj_table++ = cache_objs[--len];
>   +                       cache->len = 0;
>   +                       RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
>   +                       RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
>   +
>   +                       return 0;
>                   }
> 
>                   cache->len += req;
> 
> The title of this commit could then be more precise to describe
> the solved issue.

I get your point here, but I still consider the other modifications bug fixes too, so a unified rework patch works better for me.

As you also noticed yourself, the resulting code is clear and easily readable. If that were not the case, I might have agreed to break it up into a couple of steps.

> 
> > 4. The code flow for satisfying the request from the cache was
> > slightly inefficient:
> > The likely code path where the objects are simply served from the
> > cache was treated as unlikely. Now it is treated as likely.
> > And in the code path where the cache was backfilled first, numbers
> > were added and subtracted from the cache length; now this code path
> > simply sets the cache length to its final value.
> >
> > 5. Some comments were not correct anymore.
> > The comments have been updated.
> > Most importantly, the description of the successful return value was
> > inaccurate. Success only returns 0, not >= 0.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
> >  lib/mempool/rte_mempool.h | 81 +++++++++++++++++++++++++++++-----------
> >  1 file changed, 59 insertions(+), 22 deletions(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 1e7a3c1527..88f1b8b7ab 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -1443,6 +1443,10 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
> >
> >  /**
> >   * @internal Get several objects from the mempool; used internally.
> > + *
> > + * If cache is enabled, objects are returned from the cache in Last In First
> > + * Out (LIFO) order for the benefit of CPUs with small L1 cache.
> > + *
> >   * @param mp
> >   *   A pointer to the mempool structure.
> >   * @param obj_table
> > @@ -1452,7 +1456,7 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
> >   * @param cache
> >   *   A pointer to a mempool cache structure. May be NULL if not needed.
> >   * @return
> > - *   - >=0: Success; number of objects supplied.
> > + *   - 0: Success; got n objects.
> >   *   - <0: Error; code of ring dequeue function.
> >   */
> >  static __rte_always_inline int
> 
> I think that part should be in a separate commit too. This is a
> documentation fix, which is easily backportable (and should be
> backported) (Fixes: af75078fece3 ("first public release")).

OK. I'll take this out of the patch. And as you mentioned in your follow-up, this is also addressed by another commit, so I'll leave it at that.

> 
> > @@ -1463,38 +1467,71 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> >  	uint32_t index, len;
> >  	void **cache_objs;
> >
> > -	/* No cache provided or cannot be satisfied from cache */
> > -	if (unlikely(cache == NULL || n >= cache->size))
> > +	/* No cache provided or if get would overflow mem allocated for cache */
> > +	if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
> >  		goto ring_dequeue;
> >
> > -	cache_objs = cache->objs;
> > +	cache_objs = &cache->objs[cache->len];
> > +
> > +	if (n <= cache->len) {
> > +		/* The entire request can be satisfied from the cache. */
> > +		cache->len -= n;
> > +		for (index = 0; index < n; index++)
> > +			*obj_table++ = *--cache_objs;
> >
> > -	/* Can this be satisfied from the cache? */
> > -	if (cache->len < n) {
> > -		/* No. Backfill the cache first, and then fill from it */
> > -		uint32_t req = n + (cache->size - cache->len);
> > +		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> > +		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> >
> > -		/* How many do we require i.e. number to fill the cache + the request */
> > -		ret = rte_mempool_ops_dequeue_bulk(mp,
> > -			&cache->objs[cache->len], req);
> > +		return 0;
> > +	}
> > +
> > +	/* Satisfy the first part of the request by depleting the cache. */
> > +	len = cache->len;
> > +	for (index = 0; index < len; index++)
> > +		*obj_table++ = *--cache_objs;
> > +
> > +	/* Number of objects remaining to satisfy the request. */
> > +	len = n - len;
> > +
> > +	/* Fill the cache from the ring; fetch size + remaining objects. */
> > +	ret = rte_mempool_ops_dequeue_bulk(mp, cache->objs,
> > +			cache->size + len);
> > +	if (unlikely(ret < 0)) {
> > +		/*
> > +		 * We are buffer constrained, and not able to allocate
> > +		 * cache + remaining.
> > +		 * Do not fill the cache, just satisfy the remaining part of
> > +		 * the request directly from the ring.
> > +		 */
> > +		ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, len);
> >  		if (unlikely(ret < 0)) {
> >  			/*
> > -			 * In the off chance that we are buffer constrained,
> > -			 * where we are not able to allocate cache + n, go to
> > -			 * the ring directly. If that fails, we are truly out of
> > -			 * buffers.
> > +			 * That also failed.
> > +			 * No further action is required to roll the first
> > +			 * part of the request back into the cache, as both
> > +			 * cache->len and the objects in the cache are intact.
> >  			 */
> > -			goto ring_dequeue;
> > +			RTE_MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
> > +			RTE_MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
> > +
> > +			return ret;
> >  		}
> >
> > -		cache->len += req;
> > +		/* Commit that the cache was emptied. */
> > +		cache->len = 0;
> > +
> > +		RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> > +		RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> > +
> > +		return 0;
> >  	}
> >
> > -	/* Now fill in the response ... */
> > -	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
> > -		*obj_table = cache_objs[len];
> > +	cache_objs = &cache->objs[cache->size + len];
> >
> > -	cache->len -= n;
> > +	/* Satisfy the remaining part of the request from the filled cache. */
> > +	cache->len = cache->size;
> > +	for (index = 0; index < len; index++)
> > +		*obj_table++ = *--cache_objs;
> >
> >  	RTE_MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> >  	RTE_MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> > @@ -1503,7 +1540,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> >
> >  ring_dequeue:
> >
> > -	/* get remaining objects from ring */
> > +	/* Get the objects from the ring. */
> >  	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
> >
> >  	if (ret < 0) {
> 
> About the code itself, it is more readable now, and probably more
> efficient. Did you notice any performance change in the mempool perf
> autotests?

No significant performance change in my test environment. Probably also because mempool_perf_autotest doesn't test flushing/refilling the cache.

> 
> Thanks,
> Olivier


Thread overview: 85+ messages
2021-12-26 15:34 [RFC] mempool: rte_mempool_do_generic_get optimizations Morten Brørup
2022-01-06 12:23 ` [PATCH] mempool: optimize incomplete cache handling Morten Brørup
2022-01-06 16:55   ` Jerin Jacob
2022-01-07  8:46     ` Morten Brørup
2022-01-10  7:26       ` Jerin Jacob
2022-01-10 10:55         ` Morten Brørup
2022-01-14 16:36 ` [PATCH] mempool: fix get objects from mempool with cache Morten Brørup
2022-01-17 17:35   ` Bruce Richardson
2022-01-18  8:25     ` Morten Brørup
2022-01-18  9:07       ` Bruce Richardson
2022-01-24 15:38   ` Olivier Matz
2022-01-24 16:11     ` Olivier Matz
2022-01-28 10:22     ` Morten Brørup [this message]
2022-01-17 11:52 ` [PATCH] mempool: optimize put objects to " Morten Brørup
2022-01-19 14:52 ` [PATCH v2] mempool: fix " Morten Brørup
2022-01-19 15:03 ` [PATCH v3] " Morten Brørup
2022-01-24 15:39   ` Olivier Matz
2022-01-28  9:37     ` Morten Brørup
2022-02-02  8:14 ` [PATCH v2] mempool: fix get objects from " Morten Brørup
2022-06-15 21:18   ` Morten Brørup
2022-09-29 10:52     ` Morten Brørup
2022-10-04 12:57   ` Andrew Rybchenko
2022-10-04 15:13     ` Morten Brørup
2022-10-04 15:58       ` Andrew Rybchenko
2022-10-04 18:09         ` Morten Brørup
2022-10-06 13:43       ` Aaron Conole
2022-10-04 16:03   ` Morten Brørup
2022-10-04 16:36   ` Morten Brørup
2022-10-04 16:39   ` Morten Brørup
2022-02-02 10:33 ` [PATCH v4] mempool: fix mempool cache flushing algorithm Morten Brørup
2022-04-07  9:04   ` Morten Brørup
2022-04-07  9:14     ` Bruce Richardson
2022-04-07  9:26       ` Morten Brørup
2022-04-07 10:32         ` Bruce Richardson
2022-04-07 10:43           ` Bruce Richardson
2022-04-07 11:36             ` Morten Brørup
2022-10-04 20:01   ` Morten Brørup
2022-10-09 11:11   ` [PATCH 1/2] mempool: check driver enqueue result in one place Andrew Rybchenko
2022-10-09 11:11     ` [PATCH 2/2] mempool: avoid usage of term ring on put Andrew Rybchenko
2022-10-09 13:08       ` Morten Brørup
2022-10-09 13:14         ` Andrew Rybchenko
2022-10-09 13:01     ` [PATCH 1/2] mempool: check driver enqueue result in one place Morten Brørup
2022-10-09 13:19   ` [PATCH v4] mempool: fix mempool cache flushing algorithm Andrew Rybchenko
2022-10-04 12:53 ` [PATCH v3] mempool: fix get objects from mempool with cache Andrew Rybchenko
2022-10-04 14:42   ` Morten Brørup
2022-10-07 10:44 ` [PATCH v4] " Andrew Rybchenko
2022-10-08 20:56   ` Thomas Monjalon
2022-10-11 20:30     ` Copy-pasted code should be updated Morten Brørup
2022-10-11 21:47       ` Honnappa Nagarahalli
2022-10-30  8:44         ` Morten Brørup
2022-10-30 22:50           ` Honnappa Nagarahalli
2022-10-14 14:01     ` [PATCH v4] mempool: fix get objects from mempool with cache Olivier Matz
2022-10-09 13:37 ` [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm Andrew Rybchenko
2022-10-09 13:37   ` [PATCH v6 1/4] mempool: check driver enqueue result in one place Andrew Rybchenko
2022-10-09 13:37   ` [PATCH v6 2/4] mempool: avoid usage of term ring on put Andrew Rybchenko
2022-10-09 13:37   ` [PATCH v6 3/4] mempool: fix cache flushing algorithm Andrew Rybchenko
2022-10-09 14:31     ` Morten Brørup
2022-10-09 14:51       ` Andrew Rybchenko
2022-10-09 15:08         ` Morten Brørup
2022-10-14 14:01           ` Olivier Matz
2022-10-14 15:57             ` Morten Brørup
2022-10-14 19:50               ` Olivier Matz
2022-10-15  6:57                 ` Morten Brørup
2022-10-18 16:32                   ` Jerin Jacob
2022-10-09 13:37   ` [PATCH v6 4/4] mempool: flush cache completely on overflow Andrew Rybchenko
2022-10-09 14:44     ` Morten Brørup
2022-10-14 14:01       ` Olivier Matz
2022-10-10 15:21   ` [PATCH v6 0/4] mempool: fix mempool cache flushing algorithm Thomas Monjalon
2022-10-11 19:26     ` Morten Brørup
2022-10-26 14:09     ` Thomas Monjalon
2022-10-26 14:26       ` Morten Brørup
2022-10-26 14:44         ` [PATCH] mempool: cache align mempool cache objects Morten Brørup
2022-10-26 19:44           ` Andrew Rybchenko
2022-10-27  8:34           ` Olivier Matz
2022-10-27  9:22             ` Morten Brørup
2022-10-27 11:42               ` Olivier Matz
2022-10-27 12:11                 ` Morten Brørup
2022-10-27 15:20                   ` Olivier Matz
2022-10-28  6:35           ` [PATCH v3 1/2] " Morten Brørup
2022-10-28  6:35             ` [PATCH v3 2/2] mempool: optimized debug statistics Morten Brørup
2022-10-28  6:41           ` [PATCH v4 1/2] mempool: cache align mempool cache objects Morten Brørup
2022-10-28  6:41             ` [PATCH v4 2/2] mempool: optimized debug statistics Morten Brørup
2022-10-30  9:09               ` Morten Brørup
2022-10-30  9:16                 ` Thomas Monjalon
2022-10-30  9:17             ` [PATCH v4 1/2] mempool: cache align mempool cache objects Thomas Monjalon
