DPDK patches and discussions
From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
To: "Morten Brørup" <mb@smartsharesystems.com>,
	olivier.matz@6wind.com, dev@dpdk.org
Cc: honnappa.nagarahalli@arm.com, bruce.richardson@intel.com,
	konstantin.ananyev@huawei.com
Subject: Re: [PATCH] mempool: micro-optimize put function
Date: Wed, 16 Nov 2022 14:29:55 +0300
Message-ID: <27299e15-948d-9e0b-d6e0-efb740a3016e@oktetlabs.ru>
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D874C7@smartserver.smartshare.dk>

On 11/16/22 14:10, Morten Brørup wrote:
>> From: Andrew Rybchenko [mailto:andrew.rybchenko@oktetlabs.ru]
>> Sent: Wednesday, 16 November 2022 12.05
>>
>> On 11/16/22 13:18, Morten Brørup wrote:
>>> Micro-optimization:
>>> Reduced the most likely code path in the generic put function by moving an
>>> unlikely check out of the most likely code path and further down.
>>>
>>> Also updated the comments in the function.
>>>
>>> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
>>> ---
>>>    lib/mempool/rte_mempool.h | 35 ++++++++++++++++++-----------------
>>>    1 file changed, 18 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
>>> index 9f530db24b..aba90dbb5b 100644
>>> --- a/lib/mempool/rte_mempool.h
>>> +++ b/lib/mempool/rte_mempool.h
>>> @@ -1364,32 +1364,33 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>>>    {
>>>    	void **cache_objs;
>>>
>>> -	/* No cache provided */
>>> +	/* No cache provided? */
>>>    	if (unlikely(cache == NULL))
>>>    		goto driver_enqueue;
>>>
>>> -	/* increment stat now, adding in mempool always success */
>>> +	/* Increment stats now, adding in mempool always succeeds. */
>>>    	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>>>    	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
>>>
>>> -	/* The request itself is too big for the cache */
>>> -	if (unlikely(n > cache->flushthresh))
>>> -		goto driver_enqueue_stats_incremented;
>>
>> I've kept the check here since it protects against overflow in
>> cache->len + n below if n is really huge.
> 
> We can fix that, see below.
> 
>>
>>> -
>>> -	/*
>>> -	 * The cache follows the following algorithm:
>>> -	 *   1. If the objects cannot be added to the cache without crossing
>>> -	 *      the flush threshold, flush the cache to the backend.
>>> -	 *   2. Add the objects to the cache.
>>> -	 */
>>> -
>>> -	if (cache->len + n <= cache->flushthresh) {
>>> +	if (likely(cache->len + n <= cache->flushthresh)) {
> 
> It is an invariant that cache->len <= cache->flushthresh, so the above comparison can be rewritten to protect against overflow:
> 
> if (likely(n <= cache->flushthresh - cache->len)) {
> 

True, but it would be useful to highlight the use of this invariant
with either a comment or an assert.
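
For illustration, here is a tiny standalone program (my own sketch; the
values and the plain unsigned ints standing in for the cache fields are
hypothetical) showing why the original form of the comparison can wrap
around while the rewritten one cannot:

#include <assert.h>
#include <limits.h>
#include <stdio.h>

int main(void)
{
	unsigned int len = 100, flushthresh = 512;
	unsigned int n = UINT_MAX - 50;	/* a "really huge" request */

	/* Invariant from the discussion: len <= flushthresh. */
	assert(len <= flushthresh);

	/* len + n wraps around to 49, so the original form claims the
	 * huge request fits below the flush threshold (prints 1). */
	printf("len + n <= flushthresh: %d\n", len + n <= flushthresh);

	/* The subtraction cannot wrap, so the huge request is
	 * correctly rejected (prints 0). */
	printf("n <= flushthresh - len: %d\n", n <= flushthresh - len);

	return 0;
}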

IMHO it is wrong to use likely() here since, as far as I know, it
makes the else branch very expensive, whereas crossing the flush
threshold is an expected branch and must not be made that expensive.

>>> +		/*
>>> +		 * The objects can be added to the cache without crossing the
>>> +		 * flush threshold.
>>> +		 */
>>>    		cache_objs = &cache->objs[cache->len];
>>>    		cache->len += n;
>>> -	} else {
>>> +	} else if (likely(n <= cache->flushthresh)) {
>>> +		/*
>>> +		 * The request itself fits into the cache.
>>> +		 * But first, the cache must be flushed to the backend, so
>>> +		 * adding the objects does not cross the flush threshold.
>>> +		 */
>>>    		cache_objs = &cache->objs[0];
>>>    		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
>>>    		cache->len = n;
>>> +	} else {
>>> +		/* The request itself is too big for the cache. */
>>> +		goto driver_enqueue_stats_incremented;
>>>    	}
>>>
>>>    	/* Add the objects to the cache. */
>>> @@ -1399,13 +1400,13 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
>>>
>>>    driver_enqueue:
>>>
>>> -	/* increment stat now, adding in mempool always success */
>>> +	/* Increment stats now, adding in mempool always succeeds. */
>>>    	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
>>>    	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);
>>>
>>>    driver_enqueue_stats_incremented:
>>>
>>> -	/* push objects to the backend */
>>> +	/* Push the objects to the backend. */
>>>    	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
>>>    }
>>>
>>
> 
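
For reference, combining the v1 patch with the overflow-safe comparison
and an assert documenting the invariant (and without the disputed
likely() hints), the function body would read roughly as follows. This
is only a sketch against lib/mempool/rte_mempool.h, not the verbatim
upstream code:

static __rte_always_inline void
rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
			   unsigned int n, struct rte_mempool_cache *cache)
{
	void **cache_objs;

	/* No cache provided? */
	if (unlikely(cache == NULL))
		goto driver_enqueue;

	/* Increment stats now, adding in mempool always succeeds. */
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

	/* Invariant: cache->len <= cache->flushthresh, so the
	 * subtraction below cannot wrap even for a huge n. */
	RTE_ASSERT(cache->len <= cache->flushthresh);

	if (n <= cache->flushthresh - cache->len) {
		/* The objects can be added to the cache without
		 * crossing the flush threshold. */
		cache_objs = &cache->objs[cache->len];
		cache->len += n;
	} else if (n <= cache->flushthresh) {
		/* The request itself fits into the cache, but the cache
		 * must first be flushed to the backend, so that adding
		 * the objects does not cross the flush threshold. */
		cache_objs = &cache->objs[0];
		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
		cache->len = n;
	} else {
		/* The request itself is too big for the cache. */
		goto driver_enqueue_stats_incremented;
	}

	/* Add the objects to the cache. */
	rte_memcpy(cache_objs, obj_table, sizeof(void *) * n);
	return;

driver_enqueue:

	/* Increment stats now, adding in mempool always succeeds. */
	RTE_MEMPOOL_STAT_ADD(mp, put_bulk, 1);
	RTE_MEMPOOL_STAT_ADD(mp, put_objs, n);

driver_enqueue_stats_incremented:

	/* Push the objects to the backend. */
	rte_mempool_ops_enqueue_bulk(mp, obj_table, n);
}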


Thread overview: 16+ messages
2022-11-16 10:18 Morten Brørup
2022-11-16 11:04 ` Andrew Rybchenko
2022-11-16 11:10   ` Morten Brørup
2022-11-16 11:29     ` Andrew Rybchenko [this message]
2022-11-16 12:14 ` [PATCH v2] " Morten Brørup
2022-11-16 15:51   ` Honnappa Nagarahalli
2022-11-16 15:59     ` Morten Brørup
2022-11-16 16:26       ` Honnappa Nagarahalli
2022-11-16 17:39         ` Morten Brørup
2022-12-19  8:50           ` Morten Brørup
2022-12-22 13:52             ` Konstantin Ananyev
2022-12-22 15:02               ` Morten Brørup
2022-12-23 16:34                 ` Konstantin Ananyev
2022-12-24 10:46 ` [PATCH v3] " Morten Brørup
2022-12-27  8:54   ` Andrew Rybchenko
2022-12-27 15:37     ` Morten Brørup
