DPDK patches and discussions
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: "Morten Brørup" <mb@smartsharesystems.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
	"Kamalakshitha Aligeri" <Kamalakshitha.Aligeri@arm.com>
Cc: nd <nd@arm.com>
Subject: RE: [RFC] mempool: zero-copy cache put bulk
Date: Sat, 5 Nov 2022 23:11:20 +0000	[thread overview]
Message-ID: <DBAPR08MB58140B597479021ED58A9185983A9@DBAPR08MB5814.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <98CBD80474FA8B44BF855DF32C47DC35D87489@smartserver.smartshare.dk>

+ Akshitha, who is working on a similar patch

A few comments inline.

> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Saturday, November 5, 2022 8:40 AM
> To: dev@dpdk.org; olivier.matz@6wind.com;
> andrew.rybchenko@oktetlabs.ru; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>
> Subject: [RFC] mempool: zero-copy cache put bulk
> 
> Zero-copy access to the mempool cache is beneficial for PMD performance,
> and must be provided by the mempool library to fix [Bug 1052] without a
> performance regression.
> 
> [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
> 
> 
> This RFC offers a conceptual zero-copy put function, where the application
> promises to store some objects, and in return gets an address where to store
> them.
> 
> I would like some early feedback.
> 
> Notes:
> * Allowing the 'cache' parameter to be NULL, and getting it from the
> mempool instead, was inspired by rte_mempool_cache_flush().
I am not sure why the 'cache' parameter is required for this API. This API should take the mempool as the parameter.

We have based our API on 'rte_mempool_do_generic_put' and removed the 'cache' parameter. On success, this new API returns the pointer to memory where the objects are copied. On failure, it returns NULL and the caller has to call 'rte_mempool_ops_enqueue_bulk'. Alternatively, the new API could do this as well, so the PMD would not need to do anything if it gets a NULL pointer.

We should think about providing a similar API on the Rx side to keep it symmetric.

> * Asserting that the 'mp' parameter is not NULL is not done by other
> functions, so I omitted it here too.
> 
> NB: Please ignore formatting. Also, this code has not even been compile
> tested.
We are a little bit ahead: we have tested the changes with the i40e PF PMD, written unit test cases, and are going through internal review. We will send out an RFC on Monday.

> 
> /**
>  * Promise to put objects in a mempool via zero-copy access to a user-owned
> mempool cache.
>  *
>  * @param cache
>  *   A pointer to the mempool cache.
>  * @param mp
>  *   A pointer to the mempool.
>  * @param n
>  *   The number of objects to be put in the mempool cache.
>  * @return
>  *   The pointer to where to put the objects in the mempool cache.
>  *   NULL on error, with rte_errno set appropriately.
>  */
> static __rte_always_inline void *
> rte_mempool_cache_put_bulk_promise(struct rte_mempool_cache *cache,
>         struct rte_mempool *mp,
>         unsigned int n)
> {
>     void **cache_objs;
> 
>     if (cache == NULL)
>         cache = rte_mempool_default_cache(mp, rte_lcore_id());
>     if (cache == NULL) {
>         rte_errno = EINVAL;
>         return NULL;
>     }
> 
>     rte_mempool_trace_cache_put_bulk_promise(cache, mp, n);
> 
>     /* The request itself is too big for the cache */
>     if (unlikely(n > cache->flushthresh)) {
>         rte_errno = EINVAL;
>         return NULL;
>     }
> 
>     /*
>      * The cache follows the following algorithm:
>      *   1. If the objects cannot be added to the cache without crossing
>      *      the flush threshold, flush the cache to the backend.
>      *   2. Add the objects to the cache.
>      */
> 
>     if (cache->len + n <= cache->flushthresh) {
>         cache_objs = &cache->objs[cache->len];
>         cache->len += n;
>     } else {
>         cache_objs = &cache->objs[0];
>         rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
>         cache->len = n;
>     }
> 
>     RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
>     RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
> 
>     return cache_objs;
> }
> 
> 
> Med venlig hilsen / Kind regards,
> -Morten Brørup
> 


Thread overview: 9+ messages
2022-11-05 13:40 Morten Brørup
2022-11-05 23:11 ` Honnappa Nagarahalli [this message]
2022-11-06  6:57   ` Morten Brørup
2022-11-09 17:57     ` Honnappa Nagarahalli
2022-11-09 20:36       ` Morten Brørup
2022-11-09 22:45         ` Honnappa Nagarahalli
2022-11-10 10:15           ` Morten Brørup
2022-11-10 11:00             ` Bruce Richardson
2022-11-11  4:24               ` Honnappa Nagarahalli
