From: "Morten Brørup" <mb@smartsharesystems.com>
To: <dev@dpdk.org>, <olivier.matz@6wind.com>,
<andrew.rybchenko@oktetlabs.ru>, <honnappa.nagarahalli@arm.com>
Subject: [RFC] mempool: zero-copy cache put bulk
Date: Sat, 5 Nov 2022 14:40:10 +0100 [thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87489@smartserver.smartshare.dk> (raw)
Zero-copy access to the mempool cache is beneficial for PMD performance, and must be provided by the mempool library to fix [Bug 1052] without a performance regression.
[Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
This RFC offers a conceptual zero-copy put function: the application promises to store a number of objects, and in return gets a pointer to where it must store them.
I would like some early feedback.
Notes:
* Allowing the 'cache' parameter to be NULL, and getting it from the mempool instead, was inspired by rte_mempool_cache_flush().
* Other functions do not assert that the 'mp' parameter is non-NULL, so I omitted that check here too.
NB: Please ignore formatting. Also, this code has not even been compile tested.
/**
 * Promise to put objects in a mempool via zero-copy access to a
 * user-owned mempool cache.
 *
 * @param cache
 *   A pointer to the mempool cache. If NULL, the default cache of the
 *   mempool is used.
 * @param mp
 *   A pointer to the mempool.
 * @param n
 *   The number of objects to be put in the mempool cache.
 * @return
 *   The pointer to where to put the objects in the mempool cache.
 *   NULL on error, with rte_errno set appropriately.
 */
static __rte_always_inline void *
rte_mempool_cache_put_bulk_promise(struct rte_mempool_cache *cache,
		struct rte_mempool *mp, unsigned int n)
{
	void **cache_objs;

	if (cache == NULL)
		cache = rte_mempool_default_cache(mp, rte_lcore_id());
	if (cache == NULL) {
		rte_errno = EINVAL;
		return NULL;
	}

	rte_mempool_trace_cache_put_bulk_promise(cache, mp, n);

	/* The request itself is too big for the cache */
	if (unlikely(n > cache->flushthresh)) {
		rte_errno = EINVAL;
		return NULL;
	}

	/*
	 * The cache follows the following algorithm:
	 *   1. If the objects cannot be added to the cache without crossing
	 *      the flush threshold, flush the cache to the backend.
	 *   2. Add the objects to the cache.
	 */
	if (cache->len + n <= cache->flushthresh) {
		cache_objs = &cache->objs[cache->len];
		cache->len += n;
	} else {
		cache_objs = &cache->objs[0];
		rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
		cache->len = n;
	}

	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);

	return cache_objs;
}
Med venlig hilsen / Kind regards,
-Morten Brørup