DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Konstantin Ananyev" <konstantin.ananyev@huawei.com>,
	<olivier.matz@6wind.com>, <andrew.rybchenko@oktetlabs.ru>,
	<honnappa.nagarahalli@arm.com>, <kamalakshitha.aligeri@arm.com>,
	<bruce.richardson@intel.com>, <dev@dpdk.org>
Cc: <nd@arm.com>
Subject: RE: [PATCH v2] mempool cache: add zero-copy get and put functions
Date: Thu, 22 Dec 2022 18:55:59 +0100	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D875DF@smartserver.smartshare.dk> (raw)
In-Reply-To: <ddd910a4a5ea459aa9a63e2b89d9a96a@huawei.com>

> From: Konstantin Ananyev [mailto:konstantin.ananyev@huawei.com]
> Sent: Thursday, 22 December 2022 16.57
> 
> > Zero-copy access to mempool caches is beneficial for PMD performance,
> and
> > must be provided by the mempool library to fix [Bug 1052] without a
> > performance regression.
> 
> LGTM in general, thank you for working on it.
> Few comments below.
> 
> >
> > [Bug 1052]: https://bugs.dpdk.org/show_bug.cgi?id=1052
> >
> > v2:
> > * Fix checkpatch warnings.
> > * Fix missing registration of trace points.
> > * The functions are inline, so they don't go into the map file.
> > v1 changes from the RFC:
> > * Removed run-time parameter checks. (Honnappa)
> >   This is a hot fast path function, requiring correct application
> >   behaviour, i.e. function parameters must be valid.
> > * Added RTE_ASSERT for parameters instead.
> 
> RTE_ASSERT(n <= RTE_MEMPOOL_CACHE_MAX_SIZE);
> I think it is too excessive.
> Just:
> if (n > RTE_MEMPOOL_CACHE_MAX_SIZE) return NULL;
> seems much more convenient for the users here and
> more close to other mempool/ring API behavior.
> In terms of performance - I don’t think one extra comparison here
> would really count.

The insignificant performance degradation seems like a good tradeoff for making the function more generic.
I will update the function documentation and place the run-time check here, so both trace and stats reflect what happened:

	RTE_ASSERT(cache != NULL);
	RTE_ASSERT(mp != NULL);
-	RTE_ASSERT(n <= RTE_MEMPOOL_CACHE_MAX_SIZE);

	rte_mempool_trace_cache_zc_put_bulk(cache, mp, n);
+
+	if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE)) {
+		rte_errno = ENOSPC; // Or EINVAL?
+		return NULL;
+	}

	/* Increment stats now, adding in mempool always succeeds. */

I will probably also be able to come up with a solution for zc_get_bulk(), so both trace and stats make sense if called with n > RTE_MEMPOOL_CACHE_MAX_SIZE.
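
For reference, here is a minimal sketch of how the same run-time check could look in zc_get_bulk(), mirroring the zc_put_bulk() change above. (This is only a sketch, not the final patch code; the trace point name is assumed to follow the put variant's naming, and the errno value is the same open question as above.)

	RTE_ASSERT(cache != NULL);
	RTE_ASSERT(mp != NULL);

	rte_mempool_trace_cache_zc_get_bulk(cache, mp, n);

	/* Run-time check instead of RTE_ASSERT, as for zc_put_bulk(). */
	if (unlikely(n > RTE_MEMPOOL_CACHE_MAX_SIZE)) {
		rte_errno = ENOSPC; // Or EINVAL?
		return NULL;
	}

	/* ... fill the cache from the backend if needed, update the stats,
	 * and return a pointer into cache->objs ... */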

> 
> I also think it would be really good to add
> zc_(get|put)_bulk_start() and zc_(get|put)_bulk_finish(),
> where _start would check/fill the cache and return the pointer,
> while _finish would update cache->len.
> Similar to what we have for the rte_ring _peek_ API.
> That would allow extending this API usage - let's say, inside PMDs
> it could be used not only for the MBUF_FAST_FREE case, but also for the
> generic TX code path (one that has to call rte_mbuf_prefree()).

I don't see a use case for zc_get_start()/_finish().

And since the mempool cache is a stack, it would *require* that the application reads the array in reverse order. In that case, the function should not return a pointer to the array of objects, but a pointer to the top of the stack.
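
To illustrate the point, here is a purely hypothetical sketch of such a split get API; the function names and the consumer helpers are made up, and none of this is part of the patch:

	/* Hypothetical: expose the top n objects of the cache without popping them. */
	void **objs = rte_mempool_cache_zc_get_start(cache, mp, n);
	unsigned int used = 0;

	/* The cache is a LIFO stack, so a partial consumer must take the objects
	 * closest to the top, i.e. read the exposed array backwards. */
	while (used < n && consumer_has_room()) /* illustrative consumer */
		consume_object(objs[n - 1 - used++]);

	/* Hypothetical: pop only the objects that were actually consumed. */
	rte_mempool_cache_zc_get_finish(cache, used);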

So I prefer to stick with the single-function zero-copy get, i.e. without start/finish.


I do agree with you about the use case for zc_put_start()/_finish().

Unlike the ring, there is no need for locking with the mempool cache, so we can implement something much simpler:

Instead of requiring a call to both zc_put_start() and _finish() for every zero-copy burst, we could add a zc_put_rewind() function, to be called only when some of the objects were not added after all:

/* FIXME: Function documentation here. */
__rte_experimental
static __rte_always_inline void
rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,
		unsigned int n)
{
	RTE_ASSERT(cache != NULL);
	RTE_ASSERT(n <= cache->len);

	rte_mempool_trace_cache_zc_put_rewind(cache, n);

	/* Rewind stats. */
	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, -n);

	cache->len -= n;
}
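
For illustration, here is roughly how a generic TX completion path (the rte_mbuf_prefree() case you mention) could combine zc_put_bulk() and zc_put_rewind(). It is only a sketch: txep is a placeholder for the PMD's transmit entry array, and it assumes that all mbufs belong to the same mempool (multi-segment and multi-mempool handling omitted):

	void **cache_objs;
	unsigned int i, nb_free = 0;

	/* Reserve room for n objects in the cache. */
	cache_objs = rte_mempool_cache_zc_put_bulk(cache, mp, n);
	if (cache_objs == NULL)
		return; /* Fall back to the non-zero-copy put functions. */

	/* Generic TX path: not every mbuf can necessarily be freed right now. */
	for (i = 0; i < n; i++) {
		struct rte_mbuf *m = rte_pktmbuf_prefree_seg(txep[i].mbuf);
		if (m != NULL)
			cache_objs[nb_free++] = m;
	}

	/* Give back the slots that were reserved but not filled. */
	if (nb_free < n)
		rte_mempool_cache_zc_put_rewind(cache, n - nb_free);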

I have a strong preference for _rewind() over _start() and _finish(): in the full burst case, it touches the rte_mempool_cache structure only once, whereas splitting it up into _start() and _finish() touches the rte_mempool_cache structure both before and after copying the array of objects.

What do you think?

I am open to names other than _rewind(), so feel free to speak up if you have a better one.


> 
> >   Code for this is only generated if built with RTE_ENABLE_ASSERT.
> > * Removed fallback when 'cache' parameter is not set. (Honnappa)
> > * Chose the simple get function; i.e. do not move the existing
> objects in
> >   the cache to the top of the new stack, just leave them at the
> bottom.
> > * Renamed the functions. Other suggestions are welcome, of course. ;-
> )
> > * Updated the function descriptions.
> > * Added the functions to trace_fp and version.map.
> 
> Would be great to add some test-cases in app/test to cover this new
> API.

Good point. I will look at it.

BTW: Akshitha already has zc_put_bulk working in the i40e PMD.


Thread overview: 36+ messages
2022-11-05 13:19 [RFC]: mempool: zero-copy cache get bulk Morten Brørup
2022-11-07  9:19 ` Bruce Richardson
2022-11-07 14:32   ` Morten Brørup
2022-11-15 16:18 ` [PATCH] mempool cache: add zero-copy get and put functions Morten Brørup
2022-11-16 18:04 ` [PATCH v2] " Morten Brørup
2022-11-29 20:54   ` Kamalakshitha Aligeri
2022-11-30 10:21     ` Morten Brørup
2022-12-22 15:57   ` Konstantin Ananyev
2022-12-22 17:55     ` Morten Brørup [this message]
2022-12-23 16:58       ` Konstantin Ananyev
2022-12-24 12:17         ` Morten Brørup
2022-12-24 11:49 ` [PATCH v3] " Morten Brørup
2022-12-24 11:55 ` [PATCH v4] " Morten Brørup
2022-12-27  9:24   ` Andrew Rybchenko
2022-12-27 10:31     ` Morten Brørup
2022-12-27 15:17 ` [PATCH v5] " Morten Brørup
2023-01-22 20:34   ` Konstantin Ananyev
2023-01-22 21:17     ` Morten Brørup
2023-01-23 11:53       ` Konstantin Ananyev
2023-01-23 12:23         ` Morten Brørup
2023-01-23 12:52           ` Konstantin Ananyev
2023-01-23 14:30           ` Bruce Richardson
2023-01-24  1:53             ` Kamalakshitha Aligeri
2023-02-09 14:39 ` [PATCH v6] " Morten Brørup
2023-02-09 14:52 ` [PATCH v7] " Morten Brørup
2023-02-09 14:58 ` [PATCH v8] " Morten Brørup
2023-02-10  8:35   ` fengchengwen
2023-02-12 19:56   ` Honnappa Nagarahalli
2023-02-12 23:15     ` Morten Brørup
2023-02-13  4:29       ` Honnappa Nagarahalli
2023-02-13  9:30         ` Morten Brørup
2023-02-13  9:37         ` Olivier Matz
2023-02-13 10:25           ` Morten Brørup
2023-02-14 14:16             ` Andrew Rybchenko
2023-02-13 12:24 ` [PATCH v9] " Morten Brørup
2023-02-13 14:33   ` Kamalakshitha Aligeri
