DPDK patches and discussions
From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Olivier Matz" <olivier.matz@6wind.com>
Cc: <andrew.rybchenko@oktetlabs.ru>,
	"Kamalakshitha Aligeri" <Kamalakshitha.Aligeri@arm.com>,
	<bruce.richardson@intel.com>, <konstantin.ananyev@huawei.com>,
	<dev@dpdk.org>, "nd" <nd@arm.com>, <david.marchand@redhat.com>,
	"Honnappa Nagarahalli" <Honnappa.Nagarahalli@arm.com>
Subject: RE: [PATCH v8] mempool cache: add zero-copy get and put functions
Date: Mon, 13 Feb 2023 11:25:00 +0100	[thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35D87734@smartserver.smartshare.dk> (raw)
In-Reply-To: <Y+oEvWDEwWcR7wLp@platinum>

> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, 13 February 2023 10.37
> 
> Hello,
> 
> Thank you for this work, and sorry for the late feedback too.

Better late than never. And it's a core library, so important to get it right!

> 
> On Mon, Feb 13, 2023 at 04:29:51AM +0000, Honnappa Nagarahalli wrote:
> > <snip>
> >
> > > > > +/**
> > > > > + * @internal used by rte_mempool_cache_zc_put_bulk() and
> > > > > + * rte_mempool_do_generic_put().
> > > > > + *
> > > > > + * Zero-copy put objects in a mempool cache backed by the
> > > > > + * specified mempool.
> > > > > + *
> > > > > + * @param cache
> > > > > + *   A pointer to the mempool cache.
> > > > > + * @param mp
> > > > > + *   A pointer to the mempool.
> > > > > + * @param n
> > > > > + *   The number of objects to be put in the mempool cache.
> > > > > + * @return
> > > > > + *   The pointer to where to put the objects in the mempool
> > > > > + *   cache.
> > > > > + *   NULL if the request itself is too big for the cache, i.e.
> > > > > + *   exceeds the cache flush threshold.
> > > > > + */
> > > > > +static __rte_always_inline void **
> > > > > +__rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
> > > > > +		struct rte_mempool *mp,
> > > > > +		unsigned int n)
> > > > > +{
> > > > > +	void **cache_objs;
> > > > > +
> > > > > +	RTE_ASSERT(cache != NULL);
> > > > > +	RTE_ASSERT(mp != NULL);
> > > > > +
> > > > > +	if (n <= cache->flushthresh - cache->len) {
> 
> The previous code was doing this test instead:
> 
> if (cache->len + n <= cache->flushthresh)
> 
> I know there is an invariant asserting that cache->len <= cache->flushthresh,
> so there is no real issue, but I tend to say that it is good practice
> to avoid subtractions on unsigned values, to avoid the risk of wrapping.
> 
> I also think the previous test was a bit more readable.

I agree with you, but I didn't object to Andrew's recommendation to change it, so I made the change.

I will change it back. Konstantin, I hope you don't mind. :-)

[...]

> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: This API may change, or be removed,
> > > > > + * without prior notice.
> > > > > + *
> > > > > + * Zero-copy put objects in a mempool cache backed by the
> > > > > + * specified mempool.
> 
> I think we should document the differences and advantage of using this
> function over the standard version, explaining which copy is avoided,
> why it is faster, ...
> 
> Also, we should say that once this function is called, the user has
> to copy the objects to the cache.
> 

I agree, the function descriptions could be more verbose.

If we want to get this feature into DPDK now, we can postpone the description improvements to a later patch.

[...]

> > Earlier there was a discussion on the API name.
> > IMO, we should keep the API names similar to those in the ring library.
> > This would provide consistency across the libraries.
> > There were some concerns expressed about PMDs having to call 2 APIs.
> > I do not think changing to 2 APIs will have any perf impact.
> 
> I'm not really convinced by the API names either. Again, sorry, I know
> this comment arrives after the battle.
> 
> Your proposal is:
> 
> /* Zero-copy put objects in a mempool cache backed by the specified
> mempool. */
> rte_mempool_cache_zc_put_bulk(cache, mp, n)
> 
> /* Zero-copy get objects from a mempool cache backed by the specified
> mempool. */
> rte_mempool_cache_zc_get_bulk(cache, mp, n)
> 
> Here are some observations:
> 
> - This was said in the discussion previously, but the functions do not
>   really get or put objects in the cache. Instead, they prepare the
>   cache (filling it or flushing it if needed) and update its length so
>   that the user can do the effective copy.

Can be fixed by improving function descriptions.

> 
> - The "_cache" is superfluous for me: these functions do not deal more
>   with the cache than the non zero-copy version

I have been thinking of these as "mempool cache" APIs.

I don't mind getting rid of "_cache" in their names, if we agree that they are "mempool" functions, instead of "mempool cache" functions.

> 
> - The order of the parameters is (cache, mp, n) while the other
>   functions that take a mempool and a cache as parameters have the mp
>   first (see the _generic versions).

The order of the parameters was due to considering these as "mempool cache" functions, so I followed the convention for an existing "mempool cache" function:

rte_mempool_cache_flush(struct rte_mempool_cache *cache,
		struct rte_mempool *mp);

If we instead consider them as simple "mempool" functions, I agree with you about the parameter ordering.

So, what does the community think... Are these "mempool cache" functions, or just "mempool" functions?

> 
> - The "_bulk" is indeed present on other functions, but not all (the
>   generic version does not have it); I'm not sure it is absolutely
>   required.

The mempool library offers both single-object and bulk functions, so the function names must include "_bulk".

> 
> What do you think about these API below?
> 
> rte_mempool_prepare_zc_put(mp, n, cache)
> rte_mempool_prepare_zc_get(mp, n, cache)

I initially used "prepare" in the names, but since we don't have accompanying "commit" functions, I decided against "prepare" to avoid confusion. (Any SQL developer will probably agree with me on this.)

> 
> >
> > Also, what is the use case for the 'rewind' API?
> 
> +1
> 
> I have the same feeling that rewind() is not required now. It can be
> added later if we find a use-case.
> 
> In case we want to keep it, I think we need to better specify in the
> API comments under which unique conditions the function can be called
> (i.e. after a call to rte_mempool_prepare_zc_put() with the same number
> of objects, given no other operations were done on the mempool in
> between). A call outside of these conditions has undefined behavior.

Please refer to my answer to Honnappa on this topic.



Thread overview: 36+ messages
2022-11-05 13:19 [RFC]: mempool: zero-copy cache get bulk Morten Brørup
2022-11-07  9:19 ` Bruce Richardson
2022-11-07 14:32   ` Morten Brørup
2022-11-15 16:18 ` [PATCH] mempool cache: add zero-copy get and put functions Morten Brørup
2022-11-16 18:04 ` [PATCH v2] " Morten Brørup
2022-11-29 20:54   ` Kamalakshitha Aligeri
2022-11-30 10:21     ` Morten Brørup
2022-12-22 15:57   ` Konstantin Ananyev
2022-12-22 17:55     ` Morten Brørup
2022-12-23 16:58       ` Konstantin Ananyev
2022-12-24 12:17         ` Morten Brørup
2022-12-24 11:49 ` [PATCH v3] " Morten Brørup
2022-12-24 11:55 ` [PATCH v4] " Morten Brørup
2022-12-27  9:24   ` Andrew Rybchenko
2022-12-27 10:31     ` Morten Brørup
2022-12-27 15:17 ` [PATCH v5] " Morten Brørup
2023-01-22 20:34   ` Konstantin Ananyev
2023-01-22 21:17     ` Morten Brørup
2023-01-23 11:53       ` Konstantin Ananyev
2023-01-23 12:23         ` Morten Brørup
2023-01-23 12:52           ` Konstantin Ananyev
2023-01-23 14:30           ` Bruce Richardson
2023-01-24  1:53             ` Kamalakshitha Aligeri
2023-02-09 14:39 ` [PATCH v6] " Morten Brørup
2023-02-09 14:52 ` [PATCH v7] " Morten Brørup
2023-02-09 14:58 ` [PATCH v8] " Morten Brørup
2023-02-10  8:35   ` fengchengwen
2023-02-12 19:56   ` Honnappa Nagarahalli
2023-02-12 23:15     ` Morten Brørup
2023-02-13  4:29       ` Honnappa Nagarahalli
2023-02-13  9:30         ` Morten Brørup
2023-02-13  9:37         ` Olivier Matz
2023-02-13 10:25           ` Morten Brørup [this message]
2023-02-14 14:16             ` Andrew Rybchenko
2023-02-13 12:24 ` [PATCH v9] " Morten Brørup
2023-02-13 14:33   ` Kamalakshitha Aligeri
