From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	"mb@smartsharesystems.com" <mb@smartsharesystems.com>,
	"olivier.matz@6wind.com" <olivier.matz@6wind.com>,
	"konstantin.v.ananyev@yandex.ru" <konstantin.v.ananyev@yandex.ru>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>,
	Kamalakshitha Aligeri <Kamalakshitha.Aligeri@arm.com>,
	"Wathsala Wathawana Vithanage" <wathsala.vithanage@arm.com>,
	nd <nd@arm.com>, nd <nd@arm.com>
Subject: RE: [PATCH 4/4] mempool: use lcore API to check if lcore ID is valid
Date: Fri, 10 Mar 2023 14:06:20 +0000	[thread overview]
Message-ID: <f96a0c94573f4252b192fd89fd2e7673@huawei.com> (raw)
In-Reply-To: <DBAPR08MB58141A79477BC4B98B8C1F8998BA9@DBAPR08MB5814.eurprd08.prod.outlook.com>



> 
> <snip>
> >
> >
> >
> > >
> > > Use lcore API to check if the lcore ID is valid. The runtime check
> > > does not add much value.
> >
> > From my perspective it adds real value:
> > only threads with a valid lcore id have their own default mempool cache.
> The threads call 'rte_lcore_id()' to obtain their lcore_id, which ensures the lcore_id is already valid.
> Why do we need to check it again in rte_mempool_default_cache?
> Why would a thread use an incorrect lcore_id?

rte_lcore_id() just returns the value of the per-thread variable RTE_PER_LCORE(_lcore_id),
without any checking. For non-EAL threads this value is UINT32_MAX (LCORE_ID_ANY).
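For reference, this is roughly what the relevant definitions look like
(a simplified sketch of rte_per_lcore.h/rte_lcore.h; the exact macro plumbing
is omitted):

/* per-thread variable access, simplified */
#define RTE_PER_LCORE(name) (per_lcore_##name)

/* value of _lcore_id for unregistered (non-EAL) threads */
#define LCORE_ID_ANY UINT32_MAX

static inline unsigned
rte_lcore_id(void)
{
        /* plain read of the per-thread variable -- no validation at all */
        return RTE_PER_LCORE(_lcore_id);
}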
 
> >
> > > Hence use assert to validate
> > > the lcore ID.
> >
> > I wonder why.
> > What is wrong with a thread trying to get the default mempool cache?
> In what cases would a thread not know that it is not an EAL thread and call rte_mempool_default_cache with a random lcore_id?
> Since this API is called in the data plane, it makes sense to remove any additional validations.

Why is that?
I believe this API has to be generic and safe to call from any thread.
Let's look, for example, at:
static __rte_always_inline void
rte_mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
                     unsigned int n)
{
        struct rte_mempool_cache *cache;
        cache = rte_mempool_default_cache(mp, rte_lcore_id());
        rte_mempool_trace_put_bulk(mp, obj_table, n, cache);
        rte_mempool_generic_put(mp, obj_table, n, cache);
} 

Right now it is perfectly valid to invoke it from any thread.
With the change you propose, invoking mempool_put() from a non-EAL thread
will cause either a crash or silent memory corruption.
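
To make the failure mode concrete, here is a rough sketch of an application
pthread that was never registered with EAL (hypothetical example; the worker
function and its setup are made up for illustration):

/* Hypothetical worker running in a plain pthread that never called
 * rte_thread_register(), so rte_lcore_id() == LCORE_ID_ANY (UINT32_MAX). */
static void *
worker(void *arg)
{
        struct rte_mempool *mp = arg;
        void *obj;

        if (rte_mempool_get(mp, &obj) == 0) {
                /* ... use obj ... */

                /* Today: rte_mempool_default_cache() sees
                 * lcore_id >= RTE_MAX_LCORE, returns NULL, and the put
                 * falls back to the mempool backend -- slower, but safe.
                 * With the patch below: the bounds check is gone, so
                 * &mp->local_cache[UINT32_MAX] is used as the cache
                 * pointer -- out-of-bounds access, i.e. a crash or
                 * silent corruption. */
                rte_mempool_put(mp, obj);
        }
        return NULL;
}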
 
> > That would change existing behavior and in general seems wrong to me.
> Agreed that this changes existing behavior. We can discuss this once we agree/disagree on the above.
> 
> > So I am strongly opposed.
> >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Reviewed-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > >  lib/mempool/rte_mempool.h | 5 ++---
> > >  1 file changed, 2 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 009bd10215..00c5aa961b 100644
> > > --- a/lib/mempool/rte_mempool.h
> > > +++ b/lib/mempool/rte_mempool.h
> > > @@ -1314,10 +1314,9 @@ rte_mempool_cache_free(struct
> > rte_mempool_cache
> > > *cache);  static __rte_always_inline struct rte_mempool_cache *
> > > rte_mempool_default_cache(struct rte_mempool *mp, unsigned lcore_id)
> > > {
> > > -	if (mp->cache_size == 0)
> > > -		return NULL;
> > > +	RTE_ASSERT(rte_lcore_id_is_valid(lcore_id));
> > >
> > > -	if (lcore_id >= RTE_MAX_LCORE)
> > > +	if (mp->cache_size == 0)
> > >  		return NULL;
> > >
> > >  	rte_mempool_trace_default_cache(mp, lcore_id,
> > > --
> > > 2.25.1
> > >
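
One more note on the patch itself: RTE_ASSERT() only expands to a real check
when the build defines RTE_ENABLE_ASSERT, so in a normal (non-debug) build the
proposed validation disappears entirely. Roughly (simplified from rte_debug.h):

#ifdef RTE_ENABLE_ASSERT
#define RTE_ASSERT(exp) RTE_VERIFY(exp)   /* panics if exp is false */
#else
#define RTE_ASSERT(exp) do {} while (0)   /* compiled out: no check at all */
#endif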



Thread overview: 14+ messages
2023-03-09  4:57 [PATCH 0/4] Small corrections in mempool Honnappa Nagarahalli
2023-03-09  4:57 ` [PATCH 1/4] mempool: clarify mempool cache flush API behavior Honnappa Nagarahalli
2023-06-07 10:03   ` Morten Brørup
2023-03-09  4:57 ` [PATCH 2/4] mempool: clarify comments for mempool cache implementation Honnappa Nagarahalli
2023-06-07 10:10   ` Morten Brørup
2024-10-04 21:08     ` Stephen Hemminger
2023-03-09  4:57 ` [PATCH 3/4] eal: add API to check if lcore id is valid Honnappa Nagarahalli
2023-06-07 10:19   ` Morten Brørup
2023-06-07 15:05     ` Stephen Hemminger
2023-03-09  4:57 ` [PATCH 4/4] mempool: use lcore API to check if lcore ID " Honnappa Nagarahalli
2023-03-09  9:39   ` Konstantin Ananyev
2023-03-10  4:01     ` Honnappa Nagarahalli
2023-03-10 14:06       ` Konstantin Ananyev [this message]
2023-06-07  9:35 ` [PATCH 0/4] Small corrections in mempool Thomas Monjalon
