DPDK patches and discussions
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: Olivier Matz <olivier.matz@6wind.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>,
	Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH] test/mcslock: remove unneeded per-lcore copy
Date: Wed, 4 Nov 2020 21:20:07 +0000
Message-ID: <DBAPR08MB581492519F68564244E2D43698EF0@DBAPR08MB5814.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <20201104210352.GA6890@arsenic.home>

<snip>

> > >
> > > Each core already comes with its local storage for mcslock (in its
> > > stack), therefore there is no need to define an additional per-lcore
> > > mcslock.
> > >
> > > Fixes: 32dcb9fd2a22 ("test/mcslock: add MCS queued lock unit test")
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>

> > > ---
> > >  app/test/test_mcslock.c | 16 ++++++----------
> > >  1 file changed, 6 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
> > > index fbca78707d..80eaecc90a 100644
> > > --- a/app/test/test_mcslock.c
> > > +++ b/app/test/test_mcslock.c
> > > @@ -37,10 +37,6 @@
> > >   *   lock multiple times.
> > >   */
> > >
> > > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_me);
> > > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_try_me);
> > > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_perf_me);
> > > -
> > >  rte_mcslock_t *p_ml;
> > >  rte_mcslock_t *p_ml_try;
> > >  rte_mcslock_t *p_ml_perf;
> > > @@ -53,7 +49,7 @@ static int
> > >  test_mcslock_per_core(__rte_unused void *arg)
> > >  {
> > >  	/* Per core me node. */
> > > -	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
> > > +	rte_mcslock_t ml_me;
> > These variables are modified by other threads. IMO, it is better to keep
> > them global (and not on the stack). From that perspective, I think we
> > should be taking the address of the per-lcore variable. For example:
> > rte_mcslock_t *ml_me = &RTE_PER_LCORE(_ml_me);
> 
> In my understanding, the only case where another thread modifies our local
> variable is when the other thread releases the lock we are waiting for. I
> can't see how it could cause an issue to have the locks on the stack. Am I
> missing something?
Agree, it was just my personal preference. I am fine with the patch.

> 
> Thanks,
> Olivier
> 
> 
> >
> > >
> > >  	rte_mcslock_lock(&p_ml, &ml_me);
> > >  	printf("MCS lock taken on core %u\n", rte_lcore_id()); @@ -77,7
> > > +73,7 @@ load_loop_fn(void *func_param)
> > >  	const unsigned int lcore = rte_lcore_id();
> > >
> > >  	/**< Per core me node. */
> > > -	rte_mcslock_t ml_perf_me = RTE_PER_LCORE(_ml_perf_me);
> > > +	rte_mcslock_t ml_perf_me;
> > >
> > >  	/* wait synchro */
> > >  	while (rte_atomic32_read(&synchro) == 0)
> > > @@ -151,8 +147,8 @@ static int
> > >  test_mcslock_try(__rte_unused void *arg)
> > >  {
> > >  	/**< Per core me node. */
> > > -	rte_mcslock_t ml_me     = RTE_PER_LCORE(_ml_me);
> > > -	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
> > > +	rte_mcslock_t ml_me;
> > > +	rte_mcslock_t ml_try_me;
> > >
> > >  	/* Locked ml_try in the main lcore, so it should fail
> > >  	 * when trying to lock it in the worker lcore.
> > > @@ -178,8 +174,8 @@ test_mcslock(void)
> > >  	int i;
> > >
> > >  	/* Define per core me node. */
> > > -	rte_mcslock_t ml_me     = RTE_PER_LCORE(_ml_me);
> > > -	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
> > > +	rte_mcslock_t ml_me;
> > > +	rte_mcslock_t ml_try_me;
> > >
> > >  	/*
> > >  	 * Test mcs lock & unlock on each core
> > > --
> > > 2.25.1
> >

Thread overview:
2020-11-04 17:04 Olivier Matz
2020-11-04 17:57 ` Honnappa Nagarahalli
2020-11-04 21:03   ` Olivier Matz
2020-11-04 21:20     ` Honnappa Nagarahalli [this message]
2021-01-15 15:28 ` David Marchand
