Date: Wed, 4 Nov 2020 22:03:52 +0100
From: Olivier Matz <olivier.matz@6wind.com>
To: Honnappa Nagarahalli
Cc: dev@dpdk.org, nd
Message-ID: <20201104210352.GA6890@arsenic.home>
References: <20201104170425.8882-1-olivier.matz@6wind.com>
Subject: Re: [dpdk-dev] [PATCH] test/mcslock: remove unneeded per-lcore copy

Hi Honnappa,

On Wed, Nov 04, 2020 at 05:57:19PM +0000, Honnappa Nagarahalli wrote:
> >
> > Each core already comes with its local storage for mcslock (on its
> > stack), therefore there is no need to define an additional per-lcore
> > mcslock.
> >
> > Fixes: 32dcb9fd2a22 ("test/mcslock: add MCS queued lock unit test")
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> >  app/test/test_mcslock.c | 16 ++++++----------
> >  1 file changed, 6 insertions(+), 10 deletions(-)
> >
> > diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
> > index fbca78707d..80eaecc90a 100644
> > --- a/app/test/test_mcslock.c
> > +++ b/app/test/test_mcslock.c
> > @@ -37,10 +37,6 @@
> >   * lock multiple times.
> >   */
> >
> > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_me);
> > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_try_me);
> > -RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_perf_me);
> > -
> >  rte_mcslock_t *p_ml;
> >  rte_mcslock_t *p_ml_try;
> >  rte_mcslock_t *p_ml_perf;
> > @@ -53,7 +49,7 @@ static int
> >  test_mcslock_per_core(__rte_unused void *arg)
> >  {
> >  	/* Per core me node. */
> > -	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
> > +	rte_mcslock_t ml_me;
> These variables are modified by other threads. IMO, it is better to
> keep them global (and not on the stack). From that perspective, I
> think we should be taking the address of the per lcore variable.
> For ex:
> rte_mcslock_t *ml_me = &RTE_PER_LCORE(_ml_me);

In my understanding, the only case where another thread modifies our
local variable is when the other thread releases the lock we are
waiting for.

I can't see how it could cause an issue to have the locks on the
stack. Am I missing something?

Thanks,
Olivier

> >
> >  	rte_mcslock_lock(&p_ml, &ml_me);
> >  	printf("MCS lock taken on core %u\n", rte_lcore_id());
> > @@ -77,7 +73,7 @@ load_loop_fn(void *func_param)
> >  	const unsigned int lcore = rte_lcore_id();
> >
> >  	/**< Per core me node. */
> > -	rte_mcslock_t ml_perf_me = RTE_PER_LCORE(_ml_perf_me);
> > +	rte_mcslock_t ml_perf_me;
> >
> >  	/* wait synchro */
> >  	while (rte_atomic32_read(&synchro) == 0)
> > @@ -151,8 +147,8 @@ static int
> >  test_mcslock_try(__rte_unused void *arg)
> >  {
> >  	/**< Per core me node. */
> > -	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
> > -	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
> > +	rte_mcslock_t ml_me;
> > +	rte_mcslock_t ml_try_me;
> >
> >  	/* Locked ml_try in the main lcore, so it should fail
> >  	 * when trying to lock it in the worker lcore.
> > @@ -178,8 +174,8 @@ test_mcslock(void)
> >  	int i;
> >
> >  	/* Define per core me node. */
> > -	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
> > -	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
> > +	rte_mcslock_t ml_me;
> > +	rte_mcslock_t ml_try_me;
> >
> >  	/*
> >  	 * Test mcs lock & unlock on each core
> > --
> > 2.25.1
>
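
For readers following the exchange above: the safety argument hinges on
how an MCS lock hands off ownership. Below is a minimal sketch of the
algorithm in C11 atomics. The names (mcs_node_t, mcs_lock, mcs_unlock,
tail) are illustrative, not DPDK's rte_mcslock API, and the memory
ordering is deliberately simplified (sequentially consistent) compared
to what rte_mcslock.h actually does; only the node life cycle matters
here.

	/* Minimal MCS lock sketch (C11 atomics). Illustrative only;
	 * NOT the rte_mcslock implementation. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	typedef struct mcs_node {
		_Atomic(struct mcs_node *) next;
		atomic_bool locked;
	} mcs_node_t;

	static void
	mcs_lock(_Atomic(mcs_node_t *) *tail, mcs_node_t *me)
	{
		atomic_store(&me->next, NULL);
		atomic_store(&me->locked, true);

		/* Atomically append our node to the waiter queue. */
		mcs_node_t *prev = atomic_exchange(tail, me);
		if (prev == NULL)
			return;	/* queue was empty: lock acquired */

		/* Link behind the previous tail, then spin on OUR OWN
		 * node. The releaser's store to me->locked is the only
		 * write another thread ever makes to this node, and it
		 * can only happen while we are still spinning here,
		 * i.e. while the stack frame holding the node is
		 * guaranteed to be alive. */
		atomic_store(&prev->next, me);
		while (atomic_load(&me->locked))
			;
	}

	static void
	mcs_unlock(_Atomic(mcs_node_t *) *tail, mcs_node_t *me)
	{
		mcs_node_t *succ = atomic_load(&me->next);

		if (succ == NULL) {
			/* No visible successor: try to mark the queue
			 * empty again. */
			mcs_node_t *expected = me;
			if (atomic_compare_exchange_strong(tail,
					&expected, NULL))
				return;	/* nobody was waiting */
			/* A waiter swapped the tail but has not linked
			 * itself yet; wait for its store to me->next. */
			while ((succ = atomic_load(&me->next)) == NULL)
				;
		}

		/* Hand off: write into the successor's node, which may
		 * live on the successor's stack. Safe, because the
		 * successor cannot leave mcs_lock() before this store. */
		atomic_store(&succ->locked, false);
	}

If that reasoning carries over to rte_mcslock, a stack-allocated node
such as the patch's ml_me has a sufficient lifetime: a waiter cannot
return from lock (and so cannot pop the frame holding its node) before
the owner's single store to it, and once unlock returns no other thread
retains a pointer to the node. The &RTE_PER_LCORE(_ml_me) variant
suggested above would also work; the question in the thread is whether
it is needed for correctness.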