From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Hunt
To: Lukasz Wojciechowski, Bruce Richardson
Cc: dev@dpdk.org, stable@dpdk.org
Date: Fri, 9 Oct 2020 13:23:30 +0100
Subject: Re: [dpdk-stable] [PATCH v5 11/15] test/distributor: replace delays
 with spin locks
References: <20200925224209.12173-1-l.wojciechow@partner.samsung.com>
 <20201008052323.11547-1-l.wojciechow@partner.samsung.com>
 <20201008052323.11547-12-l.wojciechow@partner.samsung.com>
In-Reply-To: <20201008052323.11547-12-l.wojciechow@partner.samsung.com>
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:68.0)
 Gecko/20100101 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
List-Id: patches for DPDK stable branches

On 8/10/2020 6:23 AM, Lukasz Wojciechowski wrote:
> Instead of making delays in test code and waiting
> for worker hopefully to reach proper states,
> synchronize worker shutdown test cases with spin lock
> on atomic variable.
>
> Fixes: c0de0eb82e40 ("distributor: switch over to new API")
> Cc: david.hunt@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Lukasz Wojciechowski
> ---
>  app/test/test_distributor.c | 19 +++++++++++++++++--
>  1 file changed, 17 insertions(+), 2 deletions(-)
>
> diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
> index 838a67515..1e0a079ff 100644
> --- a/app/test/test_distributor.c
> +++ b/app/test/test_distributor.c
> @@ -27,6 +27,7 @@ struct worker_params worker_params;
>  /* statics - all zero-initialized by default */
>  static volatile int quit;      /**< general quit variable for all threads */
>  static volatile int zero_quit; /**< var for when we just want thr0 to quit*/
> +static volatile int zero_sleep; /**< thr0 has quit basic loop and is sleeping*/
>  static volatile unsigned worker_idx;
>  static volatile unsigned zero_idx;
>
> @@ -376,8 +377,10 @@ handle_work_for_shutdown_test(void *arg)
>  	/* for worker zero, allow it to restart to pick up last packet
>  	 * when all workers are shutting down.
>  	 */
> +	__atomic_store_n(&zero_sleep, 1, __ATOMIC_RELEASE);
>  	while (zero_quit)
>  		usleep(100);
> +	__atomic_store_n(&zero_sleep, 0, __ATOMIC_RELEASE);
>
>  	num = rte_distributor_get_pkt(d, id, buf, NULL, 0);
>
> @@ -445,7 +448,12 @@ sanity_test_with_worker_shutdown(struct worker_params *wp,
>
>  	/* flush the distributor */
>  	rte_distributor_flush(d);
> -	rte_delay_us(10000);
> +	while (!__atomic_load_n(&zero_sleep, __ATOMIC_ACQUIRE))
> +		rte_distributor_flush(d);
> +
> +	zero_quit = 0;
> +	while (__atomic_load_n(&zero_sleep, __ATOMIC_ACQUIRE))
> +		rte_delay_us(100);
>
>  	for (i = 0; i < rte_lcore_count() - 1; i++)
>  		printf("Worker %u handled %u packets\n", i,
> @@ -505,9 +513,14 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
>  	/* flush the distributor */
>  	rte_distributor_flush(d);
>
> -	rte_delay_us(10000);
> +	while (!__atomic_load_n(&zero_sleep, __ATOMIC_ACQUIRE))
> +		rte_distributor_flush(d);
>
>  	zero_quit = 0;
> +
> +	while (__atomic_load_n(&zero_sleep, __ATOMIC_ACQUIRE))
> +		rte_delay_us(100);
> +
>  	for (i = 0; i < rte_lcore_count() - 1; i++)
>  		printf("Worker %u handled %u packets\n", i,
>  			__atomic_load_n(&worker_stats[i].handled_packets,
> @@ -615,6 +628,8 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
>  	quit = 0;
>  	worker_idx = 0;
>  	zero_idx = RTE_MAX_LCORE;
> +	zero_quit = 0;
> +	zero_sleep = 0;
>  }
>
>  static int

Acked-by: David Hunt
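
For reference, the pattern the patch applies is a release/acquire handshake:
worker zero publishes "I am parked in the sleep loop" with a release store to
zero_sleep, and the main lcore spins on acquire loads (flushing the
distributor in the meantime) instead of guessing with a fixed
rte_delay_us(10000). Below is a minimal standalone sketch of the same
handshake using C11 atomics and pthreads rather than the DPDK test harness;
all names (quit_flag, sleep_flag, worker) are illustrative, not taken from
the patch.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative counterparts of the patch's zero_quit/zero_sleep flags. */
static atomic_int quit_flag;  /* main -> worker: stay parked while set   */
static atomic_int sleep_flag; /* worker -> main: "I reached the loop"    */

static void *worker(void *arg)
{
	(void)arg;
	/* Release store: everything the worker did before parking is
	 * visible to any thread that sees sleep_flag == 1 via acquire. */
	atomic_store_explicit(&sleep_flag, 1, memory_order_release);
	while (atomic_load_explicit(&quit_flag, memory_order_acquire))
		usleep(100);
	atomic_store_explicit(&sleep_flag, 0, memory_order_release);
	return NULL;
}

int main(void)
{
	pthread_t t;

	/* Relaxed is enough here: pthread_create synchronizes with the
	 * start of the new thread. */
	atomic_store_explicit(&quit_flag, 1, memory_order_relaxed);
	if (pthread_create(&t, NULL, worker, NULL) != 0)
		return 1;

	/* Spin until the worker confirms it is parked, rather than
	 * sleeping a fixed 10 ms and hoping it got there in time. */
	while (!atomic_load_explicit(&sleep_flag, memory_order_acquire))
		usleep(100);

	/* Release the worker and wait until it confirms it has left. */
	atomic_store_explicit(&quit_flag, 0, memory_order_release);
	while (atomic_load_explicit(&sleep_flag, memory_order_acquire))
		usleep(100);

	pthread_join(t, NULL);
	puts("handshake complete");
	return 0;
}

Builds with e.g. gcc -std=c11 -pthread handshake.c. Note one difference from
the patch: there zero_quit stays a plain volatile int and only zero_sleep
uses __atomic_* builtins, while the sketch uses acquire/release on both flags
for clarity.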