From: David Marchand
Date: Sat, 1 Jun 2019 17:59:51 +0200
To: Dharmik Thakkar
Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Pablo de Lara, dev, dpdk stable, Michael Santana
In-Reply-To: <20190531232723.2030-1-dharmik.thakkar@arm.com>
Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH] test/hash: rectify slaveid to point to valid cores

On Sat, Jun 1, 2019 at 1:28 AM Dharmik Thakkar wrote:
> This patch rectifies slave_id passed to rte_eal_wait_lcore()
> to point to valid cores in read-write lock-free concurrency test.
>
> It also replaces a 'for' loop with RTE_LCORE_FOREACH API.
>
> Fixes: dfd9d5537e876 ("test/hash: use existing lcore API")

The incriminated commit only converts direct accesses to lcore_config into
calls to rte_eal_wait_lcore(), so it did not introduce the issue you want to
fix.
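For context, the conversion in that commit was roughly of this shape -- a
paraphrased sketch from memory, not the actual hunk, and the helper name is
made up:

#include <stdint.h>
#include <rte_launch.h>
#include <rte_lcore.h>

/* Paraphrased sketch of the dfd9d5537e876 conversion: the lcore index "i"
 * is the same before and after, only the way the result is read changes. */
static int
wait_first_n_lcores(uint32_t nb)    /* hypothetical helper, not in the test */
{
    uint32_t i;

    for (i = 1; i <= nb; i++) {
        /* before: direct read of the exported lcore_config[] array,
         * i.e. checking lcore_config[i].ret < 0 */
        /* after: the EAL helper, with the very same index */
        if (rte_eal_wait_lcore(i) < 0)
            return -1;
    }
    return 0;
}

Either way the lcore is picked by the loop counter, which is the indexing your
patch corrects.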
The Fixes: tag should probably be:
Fixes: c7eb0972e74b ("test/hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org

>
> Signed-off-by: Dharmik Thakkar
> Reviewed-by: Ruifeng Wang
> ---
>  app/test/test_hash_readwrite_lf.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/app/test/test_hash_readwrite_lf.c b/app/test/test_hash_readwrite_lf.c
> index 343a338b4ea8..af1ee9c34394 100644
> --- a/app/test/test_hash_readwrite_lf.c
> +++ b/app/test/test_hash_readwrite_lf.c
> @@ -126,11 +126,9 @@ get_enabled_cores_list(void)
>      uint32_t i = 0;
>      uint16_t core_id;
>      uint32_t max_cores = rte_lcore_count();
> -    for (core_id = 0; core_id < RTE_MAX_LCORE && i < max_cores; core_id++) {
> -        if (rte_lcore_is_enabled(core_id)) {
> -            enabled_core_ids[i] = core_id;
> -            i++;
> -        }
> +    RTE_LCORE_FOREACH(core_id) {
> +        enabled_core_ids[i] = core_id;
> +        i++;
>      }
>
>      if (i != max_cores) {
> @@ -738,7 +736,7 @@ test_hash_add_no_ks_lookup_hit(struct rwc_perf *rwc_perf_results, int rwc_lf,
>                          enabled_core_ids[i]);
>
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> @@ -810,7 +808,7 @@ test_hash_add_no_ks_lookup_miss(struct rwc_perf *rwc_perf_results, int rwc_lf,
>              if (ret < 0)
>                  goto err;
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> @@ -886,7 +884,7 @@ test_hash_add_ks_lookup_hit_non_sp(struct rwc_perf *rwc_perf_results,
>              if (ret < 0)
>                  goto err;
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> @@ -962,7 +960,7 @@ test_hash_add_ks_lookup_hit_sp(struct rwc_perf *rwc_perf_results, int rwc_lf,
>              if (ret < 0)
>                  goto err;
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> @@ -1037,7 +1035,7 @@ test_hash_add_ks_lookup_miss(struct rwc_perf *rwc_perf_results, int rwc_lf, int
>              if (ret < 0)
>                  goto err;
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> @@ -1132,12 +1130,12 @@ test_hash_multi_add_lookup(struct rwc_perf *rwc_perf_results, int rwc_lf,
>                  for (i = rwc_core_cnt[n] + 1;
>                       i <= rwc_core_cnt[m] + rwc_core_cnt[n];
>                       i++)
> -                    rte_eal_wait_lcore(i);
> +                    rte_eal_wait_lcore(enabled_core_ids[i]);
>
>                  writer_done = 1;
>
>                  for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                    if (rte_eal_wait_lcore(i) < 0)
> +                    if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                          goto err;
>
>                  unsigned long long cycles_per_lookup =

Checkpatch complains here.

> @@ -1221,7 +1219,7 @@ test_hash_add_ks_lookup_hit_extbkt(struct rwc_perf *rwc_perf_results,
>              writer_done = 1;
>
>              for (i = 1; i <= rwc_core_cnt[n]; i++)
> -                if (rte_eal_wait_lcore(i) < 0)
> +                if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>                      goto err;
>
>              unsigned long long cycles_per_lookup =
> --
> 2.17.1
>

The rest looks good to me.

Michael and I have accumulated quite a few fixes on app/test.
Do you mind if I take your patch as part of our series?
I would change the Fixes: tag, fix the checkpatch warning, and send it next week.
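To make the indexing issue concrete, here is a minimal standalone sketch (my
own illustration, not part of the patch; the dummy worker and the "-l 0,2,4"
mask are hypothetical). With such a non-contiguous mask, rte_lcore_count() is 3
and enabled_core_ids[] holds {0, 2, 4}, so waiting on the loop counter targets
lcores 1 and 2 -- the first of which was never enabled -- while lcore 4 is
never reaped. Indexing through enabled_core_ids[], as your patch does, waits on
the lcores that were actually launched:

#include <stdio.h>

#include <rte_eal.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static uint16_t enabled_core_ids[RTE_MAX_LCORE];

/* Hypothetical stand-in for the test's reader/writer lcore functions. */
static int
dummy_worker(void *arg)
{
    (void)arg;
    return 0;
}

int
main(int argc, char **argv)
{
    uint32_t i = 0, nb_workers;
    uint16_t core_id;

    if (rte_eal_init(argc, argv) < 0)
        return -1;

    /* Same collection scheme as the patch: slot 0 holds the main lcore,
     * the worker lcores follow, whatever their actual ids are. */
    RTE_LCORE_FOREACH(core_id)
        enabled_core_ids[i++] = core_id;

    nb_workers = rte_lcore_count() - 1;

    for (i = 1; i <= nb_workers; i++)
        rte_eal_remote_launch(dummy_worker, NULL, enabled_core_ids[i]);

    for (i = 1; i <= nb_workers; i++) {
        /* Buggy form: rte_eal_wait_lcore(i) -- with "-l 0,2,4" this would
         * wait on lcores 1 and 2 even though only 2 and 4 were launched,
         * and lcore 4 would never be waited on at all. */
        if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0) {
            printf("worker on lcore %u failed\n",
                   (unsigned int)enabled_core_ids[i]);
            return -1;
        }
    }

    rte_eal_cleanup();
    return 0;
}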
Have a nice weekend.


--
David Marchand