From: David Marchand
Date: Sat, 1 Jun 2019 17:59:51 +0200
To: Dharmik Thakkar
Cc: Yipeng Wang, Sameh Gobriel, Bruce Richardson, Pablo de Lara, dev, dpdk stable, Michael Santana
In-Reply-To: <20190531232723.2030-1-dharmik.thakkar@arm.com>
Subject: Re: [dpdk-stable] [PATCH] test/hash: rectify slaveid to point to valid cores

On Sat, Jun 1, 2019 at 1:28 AM Dharmik Thakkar wrote:

> This patch rectifies the slave_id passed to rte_eal_wait_lcore()
> to point to valid cores in the read-write lock-free concurrency test.
>
> It also replaces a 'for' loop with the RTE_LCORE_FOREACH API.
>
> Fixes: dfd9d5537e876 ("test/hash: use existing lcore API")
>

The incriminated commit only converts direct accesses to lcore_config
into calls to rte_eal_wait_lcore(), so it did not introduce the issue
you want to fix.
The Fixes: tag should probably be:
Fixes: c7eb0972e74b ("test/hash: add lock-free r/w concurrency")
Cc: stable@dpdk.org

>
> Signed-off-by: Dharmik Thakkar
> Reviewed-by: Ruifeng Wang
> ---
>  app/test/test_hash_readwrite_lf.c | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/app/test/test_hash_readwrite_lf.c b/app/test/test_hash_readwrite_lf.c
> index 343a338b4ea8..af1ee9c34394 100644
> --- a/app/test/test_hash_readwrite_lf.c
> +++ b/app/test/test_hash_readwrite_lf.c
> @@ -126,11 +126,9 @@ get_enabled_cores_list(void)
>  	uint32_t i = 0;
>  	uint16_t core_id;
>  	uint32_t max_cores = rte_lcore_count();
> -	for (core_id = 0; core_id < RTE_MAX_LCORE && i < max_cores; core_id++) {
> -		if (rte_lcore_is_enabled(core_id)) {
> -			enabled_core_ids[i] = core_id;
> -			i++;
> -		}
> +	RTE_LCORE_FOREACH(core_id) {
> +		enabled_core_ids[i] = core_id;
> +		i++;
>  	}
>
>  	if (i != max_cores) {
> @@ -738,7 +736,7 @@ test_hash_add_no_ks_lookup_hit(struct rwc_perf *rwc_perf_results, int rwc_lf,
>  						enabled_core_ids[i]);
>
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> @@ -810,7 +808,7 @@ test_hash_add_no_ks_lookup_miss(struct rwc_perf *rwc_perf_results, int rwc_lf,
>  			if (ret < 0)
>  				goto err;
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> @@ -886,7 +884,7 @@ test_hash_add_ks_lookup_hit_non_sp(struct rwc_perf *rwc_perf_results,
>  			if (ret < 0)
>  				goto err;
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> @@ -962,7 +960,7 @@ test_hash_add_ks_lookup_hit_sp(struct rwc_perf *rwc_perf_results, int rwc_lf,
>  			if (ret < 0)
>  				goto err;
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> @@ -1037,7 +1035,7 @@ test_hash_add_ks_lookup_miss(struct rwc_perf *rwc_perf_results, int rwc_lf, int
>  			if (ret < 0)
>  				goto err;
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> @@ -1132,12 +1130,12 @@ test_hash_multi_add_lookup(struct rwc_perf *rwc_perf_results, int rwc_lf,
>  				for (i = rwc_core_cnt[n] + 1;
>  				     i <= rwc_core_cnt[m] + rwc_core_cnt[n];
>  				     i++)
> -					rte_eal_wait_lcore(i);
> +					rte_eal_wait_lcore(enabled_core_ids[i]);
>
>  				writer_done = 1;
>
>  				for (i = 1; i <= rwc_core_cnt[n]; i++)
> -					if (rte_eal_wait_lcore(i) < 0)
> +					if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  						goto err;
>
>  				unsigned long long cycles_per_lookup =

Checkpatch complains here.

> @@ -1221,7 +1219,7 @@ test_hash_add_ks_lookup_hit_extbkt(struct rwc_perf *rwc_perf_results,
>  			writer_done = 1;
>
>  			for (i = 1; i <= rwc_core_cnt[n]; i++)
> -				if (rte_eal_wait_lcore(i) < 0)
> +				if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
>  					goto err;
>
>  			unsigned long long cycles_per_lookup =
> --
> 2.17.1
>

The rest looks good to me.

We have accumulated quite some fixes with Michael on app/test.
Do you mind if I take your patch as part of our series?
I would change the Fixes: tag, fix the checkpatch warning, and send it next week.
Have a good weekend.


--
David Marchand
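
For context on the failure mode discussed above: the wait loops in these tests used the loop index itself as the lcore ID, which only works when the enabled lcores happen to be numbered 0..N-1. With a sparse core list such as -l 0,2,4, the old code would wait on lcores that were never launched. The sketch below illustrates the corrected launch/wait pattern; it is not the patch itself, and the names dummy_worker and launch_and_wait are hypothetical stand-ins for the test's reader/writer routines.

#include <rte_common.h>
#include <rte_launch.h>
#include <rte_lcore.h>

static uint16_t enabled_core_ids[RTE_MAX_LCORE];

/* Hypothetical stand-in for the test's reader/writer routines. */
static int
dummy_worker(void *arg __rte_unused)
{
	return 0;
}

static void
launch_and_wait(uint32_t worker_cnt)
{
	uint16_t core_id;
	uint32_t count = 0;
	uint32_t i;

	/* Record the enabled lcore IDs; they are not necessarily 0..N-1. */
	RTE_LCORE_FOREACH(core_id) {
		enabled_core_ids[count] = core_id;
		count++;
	}

	/*
	 * Launch workers on enabled lcores. As in the test, index 0 is
	 * assumed to be the main lcore, so workers start at index 1.
	 */
	for (i = 1; i <= worker_cnt && i < count; i++)
		rte_eal_remote_launch(dummy_worker, NULL, enabled_core_ids[i]);

	/*
	 * Wait on the recorded lcore IDs. Waiting on the loop index "i"
	 * instead would target lcore i, which may be disabled or may not
	 * be one of the launched workers.
	 */
	for (i = 1; i <= worker_cnt && i < count; i++)
		if (rte_eal_wait_lcore(enabled_core_ids[i]) < 0)
			break;
}

With -l 0,2,4, enabled_core_ids holds {0, 2, 4}, so the wait loop targets lcores 2 and 4; waiting on the raw index would have polled lcores 1 and 2, one of which is not even enabled.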