From: Joyce Kong <joyce.kong@arm.com>
To: dev@dpdk.org
Cc: nd@arm.com, jerinj@marvell.com, konstantin.ananyev@intel.com,
 chaozhu@linux.vnet.ibm.com, bruce.richardson@intel.com, thomas@monjalon.net,
 hemant.agrawal@nxp.com, honnappa.nagarahalli@arm.com, gavin.hu@arm.com,
 stable@dpdk.org
Date: Wed, 20 Mar 2019 14:25:08 +0800
Message-Id: <1553063109-57574-3-git-send-email-joyce.kong@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1553063109-57574-1-git-send-email-joyce.kong@arm.com>
References: <1553063109-57574-1-git-send-email-joyce.kong@arm.com>
Subject: [dpdk-dev] [PATCH v4 2/3] test/rwlock: add perf test case on all available cores

Add
performance test on all available cores to benchmark
the scaling-up performance of rwlock.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Suggested-by: Gavin Hu <gavin.hu@arm.com>
Signed-off-by: Joyce Kong <joyce.kong@arm.com>
---
 app/test/test_rwlock.c | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 75 insertions(+)

diff --git a/app/test/test_rwlock.c b/app/test/test_rwlock.c
index 224f0de..f1c5f40 100644
--- a/app/test/test_rwlock.c
+++ b/app/test/test_rwlock.c
@@ -36,6 +36,7 @@
 static rte_rwlock_t sl;
 static rte_rwlock_t sl_tab[RTE_MAX_LCORE];
+static rte_atomic32_t synchro;

 enum {
 	LC_TYPE_RDLOCK,
@@ -83,6 +84,77 @@ test_rwlock_per_core(__attribute__((unused)) void *arg)
 	return 0;
 }

+static rte_rwlock_t lk = RTE_RWLOCK_INITIALIZER;
+static volatile uint64_t rwlock_data;
+static uint64_t lock_count[RTE_MAX_LCORE] = {0};
+
+#define TIME_MS 100
+#define TEST_RWLOCK_DEBUG 0
+
+static int
+load_loop_fn(__attribute__((unused)) void *arg)
+{
+	uint64_t time_diff = 0, begin;
+	uint64_t hz = rte_get_timer_hz();
+	uint64_t lcount = 0;
+	const unsigned int lcore = rte_lcore_id();
+
+	/* wait synchro for slaves */
+	if (lcore != rte_get_master_lcore())
+		while (rte_atomic32_read(&synchro) == 0)
+			;
+
+	begin = rte_rdtsc_precise();
+	while (time_diff < hz * TIME_MS / 1000) {
+		rte_rwlock_write_lock(&lk);
+		++rwlock_data;
+		rte_rwlock_write_unlock(&lk);
+
+		rte_rwlock_read_lock(&lk);
+		if (TEST_RWLOCK_DEBUG && !(lcount % 100))
+			printf("Core [%u] rwlock_data = %"PRIu64"\n",
+				lcore, rwlock_data);
+		rte_rwlock_read_unlock(&lk);
+
+		lcount++;
+		/* delay to make lock duty cycle slightly realistic */
+		rte_pause();
+		time_diff = rte_rdtsc_precise() - begin;
+	}
+
+	lock_count[lcore] = lcount;
+	return 0;
+}
+
+static int
+test_rwlock_perf(void)
+{
+	unsigned int i;
+	uint64_t total = 0;
+
+	printf("\nRwlock Perf Test on %u cores...\n", rte_lcore_count());
+
+	/* clear synchro and start slaves */
+	rte_atomic32_set(&synchro, 0);
+	if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_MASTER) < 0)
+		return -1;
+
+	/* start synchro and launch test on master */
+	rte_atomic32_set(&synchro, 1);
+	load_loop_fn(NULL);
+
+	rte_eal_mp_wait_lcore();
+
+	RTE_LCORE_FOREACH(i) {
+		printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]);
+		total += lock_count[i];
+	}
+
+	printf("Total count = %"PRIu64"\n", total);
+
+	return 0;
+}
+
 /*
  * - There is a global rwlock and a table of rwlocks (one per lcore).
  *
@@ -132,6 +204,9 @@ rwlock_test1(void)

 	rte_eal_mp_wait_lcore();

+	if (test_rwlock_perf() < 0)
+		return -1;
+
 	return 0;
 }
-- 
2.7.4