From: Thomas Monjalon
To: Gavin Hu
Cc: dev@dpdk.org, jerinj@marvell.com, hemant.agrawal@nxp.com, bruce.richardson@intel.com, anatoly.burakov@intel.com, Honnappa.Nagarahalli@arm.com, nd@arm.com, Joyce Kong, olivier.matz@6wind.com
Date: Thu, 20 Dec 2018 12:40:26 +0100
Message-ID: <1600613.K5MK70CiJL@xps>
In-Reply-To: <1545305634-81288-1-git-send-email-gavin.hu@arm.com>
Subject: Re: [dpdk-dev] [PATCH v1] test/ring: ring perf test case enhancement

+Cc Olivier, maintainer of the ring library.

20/12/2018 12:33, Gavin Hu:
> From: Joyce Kong
> 
> Run the ring perf test on all available cores to really verify MPMC
> operations. The old way of running on a pair of cores is not enough
> for MPMC rings. We used this test case for ring optimization and it
> was really helpful for measuring ring performance in a multi-core
> environment.
> 
> Suggested-by: Gavin Hu
> Signed-off-by: Joyce Kong
> Reviewed-by: Ruifeng Wang
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Dharmik Thakkar
> Reviewed-by: Ola Liljedahl
> Reviewed-by: Gavin Hu
> ---
>  test/test/test_ring_perf.c | 82 ++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 80 insertions(+), 2 deletions(-)
> 
> diff --git a/test/test/test_ring_perf.c b/test/test/test_ring_perf.c
> index ebb3939..819d119 100644
> --- a/test/test/test_ring_perf.c
> +++ b/test/test/test_ring_perf.c
> @@ -20,12 +20,17 @@
>   * * Empty ring dequeue
>   * * Enqueue/dequeue of bursts in 1 threads
>   * * Enqueue/dequeue of bursts in 2 threads
> + * * Enqueue/dequeue of bursts in all available threads
>   */
>  
>  #define RING_NAME "RING_PERF"
>  #define RING_SIZE 4096
>  #define MAX_BURST 32
>  
> +#ifndef ARRAY_SIZE
> +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
> +#endif
> +
>  /*
>   * the sizes to enqueue and dequeue in testing
>   * (marked volatile so they won't be seen as compile-time constants)
> @@ -248,9 +253,78 @@ run_on_core_pair(struct lcore_pair *cores, struct rte_ring *r,
>  	}
>  }
>  
> +static rte_atomic32_t synchro;
> +static uint64_t queue_count[RTE_MAX_LCORE] = {0};
> +
> +#define TIME_MS 100
> +
> +static int
> +load_loop_fn(void *p)
> +{
> +	uint64_t time_diff = 0;
> +	uint64_t begin = 0;
> +	uint64_t hz = rte_get_timer_hz();
> +	uint64_t lcount = 0;
> +	const unsigned int lcore = rte_lcore_id();
> +	struct thread_params *params = p;
> +	void *burst[MAX_BURST] = {0};
> +
> +	/* wait synchro for slaves */
> +	if (lcore != rte_get_master_lcore())
> +		while (rte_atomic32_read(&synchro) == 0)
> +			rte_pause();
> +
> +	begin = rte_get_timer_cycles();
> +	while (time_diff < hz * TIME_MS / 1000) {
> +		rte_ring_mp_enqueue_bulk(params->r, burst, params->size, NULL);
> +		rte_ring_mc_dequeue_bulk(params->r, burst, params->size, NULL);
> +		lcount++;
> +		time_diff = rte_get_timer_cycles() - begin;
> +	}
> +	queue_count[lcore] = lcount;
> +	return 0;
> +}
> +
> +static int
> +run_on_all_cores(struct rte_ring *r)
> +{
> +	uint64_t total = 0;
> +	struct thread_params param = {0};
> +	unsigned int i, c;
> +	for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
> +		printf("\nBulk enq/dequeue count on size %u\n", bulk_sizes[i]);
> +		param.size = bulk_sizes[i];
> +		param.r = r;
> +
> +		/* clear synchro and start slaves */
> +		rte_atomic32_set(&synchro, 0);
> +		if (rte_eal_mp_remote_launch(load_loop_fn,
> +				&param, SKIP_MASTER) < 0)
> +			return -1;
> +
> +		/* start synchro and launch test on master */
> +		rte_atomic32_set(&synchro, 1);
> +		load_loop_fn(&param);
> +
> +		rte_eal_mp_wait_lcore();
> +
> +		RTE_LCORE_FOREACH(c) {
> +			printf("Core [%u] count = %"PRIu64"\n",
> +					c, queue_count[c]);
> +			total += queue_count[c];
> +		}
> +
> +		printf("Total count (size: %u): %"PRIu64"\n", bulk_sizes[i],
> +				total);
> +	}
> +
> +	return 0;
> +}
> +
>  /*
> - * Test function that determines how long an enqueue + dequeue of a single item
> - * takes on a single lcore. Result is for comparison with the bulk enq+deq.
> + * Test function that determines how long an enqueue + dequeue of a single
> + * item takes on a single lcore. Result is for comparison with the bulk
> + * enq+deq.
>   */
>  static void
>  test_single_enqueue_dequeue(struct rte_ring *r)
> @@ -394,6 +468,10 @@ test_ring_perf(void)
>  		printf("\n### Testing using two NUMA nodes ###\n");
>  		run_on_core_pair(&cores, r, enqueue_bulk, dequeue_bulk);
>  	}
> +
> +	printf("\n### Testing using all slave nodes ###\n");
> +	run_on_all_cores(r);
> +
>  	rte_ring_free(r);
>  	return 0;
>  }
> 