From: Feifei Wang <feifei.wang2@arm.com>
To: Jerin Jacob <jerinj@marvell.com>
Cc: dev@dpdk.org, nd@arm.com, Honnappa.Nagarahalli@arm.com,
Feifei Wang <feifei.wang2@arm.com>, Phil Yang <phil.yang@arm.com>
Subject: [dpdk-dev] [RFC PATCH v1 3/6] app/eventdev: replace wmb with thread fence for perf test
Date: Tue, 22 Dec 2020 13:07:24 +0800 [thread overview]
Message-ID: <20201222050728.41000-4-feifei.wang2@arm.com> (raw)
In-Reply-To: <20201222050728.41000-1-feifei.wang2@arm.com>
Simply replace the rte_smp barrier with an atomic thread fence.
Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
app/test-eventdev/test_perf_common.h | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index e7233e5a5..9785dc3e2 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -98,11 +98,11 @@ perf_process_last_stage(struct rte_mempool *const pool,
{
bufs[count++] = ev->event_ptr;
- /* wmb here ensures event_prt is stored before
- * updating the number of processed packets
- * for worker lcores
+ /* release fence here ensures event_ptr is
+ * stored before updating the number of
+ * processed packets for worker lcores
*/
- rte_smp_wmb();
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
w->processed_pkts++;
if (unlikely(count == buf_sz)) {
@@ -122,11 +122,11 @@ perf_process_last_stage_latency(struct rte_mempool *const pool,
bufs[count++] = ev->event_ptr;
- /* wmb here ensures event_prt is stored before
- * updating the number of processed packets
- * for worker lcores
+ /* release fence here ensures event_ptr is
+ * stored before updating the number of
+ * processed packets for worker lcores
*/
- rte_smp_wmb();
+ rte_atomic_thread_fence(__ATOMIC_RELEASE);
w->processed_pkts++;
if (unlikely(count == buf_sz)) {
--
2.17.1
Thread overview:
2020-12-22 5:07 [dpdk-dev] [RFC PATCH v1 0/6] refactor smp barriers in app/eventdev Feifei Wang
2020-12-22 5:07 ` [dpdk-dev] [RFC PATCH v1 1/6] app/eventdev: fix SMP barrier bugs for perf test Feifei Wang
2020-12-22 5:07 ` [dpdk-dev] [RFC PATCH v1 2/6] app/eventdev: remove unnecessary barriers " Feifei Wang
2020-12-22 5:07 ` Feifei Wang [this message]
2020-12-22 5:07 ` [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test Feifei Wang
2020-12-22 5:07 ` [dpdk-dev] [RFC PATCH v1 5/6] app/eventdev: remove unnecessary " Feifei Wang
2020-12-22 5:07 ` [dpdk-dev] [RFC PATCH v1 6/6] app/eventdev: remove unnecessary barriers for order test Feifei Wang