From: Feifei Wang <Feifei.Wang2@arm.com>
To: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>,
	Harry van Haaren <harry.van.haaren@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>,
	Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	"stable@dpdk.org" <stable@dpdk.org>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test
Date: Tue, 5 Jan 2021 07:39:26 +0000	[thread overview]
Message-ID: <DBBPR08MB44117C17CA574F2B669FE805C8D10@DBBPR08MB4411.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <CO6PR18MB3828B5F785167C28DA6C30BDDEDF0@CO6PR18MB3828.namprd18.prod.outlook.com>

Hi, Pavan

Sorry for my late reply and thanks very much for your review.

> -----Original Message-----
> From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
> Sent: 22 December 2020 18:33
> To: Feifei Wang <Feifei.Wang2@arm.com>; jerinj@marvell.com; Harry van
> Haaren <harry.van.haaren@intel.com>; Pavan Nikhilesh
> <pbhagavatula@caviumnetworks.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; stable@dpdk.org; Phil Yang
> <Phil.Yang@arm.com>
> Subject: RE: [RFC PATCH v1 4/6] app/eventdev: add release barriers for
> pipeline test
> 
> 
> >Add release barriers before updating the processed packets for worker
> >lcores to ensure the worker lcore has really finished data processing
> >and then it can update the processed packets number.
> >
> 
> I believe we can live with minor inaccuracies in stats being presented as
> atomics are pretty heavy when scheduler is limited to burst size as 1.
> 
> One option is to move it before a pipeline operation (pipeline_event_tx,
> pipeline_fwd_event etc.) as they imply implicit release barrier (as all the
> changes done to the event should be visible to the next core).

If I understand correctly, you mean moving the release barriers to before
pipeline_event_tx or pipeline_fwd_event. This ensures the event has been fully
processed before the next core begins to tx/fwd it. For example:

		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
	+		__atomic_thread_fence(__ATOMIC_RELEASE);
			pipeline_event_tx(dev, port, &ev);
			w->processed_pkts++;
		} else {
			ev.queue_id++;
	+		__atomic_thread_fence(__ATOMIC_RELEASE);
			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
			pipeline_event_enqueue(dev, port, &ev);
		}

However, there are two reasons not to do this:

First, compared with the other tests in app/eventdev, for example the eventdev perf test,
the wmb there comes after the event operation, to ensure the operation has finished before
w->processed_pkts++. So if we move the release barriers to before tx/fwd, the tests in
app/eventdev become inconsistent with each other, which reduces the maintainability of the
code and makes it harder to understand.
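
Roughly, the ordering I mean is sketched below. This is only a minimal sketch with
placeholder names (struct worker, do_event_op()), using the same release add as this
patch rather than the perf test's exact code; the point is that the counter update
comes after the event operation and carries the release:

	#include <stdint.h>

	struct worker {
		uint64_t processed_pkts;
	};

	/* placeholder standing in for the real tx/fwd/enqueue call */
	static void do_event_op(void) { }

	static void
	worker_iteration(struct worker *w)
	{
		do_event_op();
		/* release: everything done to the event above is visible
		 * before the main core can observe the new counter value
		 */
		__atomic_fetch_add(&w->processed_pkts, 1, __ATOMIC_RELEASE);
	}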

Second, this is a test case: even though the heavier barrier may cause some performance
degradation, it guarantees that the processing flow and the test result are correct. And
for a test case, correctness is arguably more important than performance.
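
To make the trade-off concrete, here is a small sketch of the two counter updates we
are weighing (placeholder helpers, not the real test code):

	#include <stdint.h>

	/* what the code does today: cheap, but neither atomic nor ordered,
	 * so the main core may read a slightly inaccurate count
	 */
	static void count_plain(uint64_t *processed_pkts)
	{
		(*processed_pkts)++;
	}

	/* what this patch proposes: heavier, but the main core never sees
	 * the count move before the event operation that preceded it
	 */
	static void count_release(uint64_t *processed_pkts)
	{
		__atomic_fetch_add(processed_pkts, 1, __ATOMIC_RELEASE);
	}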

So, for the two reasons above, I'm ambivalent about what we should do in the next step.

Best Regards
Feifei

> >Fixes: 314bcf58ca8f ("app/eventdev: add pipeline queue worker
> >functions")
> >Cc: pbhagavatula@marvell.com
> >Cc: stable@dpdk.org
> >
> >Signed-off-by: Phil Yang <phil.yang@arm.com>
> >Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> >Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> >---
> > app/test-eventdev/test_pipeline_queue.c | 64 +++++++++++++++++++++----
> > 1 file changed, 56 insertions(+), 8 deletions(-)
> >
> >diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
> >index 7bebac34f..0c0ec0ceb 100644
> >--- a/app/test-eventdev/test_pipeline_queue.c
> >+++ b/app/test-eventdev/test_pipeline_queue.c
> >@@ -30,7 +30,13 @@ pipeline_queue_worker_single_stage_tx(void *arg)
> >
> > 		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
> > 			pipeline_event_tx(dev, port, &ev);
> >-			w->processed_pkts++;
> >+
> >+			/* release barrier here ensures stored operation
> >+			 * of the event completes before the number of
> >+			 * processed pkts is visible to the main core
> >+			 */
> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
> >+					__ATOMIC_RELEASE);
> > 		} else {
> > 			ev.queue_id++;
> > 			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
> >@@ -59,7 +65,13 @@ pipeline_queue_worker_single_stage_fwd(void *arg)
> > 		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
> > 		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
> > 		pipeline_event_enqueue(dev, port, &ev);
> >-		w->processed_pkts++;
> >+
> >+		/* release barrier here ensures stored operation
> >+		 * of the event completes before the number of
> >+		 * processed pkts is visible to the main core
> >+		 */
> >+		__atomic_fetch_add(&(w->processed_pkts), 1,
> >+				__ATOMIC_RELEASE);
> > 	}
> >
> > 	return 0;
> >@@ -84,7 +96,13 @@ pipeline_queue_worker_single_stage_burst_tx(void *arg)
> > 			if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
> > 				pipeline_event_tx(dev, port, &ev[i]);
> > 				ev[i].op = RTE_EVENT_OP_RELEASE;
> >-				w->processed_pkts++;
> >+
> >+				/* release barrier here ensures stored operation
> >+				 * of the event completes before the number of
> >+				 * processed pkts is visible to the main core
> >+				 */
> >+				__atomic_fetch_add(&(w->processed_pkts), 1,
> >+						__ATOMIC_RELEASE);
> > 			} else {
> > 				ev[i].queue_id++;
> > 				pipeline_fwd_event(&ev[i],
> >@@ -121,7 +139,13 @@ pipeline_queue_worker_single_stage_burst_fwd(void *arg)
> > 		}
> >
> > 		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
> >-		w->processed_pkts += nb_rx;
> >+
> >+		/* release barrier here ensures stored operation
> >+		 * of the event completes before the number of
> >+		 * processed pkts is visible to the main core
> >+		 */
> >+		__atomic_fetch_add(&(w->processed_pkts), nb_rx,
> >+				__ATOMIC_RELEASE);
> > 	}
> >
> > 	return 0;
> >@@ -146,7 +170,13 @@ pipeline_queue_worker_multi_stage_tx(void *arg)
> >
> > 		if (ev.queue_id == tx_queue[ev.mbuf->port]) {
> > 			pipeline_event_tx(dev, port, &ev);
> >-			w->processed_pkts++;
> >+
> >+			/* release barrier here ensures stored operation
> >+			 * of the event completes before the number of
> >+			 * processed pkts is visible to the main core
> >+			 */
> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
> >+					__ATOMIC_RELEASE);
> > 			continue;
> > 		}
> >
> >@@ -180,7 +210,13 @@ pipeline_queue_worker_multi_stage_fwd(void *arg)
> > 			ev.queue_id = tx_queue[ev.mbuf->port];
> > 			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
> > 			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
> >-			w->processed_pkts++;
> >+
> >+			/* release barrier here ensures stored operation
> >+			 * of the event completes before the number of
> >+			 * processed pkts is visible to the main core
> >+			 */
> >+			__atomic_fetch_add(&(w->processed_pkts), 1,
> >+					__ATOMIC_RELEASE);
> > 		} else {
> > 			ev.queue_id++;
> > 			pipeline_fwd_event(&ev, sched_type_list[cq_id]);
> >@@ -214,7 +250,13 @@ pipeline_queue_worker_multi_stage_burst_tx(void *arg)
> > 			if (ev[i].queue_id == tx_queue[ev[i].mbuf->port]) {
> > 				pipeline_event_tx(dev, port, &ev[i]);
> > 				ev[i].op = RTE_EVENT_OP_RELEASE;
> >-				w->processed_pkts++;
> >+
> >+				/* release barrier here ensures stored operation
> >+				 * of the event completes before the number of
> >+				 * processed pkts is visible to the main core
> >+				 */
> >+				__atomic_fetch_add(&(w->processed_pkts), 1,
> >+						__ATOMIC_RELEASE);
> > 				continue;
> > 			}
> >
> >@@ -254,7 +296,13 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
> > 				rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, 0);
> > 				pipeline_fwd_event(&ev[i],
> > 						RTE_SCHED_TYPE_ATOMIC);
> >-				w->processed_pkts++;
> >+
> >+				/* release barrier here ensures stored operation
> >+				 * of the event completes before the number of
> >+				 * processed pkts is visible to the main core
> >+				 */
> >+				__atomic_fetch_add(&(w->processed_pkts), 1,
> >+						__ATOMIC_RELEASE);
> > 			} else {
> > 				ev[i].queue_id++;
> > 				pipeline_fwd_event(&ev[i],
> >--
> >2.17.1


