DPDK patches and discussions
* [dpdk-dev] [RFC PATCH v1 0/6] refactor smp barriers in app/eventdev
@ 2020-12-22  5:07 Feifei Wang
  2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 1/6] app/eventdev: fix SMP barrier bugs for perf test Feifei Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Feifei Wang @ 2020-12-22  5:07 UTC (permalink / raw)
  Cc: dev, nd, Honnappa.Nagarahalli, Feifei Wang

For the SMP barriers in app/eventdev, remove the unnecessary barriers or
replace them with thread fences.

Feifei Wang (6):
  app/eventdev: fix SMP barrier bugs for perf test
  app/eventdev: remove unnecessary barriers for perf test
  app/eventdev: replace wmb with thread fence for perf test
  app/eventdev: add release barriers for pipeline test
  app/eventdev: remove unnecessary barriers for pipeline test
  app/eventdev: remove unnecessary barriers for order test

 app/test-eventdev/test_order_common.h    |  2 -
 app/test-eventdev/test_perf_common.c     |  4 --
 app/test-eventdev/test_perf_common.h     | 14 +++++-
 app/test-eventdev/test_pipeline_common.c |  1 -
 app/test-eventdev/test_pipeline_queue.c  | 64 +++++++++++++++++++++---
 5 files changed, 68 insertions(+), 17 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 18+ messages in thread
* Re: [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test
@ 2020-12-22 10:33 Pavan Nikhilesh Bhagavatula
  2021-01-05  7:39 ` Feifei Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2020-12-22 10:33 UTC (permalink / raw)
  To: Feifei Wang, Jerin Jacob Kollanukkaran, Harry van Haaren,
	Pavan Nikhilesh
  Cc: dev, nd, Honnappa.Nagarahalli, stable, Phil Yang


>Add release barriers before updating the processed packet count of a
>worker lcore, to ensure the worker lcore has actually finished its data
>processing before the updated count becomes visible to the main lcore.
>

I believe we can live with minor inaccuracies in the presented stats,
as atomics are fairly expensive when the scheduler is limited to a
burst size of 1.

One option is to move the counter update before a pipeline operation
(pipeline_event_tx, pipeline_fwd_event, etc.), since those imply an
implicit release barrier: all changes made to the event must be visible
to the next core.

>Fixes: 314bcf58ca8f ("app/eventdev: add pipeline queue worker
>functions")
>Cc: pbhagavatula@marvell.com
>Cc: stable@dpdk.org
>
>Signed-off-by: Phil Yang <phil.yang@arm.com>
>Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
>Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>---
> app/test-eventdev/test_pipeline_queue.c | 64 +++++++++++++++++++++----
> 1 file changed, 56 insertions(+), 8 deletions(-)
>
>diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
>index 7bebac34f..0c0ec0ceb 100644
>--- a/app/test-eventdev/test_pipeline_queue.c
>+++ b/app/test-eventdev/test_pipeline_queue.c
>@@ -30,7 +30,13 @@ pipeline_queue_worker_single_stage_tx(void *arg)
>
> 		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
> 			pipeline_event_tx(dev, port, &ev);
>-			w->processed_pkts++;
>+
>+			/* release barrier here ensures stored operation
>+			 * of the event completes before the number of
>+			 * processed pkts is visible to the main core
>+			 */
>+			__atomic_fetch_add(&(w->processed_pkts), 1,
>+					__ATOMIC_RELEASE);
> 		} else {
> 			ev.queue_id++;
> 			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>@@ -59,7 +65,13 @@ pipeline_queue_worker_single_stage_fwd(void *arg)
> 		rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
> 		pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
> 		pipeline_event_enqueue(dev, port, &ev);
>-		w->processed_pkts++;
>+
>+		/* release barrier here ensures stored operation
>+		 * of the event completes before the number of
>+		 * processed pkts is visible to the main core
>+		 */
>+		__atomic_fetch_add(&(w->processed_pkts), 1,
>+				__ATOMIC_RELEASE);
> 	}
>
> 	return 0;
>@@ -84,7 +96,13 @@ pipeline_queue_worker_single_stage_burst_tx(void *arg)
> 			if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
> 				pipeline_event_tx(dev, port, &ev[i]);
> 				ev[i].op = RTE_EVENT_OP_RELEASE;
>-				w->processed_pkts++;
>+
>+				/* release barrier here ensures stored operation
>+				 * of the event completes before the number of
>+				 * processed pkts is visible to the main core
>+				 */
>+				__atomic_fetch_add(&(w->processed_pkts), 1,
>+						__ATOMIC_RELEASE);
> 			} else {
> 				ev[i].queue_id++;
> 				pipeline_fwd_event(&ev[i],
>@@ -121,7 +139,13 @@ pipeline_queue_worker_single_stage_burst_fwd(void *arg)
> 		}
>
> 		pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
>-		w->processed_pkts += nb_rx;
>+
>+		/* release barrier here ensures stored operation
>+		 * of the event completes before the number of
>+		 * processed pkts is visible to the main core
>+		 */
>+		__atomic_fetch_add(&(w->processed_pkts), nb_rx,
>+				__ATOMIC_RELEASE);
> 	}
>
> 	return 0;
>@@ -146,7 +170,13 @@ pipeline_queue_worker_multi_stage_tx(void *arg)
>
> 		if (ev.queue_id == tx_queue[ev.mbuf->port]) {
> 			pipeline_event_tx(dev, port, &ev);
>-			w->processed_pkts++;
>+
>+			/* release barrier here ensures stored operation
>+			 * of the event completes before the number of
>+			 * processed pkts is visible to the main core
>+			 */
>+			__atomic_fetch_add(&(w->processed_pkts), 1,
>+					__ATOMIC_RELEASE);
> 			continue;
> 		}
>
>@@ -180,7 +210,13 @@ pipeline_queue_worker_multi_stage_fwd(void *arg)
> 			ev.queue_id = tx_queue[ev.mbuf->port];
> 			rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
> 			pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
>-			w->processed_pkts++;
>+
>+			/* release barrier here ensures stored operation
>+			 * of the event completes before the number of
>+			 * processed pkts is visible to the main core
>+			 */
>+			__atomic_fetch_add(&(w->processed_pkts), 1,
>+					__ATOMIC_RELEASE);
> 		} else {
> 			ev.queue_id++;
> 			pipeline_fwd_event(&ev, sched_type_list[cq_id]);
>@@ -214,7 +250,13 @@ pipeline_queue_worker_multi_stage_burst_tx(void *arg)
> 			if (ev[i].queue_id == tx_queue[ev[i].mbuf->port]) {
> 				pipeline_event_tx(dev, port, &ev[i]);
> 				ev[i].op = RTE_EVENT_OP_RELEASE;
>-				w->processed_pkts++;
>+
>+				/* release barrier here ensures stored operation
>+				 * of the event completes before the number of
>+				 * processed pkts is visible to the main core
>+				 */
>+				__atomic_fetch_add(&(w->processed_pkts), 1,
>+						__ATOMIC_RELEASE);
> 				continue;
> 			}
>
>@@ -254,7 +296,13 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
> 				rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, 0);
> 				pipeline_fwd_event(&ev[i],
> 						RTE_SCHED_TYPE_ATOMIC);
>-				w->processed_pkts++;
>+
>+				/* release barrier here ensures stored operation
>+				 * of the event completes before the number of
>+				 * processed pkts is visible to the main core
>+				 */
>+				__atomic_fetch_add(&(w->processed_pkts), 1,
>+						__ATOMIC_RELEASE);
> 			} else {
> 				ev[i].queue_id++;
> 				pipeline_fwd_event(&ev[i],
>--
>2.17.1


^ permalink raw reply	[flat|nested] 18+ messages in thread
* [dpdk-dev] [RFC PATCH v1 0/6] refactor smp barriers in app/eventdev
@ 2020-12-22  3:22 Feifei Wang
  2020-12-22  3:22 ` [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test Feifei Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Feifei Wang @ 2020-12-22  3:22 UTC (permalink / raw)
  Cc: dev, nd, Feifei Wang

For the SMP barriers in app/eventdev, remove the unnecessary barriers or
replace them with thread fences.

Feifei Wang (6):
  app/eventdev: fix SMP barrier bugs for perf test
  app/eventdev: remove unnecessary barriers for perf test
  app/eventdev: replace wmb with thread fence for perf test
  app/eventdev: add release barriers for pipeline test
  app/eventdev: remove unnecessary barriers for pipeline test
  app/eventdev: remove unnecessary barriers for order test

 app/test-eventdev/test_order_common.h    |  2 -
 app/test-eventdev/test_perf_common.c     |  4 --
 app/test-eventdev/test_perf_common.h     | 14 +++++-
 app/test-eventdev/test_pipeline_common.c |  1 -
 app/test-eventdev/test_pipeline_queue.c  | 64 +++++++++++++++++++++---
 5 files changed, 68 insertions(+), 17 deletions(-)

-- 
2.17.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2021-01-14  6:07 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-22  5:07 [dpdk-dev] [RFC PATCH v1 0/6] refactor smp barriers in app/eventdev Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 1/6] app/eventdev: fix SMP barrier bugs for perf test Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 2/6] app/eventdev: remove unnecessary barriers " Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 3/6] app/eventdev: replace wmb with thread fence " Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 5/6] app/eventdev: remove unnecessary " Feifei Wang
2020-12-22  5:07 ` [dpdk-dev] [RFC PATCH v1 6/6] app/eventdev: remove unnecessary barriers for order test Feifei Wang
  -- strict thread matches above, loose matches on Subject: below --
2020-12-22 10:33 [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test Pavan Nikhilesh Bhagavatula
2021-01-05  7:39 ` Feifei Wang
2021-01-05  9:29   ` Pavan Nikhilesh Bhagavatula
2021-01-08  7:12     ` [dpdk-dev] 回复: " Feifei Wang
2021-01-08  9:12       ` [dpdk-dev] " Pavan Nikhilesh Bhagavatula
2021-01-08 10:44         ` [dpdk-dev] 回复: " Feifei Wang
2021-01-08 10:58           ` [dpdk-dev] " Pavan Nikhilesh Bhagavatula
2021-01-11  1:57             ` [dpdk-dev] 回复: " Feifei Wang
2021-01-12  8:29             ` [dpdk-dev] " Pavan Nikhilesh Bhagavatula
2021-01-14  6:07               ` [dpdk-dev] 回复: " Feifei Wang
2020-12-22  3:22 [dpdk-dev] [RFC PATCH v1 0/6] refactor smp barriers in app/eventdev Feifei Wang
2020-12-22  3:22 ` [dpdk-dev] [RFC PATCH v1 4/6] app/eventdev: add release barriers for pipeline test Feifei Wang
