* [dpdk-stable] [PATCH v1 1/2] app/eventdev: adjust event count order for pipeline test
[not found] <20210122051916.1408093-1-feifei.wang2@arm.com>
@ 2021-01-22 5:19 ` Feifei Wang
2021-01-22 5:19 ` [dpdk-stable] [PATCH v1 2/2] app/eventdev: remove redundant enqueue in burst Tx Feifei Wang
1 sibling, 0 replies; 2+ messages in thread
From: Feifei Wang @ 2021-01-22 5:19 UTC (permalink / raw)
To: Jerin Jacob, Harry van Haaren, Pavan Nikhilesh
Cc: dev, nd, Feifei Wang, pbhagavatula, stable, Ruifeng Wang
For the fwd mode (internal_port = false) in the pipeline test, the
processed packets count should be incremented after enqueue. However, in
multi_stage_fwd and multi_stage_burst_fwd, "w->processed_pkts" is
incremented before the enqueue.
To fix this, move the "w->processed_pkts" increment after the enqueue,
so that the main core loads the correct number of processed packets.
Fixes: 314bcf58ca8f ("app/eventdev: add pipeline queue worker functions")
Cc: pbhagavatula@marvell.com
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
app/test-eventdev/test_pipeline_queue.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
index 7bebac34f..01f33e3b4 100644
--- a/app/test-eventdev/test_pipeline_queue.c
+++ b/app/test-eventdev/test_pipeline_queue.c
@@ -180,13 +180,13 @@ pipeline_queue_worker_multi_stage_fwd(void *arg)
ev.queue_id = tx_queue[ev.mbuf->port];
rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
pipeline_fwd_event(&ev, RTE_SCHED_TYPE_ATOMIC);
+ pipeline_event_enqueue(dev, port, &ev);
w->processed_pkts++;
} else {
ev.queue_id++;
pipeline_fwd_event(&ev, sched_type_list[cq_id]);
+ pipeline_event_enqueue(dev, port, &ev);
}
-
- pipeline_event_enqueue(dev, port, &ev);
}
return 0;
@@ -237,6 +237,7 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
const uint8_t *tx_queue = t->tx_evqueue_id;
while (t->done == false) {
+ uint16_t processed_pkts = 0;
uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
BURST_SIZE, 0);
@@ -254,7 +255,7 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, 0);
pipeline_fwd_event(&ev[i],
RTE_SCHED_TYPE_ATOMIC);
- w->processed_pkts++;
+ processed_pkts++;
} else {
ev[i].queue_id++;
pipeline_fwd_event(&ev[i],
@@ -263,6 +264,7 @@ pipeline_queue_worker_multi_stage_burst_fwd(void *arg)
}
pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
+ w->processed_pkts += processed_pkts;
}
return 0;
--
2.25.1
* [dpdk-stable] [PATCH v1 2/2] app/eventdev: remove redundant enqueue in burst Tx
[not found] <20210122051916.1408093-1-feifei.wang2@arm.com>
2021-01-22 5:19 ` [dpdk-stable] [PATCH v1 1/2] app/eventdev: adjust event count order for pipeline test Feifei Wang
@ 2021-01-22 5:19 ` Feifei Wang
1 sibling, 0 replies; 2+ messages in thread
From: Feifei Wang @ 2021-01-22 5:19 UTC (permalink / raw)
To: Jerin Jacob, Pavan Nikhilesh, Harry van Haaren
Cc: dev, nd, Feifei Wang, pbhagavatula, stable, Ruifeng Wang
For the eventdev pipeline test, in the burst_tx cases, there is no need
to set ev.op to RTE_EVENT_OP_RELEASE and call pipeline_event_enqueue_burst
to release events. This is because in tx mode (internal_port = true),
the device's "implicit_release" capability is enabled, so the app
releases events via "rte_event_dequeue_burst" rather than by enqueuing
them.
Fixes: 314bcf58ca8f ("app/eventdev: add pipeline queue worker functions")
Cc: pbhagavatula@marvell.com
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
app/test-eventdev/test_pipeline_queue.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/app/test-eventdev/test_pipeline_queue.c b/app/test-eventdev/test_pipeline_queue.c
index 01f33e3b4..9a9febb19 100644
--- a/app/test-eventdev/test_pipeline_queue.c
+++ b/app/test-eventdev/test_pipeline_queue.c
@@ -83,16 +83,15 @@ pipeline_queue_worker_single_stage_burst_tx(void *arg)
rte_prefetch0(ev[i + 1].mbuf);
if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
pipeline_event_tx(dev, port, &ev[i]);
- ev[i].op = RTE_EVENT_OP_RELEASE;
w->processed_pkts++;
} else {
ev[i].queue_id++;
pipeline_fwd_event(&ev[i],
RTE_SCHED_TYPE_ATOMIC);
+ pipeline_event_enqueue_burst(dev, port, ev,
+ nb_rx);
}
}
-
- pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
}
return 0;
@@ -213,7 +212,6 @@ pipeline_queue_worker_multi_stage_burst_tx(void *arg)
if (ev[i].queue_id == tx_queue[ev[i].mbuf->port]) {
pipeline_event_tx(dev, port, &ev[i]);
- ev[i].op = RTE_EVENT_OP_RELEASE;
w->processed_pkts++;
continue;
}
@@ -222,9 +220,8 @@ pipeline_queue_worker_multi_stage_burst_tx(void *arg)
pipeline_fwd_event(&ev[i], cq_id != last_queue ?
sched_type_list[cq_id] :
RTE_SCHED_TYPE_ATOMIC);
+ pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
}
-
- pipeline_event_enqueue_burst(dev, port, ev, nb_rx);
}
return 0;
--
2.25.1