* [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements
@ 2020-03-09 6:50 Mattias Rönnblom
2020-03-09 6:50 ` [dpdk-dev] [PATCH 1/8] event/dsw: reduce latency in low-load situations Mattias Rönnblom
` (8 more replies)
0 siblings, 9 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:50 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
Performance and statistics improvements for the distributed software
(DSW) event device.
Mattias Rönnblom (8):
event/dsw: reduce latency in low-load situations
event/dsw: reduce max flows to speed up load balancing
event/dsw: extend statistics
event/dsw: improve migration mechanism
event/dsw: avoid migration waves in large systems
event/dsw: remove redundant control ring poll
event/dsw: remove unnecessary read barrier
event/dsw: add port busy cycles xstats
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/dsw/dsw_evdev.h | 45 ++-
drivers/event/dsw/dsw_event.c | 602 ++++++++++++++++++++-------------
drivers/event/dsw/dsw_xstats.c | 26 +-
4 files changed, 425 insertions(+), 249 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH 1/8] event/dsw: reduce latency in low-load situations
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
@ 2020-03-09 6:50 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 2/8] event/dsw: reduce max flows to speed up load balancing Mattias Rönnblom
` (7 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:50 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
In DSW, a port is considered idle if it can't produce any events for
the application to consume.
To slightly reduce wall-time latency, flush the port's output buffer
in case of such an empty dequeue.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_event.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 296adea18..7f1f29218 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1245,11 +1245,11 @@ dsw_event_dequeue_burst(void *port, struct rte_event *events, uint16_t num,
* seem to improve performance.
*/
dsw_port_record_seen_events(port, events, dequeued);
- }
- /* XXX: Assuming the port can't produce any more work,
- * consider flushing the output buffer, on dequeued ==
- * 0.
- */
+ } else /* Zero-size dequeue means a likely idle port, and thus
+ * we can afford trading some efficiency for a slightly
+ * reduced event wall-time latency.
+ */
+ dsw_port_flush_out_buffers(dsw, port);
#ifdef DSW_SORT_DEQUEUED
dsw_stable_sort(events, dequeued, sizeof(events[0]), dsw_cmp_event);
--
2.17.1
* [dpdk-dev] [PATCH 2/8] event/dsw: reduce max flows to speed up load balancing
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
2020-03-09 6:50 ` [dpdk-dev] [PATCH 1/8] event/dsw: reduce latency in low-load situations Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 3/8] event/dsw: extend statistics Mattias Rönnblom
` (6 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
Reduce the maximum number of DSW flows from 32k to 8k, to be able to
rebalance load faster.
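For illustration, the 24-bit event flow id is hashed down to a
DSW_MAX_FLOWS_BITS-wide DSW-level flow hash. Below is a minimal sketch
of such a fold, assuming the macros from dsw_evdev.h in the diff; it
is an approximation, not necessarily the driver's exact
dsw_flow_id_hash():

    #include <stdint.h>

    static uint16_t
    flow_id_to_dsw_flow(uint32_t flow_id)
    {
            uint16_t hash = 0;
            uint16_t offset = 0;

            /* XOR-fold the 24 significant flow id bits down to
             * DSW_MAX_FLOWS_BITS bits, so that every flow id maps to
             * one of the DSW_MAX_FLOWS (now 8k) DSW-level flows.
             */
            do {
                    hash ^= (flow_id >> offset) & DSW_MAX_FLOWS_MASK;
                    offset += DSW_MAX_FLOWS_BITS;
            } while (offset < 24);

            return hash;
    }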
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_evdev.h | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index 5c7b6108d..dc44bce81 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -19,8 +19,20 @@
#define DSW_MAX_EVENTS (16384)
-/* Code changes are required to allow more flows than 32k. */
-#define DSW_MAX_FLOWS_BITS (15)
+/* Multiple 24-bit flow ids will map to the same DSW-level flow. The
+ * number of DSW flows should be high enough to make it unlikely that
+ * flow ids of several large flows hash to the same DSW-level flow.
+ * Such collisions will limit parallelism and thus the number of cores
+ * that may be utilized. However, configuring a large number of DSW
+ * flows might potentially, depending on traffic and actual
+ * application flow id value range, result in each such DSW-level flow
+ * being very small. The effect of migrating such flows will be small,
+ * in terms of the amount of processing load redistributed. This will in turn
+ * reduce the load balancing speed, since flow migration rate has an
+ * upper limit. Code changes are required to allow > 32k DSW-level
+ * flows.
+ */
+#define DSW_MAX_FLOWS_BITS (13)
#define DSW_MAX_FLOWS (1<<(DSW_MAX_FLOWS_BITS))
#define DSW_MAX_FLOWS_MASK (DSW_MAX_FLOWS-1)
--
2.17.1
* [dpdk-dev] [PATCH 3/8] event/dsw: extend statistics
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
2020-03-09 6:50 ` [dpdk-dev] [PATCH 1/8] event/dsw: reduce latency in low-load situations Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 2/8] event/dsw: reduce max flows to speed up load balancing Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 4/8] event/dsw: improve migration mechanism Mattias Rönnblom
` (5 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
Extend DSW xstats.
To allow visualization of migrations, track the number of flow
immigrations in "port_<N>_immigrations". The "port_<N>_migrations"
counter retains its legacy semantics, but is renamed
"port_<N>_emigrations".
Expose the number of events currently undergoing processing
(i.e. pending releases) at a particular port.
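As a usage sketch, the new counters can be read through the regular
eventdev xstats API like any other DSW xstat; dev_id and port_id are
assumed to refer to an already-configured DSW event device:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_eventdev.h>

    static void
    print_port_migrations(uint8_t dev_id, unsigned int port_id)
    {
            char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
            uint64_t emigrations;
            uint64_t immigrations;

            /* Look up the per-port counters by their formatted names. */
            snprintf(name, sizeof(name), "port_%u_emigrations", port_id);
            emigrations = rte_event_dev_xstats_by_name_get(dev_id, name, NULL);

            snprintf(name, sizeof(name), "port_%u_immigrations", port_id);
            immigrations = rte_event_dev_xstats_by_name_get(dev_id, name, NULL);

            printf("port %u: %" PRIu64 " emigrations, %" PRIu64
                   " immigrations\n", port_id, emigrations, immigrations);
    }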
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_evdev.h | 16 ++--
drivers/event/dsw/dsw_event.c | 131 +++++++++++++++++----------------
drivers/event/dsw/dsw_xstats.c | 17 +++--
3 files changed, 91 insertions(+), 73 deletions(-)
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index dc44bce81..2c7f9efa3 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -162,18 +162,20 @@ struct dsw_port {
uint64_t total_busy_cycles;
/* For the ctl interface and flow migration mechanism. */
- uint64_t next_migration;
+ uint64_t next_emigration;
uint64_t migration_interval;
enum dsw_migration_state migration_state;
- uint64_t migration_start;
- uint64_t migrations;
- uint64_t migration_latency;
+ uint64_t emigration_start;
+ uint64_t emigrations;
+ uint64_t emigration_latency;
- uint8_t migration_target_port_id;
- struct dsw_queue_flow migration_target_qf;
+ uint8_t emigration_target_port_id;
+ struct dsw_queue_flow emigration_target_qf;
uint8_t cfm_cnt;
+ uint64_t immigrations;
+
uint16_t paused_flows_len;
struct dsw_queue_flow paused_flows[DSW_MAX_PAUSED_FLOWS];
@@ -187,11 +189,13 @@ struct dsw_port {
uint16_t seen_events_idx;
struct dsw_queue_flow seen_events[DSW_MAX_EVENTS_RECORDED];
+ uint64_t enqueue_calls;
uint64_t new_enqueued;
uint64_t forward_enqueued;
uint64_t release_enqueued;
uint64_t queue_enqueued[DSW_MAX_QUEUES];
+ uint64_t dequeue_calls;
uint64_t dequeued;
uint64_t queue_dequeued[DSW_MAX_QUEUES];
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 7f1f29218..69cff7aa2 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -385,12 +385,12 @@ dsw_retrieve_port_loads(struct dsw_evdev *dsw, int16_t *port_loads,
}
static bool
-dsw_select_migration_target(struct dsw_evdev *dsw,
- struct dsw_port *source_port,
- struct dsw_queue_flow_burst *bursts,
- uint16_t num_bursts, int16_t *port_loads,
- int16_t max_load, struct dsw_queue_flow *target_qf,
- uint8_t *target_port_id)
+dsw_select_emigration_target(struct dsw_evdev *dsw,
+ struct dsw_port *source_port,
+ struct dsw_queue_flow_burst *bursts,
+ uint16_t num_bursts, int16_t *port_loads,
+ int16_t max_load, struct dsw_queue_flow *target_qf,
+ uint8_t *target_port_id)
{
uint16_t source_load = port_loads[source_port->id];
uint16_t i;
@@ -598,39 +598,39 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
}
static void
-dsw_port_migration_stats(struct dsw_port *port)
+dsw_port_emigration_stats(struct dsw_port *port)
{
- uint64_t migration_latency;
+ uint64_t emigration_latency;
- migration_latency = (rte_get_timer_cycles() - port->migration_start);
- port->migration_latency += migration_latency;
- port->migrations++;
+ emigration_latency = (rte_get_timer_cycles() - port->emigration_start);
+ port->emigration_latency += emigration_latency;
+ port->emigrations++;
}
static void
-dsw_port_end_migration(struct dsw_evdev *dsw, struct dsw_port *port)
+dsw_port_end_emigration(struct dsw_evdev *dsw, struct dsw_port *port)
{
- uint8_t queue_id = port->migration_target_qf.queue_id;
- uint16_t flow_hash = port->migration_target_qf.flow_hash;
+ uint8_t queue_id = port->emigration_target_qf.queue_id;
+ uint16_t flow_hash = port->emigration_target_qf.flow_hash;
port->migration_state = DSW_MIGRATION_STATE_IDLE;
port->seen_events_len = 0;
- dsw_port_migration_stats(port);
+ dsw_port_emigration_stats(port);
if (dsw->queues[queue_id].schedule_type != RTE_SCHED_TYPE_PARALLEL) {
dsw_port_remove_paused_flow(port, queue_id, flow_hash);
dsw_port_flush_paused_events(dsw, port, queue_id, flow_hash);
}
- DSW_LOG_DP_PORT(DEBUG, port->id, "Migration completed for queue_id "
+ DSW_LOG_DP_PORT(DEBUG, port->id, "Emigration completed for queue_id "
"%d flow_hash %d.\n", queue_id, flow_hash);
}
static void
-dsw_port_consider_migration(struct dsw_evdev *dsw,
- struct dsw_port *source_port,
- uint64_t now)
+dsw_port_consider_emigration(struct dsw_evdev *dsw,
+ struct dsw_port *source_port,
+ uint64_t now)
{
bool any_port_below_limit;
struct dsw_queue_flow *seen_events = source_port->seen_events;
@@ -640,7 +640,7 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
int16_t source_port_load;
int16_t port_loads[dsw->num_ports];
- if (now < source_port->next_migration)
+ if (now < source_port->next_emigration)
return;
if (dsw->num_ports == 1)
@@ -649,25 +649,25 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
if (seen_events_len < DSW_MAX_EVENTS_RECORDED)
return;
- DSW_LOG_DP_PORT(DEBUG, source_port->id, "Considering migration.\n");
+ DSW_LOG_DP_PORT(DEBUG, source_port->id, "Considering emigration.\n");
/* Randomize interval to avoid having all threads considering
- * migration at the same in point in time, which might lead to
- * all choosing the same target port.
+ * emigration at the same point in time, which might lead
+ * to all choosing the same target port.
*/
- source_port->next_migration = now +
+ source_port->next_emigration = now +
source_port->migration_interval / 2 +
rte_rand() % source_port->migration_interval;
if (source_port->migration_state != DSW_MIGRATION_STATE_IDLE) {
DSW_LOG_DP_PORT(DEBUG, source_port->id,
- "Migration already in progress.\n");
+ "Emigration already in progress.\n");
return;
}
/* For simplicity, avoid migration in the unlikely case there
* is still events to consume in the in_buffer (from the last
- * migration).
+ * emigration).
*/
if (source_port->in_buffer_len > 0) {
DSW_LOG_DP_PORT(DEBUG, source_port->id, "There are still "
@@ -719,52 +719,56 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
}
/* The strategy is to first try to find a flow to move to a
- * port with low load (below the migration-attempt
+ * port with low load (below the emigration-attempt
* threshold). If that fails, we try to find a port which is
* below the max threshold, and also less loaded than this
* port is.
*/
- if (!dsw_select_migration_target(dsw, source_port, bursts, num_bursts,
- port_loads,
- DSW_MIN_SOURCE_LOAD_FOR_MIGRATION,
- &source_port->migration_target_qf,
- &source_port->migration_target_port_id)
+ if (!dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
+ port_loads,
+ DSW_MIN_SOURCE_LOAD_FOR_MIGRATION,
+ &source_port->emigration_target_qf,
+ &source_port->emigration_target_port_id)
&&
- !dsw_select_migration_target(dsw, source_port, bursts, num_bursts,
- port_loads,
- DSW_MAX_TARGET_LOAD_FOR_MIGRATION,
- &source_port->migration_target_qf,
- &source_port->migration_target_port_id))
+ !dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
+ port_loads,
+ DSW_MAX_TARGET_LOAD_FOR_MIGRATION,
+ &source_port->emigration_target_qf,
+ &source_port->emigration_target_port_id))
return;
DSW_LOG_DP_PORT(DEBUG, source_port->id, "Migrating queue_id %d "
"flow_hash %d from port %d to port %d.\n",
- source_port->migration_target_qf.queue_id,
- source_port->migration_target_qf.flow_hash,
- source_port->id, source_port->migration_target_port_id);
+ source_port->emigration_target_qf.queue_id,
+ source_port->emigration_target_qf.flow_hash,
+ source_port->id,
+ source_port->emigration_target_port_id);
/* We have a winner. */
source_port->migration_state = DSW_MIGRATION_STATE_PAUSING;
- source_port->migration_start = rte_get_timer_cycles();
+ source_port->emigration_start = rte_get_timer_cycles();
/* No need to go through the whole pause procedure for
* parallel queues, since atomic/ordered semantics need not to
* be maintained.
*/
- if (dsw->queues[source_port->migration_target_qf.queue_id].schedule_type
- == RTE_SCHED_TYPE_PARALLEL) {
- uint8_t queue_id = source_port->migration_target_qf.queue_id;
- uint16_t flow_hash = source_port->migration_target_qf.flow_hash;
- uint8_t dest_port_id = source_port->migration_target_port_id;
+ if (dsw->queues[source_port->emigration_target_qf.queue_id].
+ schedule_type == RTE_SCHED_TYPE_PARALLEL) {
+ uint8_t queue_id =
+ source_port->emigration_target_qf.queue_id;
+ uint16_t flow_hash =
+ source_port->emigration_target_qf.flow_hash;
+ uint8_t dest_port_id =
+ source_port->emigration_target_port_id;
/* Single byte-sized stores are always atomic. */
dsw->queues[queue_id].flow_to_port_map[flow_hash] =
dest_port_id;
rte_smp_wmb();
- dsw_port_end_migration(dsw, source_port);
+ dsw_port_end_emigration(dsw, source_port);
return;
}
@@ -775,12 +779,12 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
dsw_port_flush_out_buffers(dsw, source_port);
dsw_port_add_paused_flow(source_port,
- source_port->migration_target_qf.queue_id,
- source_port->migration_target_qf.flow_hash);
+ source_port->emigration_target_qf.queue_id,
+ source_port->emigration_target_qf.flow_hash);
dsw_port_ctl_broadcast(dsw, source_port, DSW_CTL_PAUS_REQ,
- source_port->migration_target_qf.queue_id,
- source_port->migration_target_qf.flow_hash);
+ source_port->emigration_target_qf.queue_id,
+ source_port->emigration_target_qf.flow_hash);
source_port->cfm_cnt = 0;
}
@@ -808,6 +812,9 @@ dsw_port_handle_unpause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
rte_smp_rmb();
+ if (dsw_schedule(dsw, queue_id, paused_flow_hash) == port->id)
+ port->immigrations++;
+
dsw_port_ctl_enqueue(&dsw->ports[originating_port_id], &cfm);
dsw_port_flush_paused_events(dsw, port, queue_id, paused_flow_hash);
@@ -816,10 +823,10 @@ dsw_port_handle_unpause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
#define FORWARD_BURST_SIZE (32)
static void
-dsw_port_forward_migrated_flow(struct dsw_port *source_port,
- struct rte_event_ring *dest_ring,
- uint8_t queue_id,
- uint16_t flow_hash)
+dsw_port_forward_emigrated_flow(struct dsw_port *source_port,
+ struct rte_event_ring *dest_ring,
+ uint8_t queue_id,
+ uint16_t flow_hash)
{
uint16_t events_left;
@@ -868,9 +875,9 @@ static void
dsw_port_move_migrating_flow(struct dsw_evdev *dsw,
struct dsw_port *source_port)
{
- uint8_t queue_id = source_port->migration_target_qf.queue_id;
- uint16_t flow_hash = source_port->migration_target_qf.flow_hash;
- uint8_t dest_port_id = source_port->migration_target_port_id;
+ uint8_t queue_id = source_port->emigration_target_qf.queue_id;
+ uint16_t flow_hash = source_port->emigration_target_qf.flow_hash;
+ uint8_t dest_port_id = source_port->emigration_target_port_id;
struct dsw_port *dest_port = &dsw->ports[dest_port_id];
dsw_port_flush_out_buffers(dsw, source_port);
@@ -880,8 +887,8 @@ dsw_port_move_migrating_flow(struct dsw_evdev *dsw,
dsw->queues[queue_id].flow_to_port_map[flow_hash] =
dest_port_id;
- dsw_port_forward_migrated_flow(source_port, dest_port->in_ring,
- queue_id, flow_hash);
+ dsw_port_forward_emigrated_flow(source_port, dest_port->in_ring,
+ queue_id, flow_hash);
/* Flow table update and migration destination port's enqueues
* must be seen before the control message.
@@ -907,7 +914,7 @@ dsw_port_handle_confirm(struct dsw_evdev *dsw, struct dsw_port *port)
port->migration_state = DSW_MIGRATION_STATE_FORWARDING;
break;
case DSW_MIGRATION_STATE_UNPAUSING:
- dsw_port_end_migration(dsw, port);
+ dsw_port_end_emigration(dsw, port);
break;
default:
RTE_ASSERT(0);
@@ -987,7 +994,7 @@ dsw_port_bg_process(struct dsw_evdev *dsw, struct dsw_port *port)
dsw_port_consider_load_update(port, now);
- dsw_port_consider_migration(dsw, port, now);
+ dsw_port_consider_emigration(dsw, port, now);
port->ops_since_bg_task = 0;
}
diff --git a/drivers/event/dsw/dsw_xstats.c b/drivers/event/dsw/dsw_xstats.c
index c3f5db89c..d332a57b6 100644
--- a/drivers/event/dsw/dsw_xstats.c
+++ b/drivers/event/dsw/dsw_xstats.c
@@ -84,16 +84,17 @@ dsw_xstats_port_get_queue_dequeued(struct dsw_evdev *dsw, uint8_t port_id,
return dsw->ports[port_id].queue_dequeued[queue_id];
}
-DSW_GEN_PORT_ACCESS_FN(migrations)
+DSW_GEN_PORT_ACCESS_FN(emigrations)
+DSW_GEN_PORT_ACCESS_FN(immigrations)
static uint64_t
dsw_xstats_port_get_migration_latency(struct dsw_evdev *dsw, uint8_t port_id,
uint8_t queue_id __rte_unused)
{
- uint64_t total_latency = dsw->ports[port_id].migration_latency;
- uint64_t num_migrations = dsw->ports[port_id].migrations;
+ uint64_t total_latency = dsw->ports[port_id].emigration_latency;
+ uint64_t num_emigrations = dsw->ports[port_id].emigrations;
- return num_migrations > 0 ? total_latency / num_migrations : 0;
+ return num_emigrations > 0 ? total_latency / num_emigrations : 0;
}
static uint64_t
@@ -110,6 +111,8 @@ dsw_xstats_port_get_event_proc_latency(struct dsw_evdev *dsw, uint8_t port_id,
DSW_GEN_PORT_ACCESS_FN(inflight_credits)
+DSW_GEN_PORT_ACCESS_FN(pending_releases)
+
static uint64_t
dsw_xstats_port_get_load(struct dsw_evdev *dsw, uint8_t port_id,
uint8_t queue_id __rte_unused)
@@ -136,14 +139,18 @@ static struct dsw_xstats_port dsw_port_xstats[] = {
false },
{ "port_%u_queue_%u_dequeued", dsw_xstats_port_get_queue_dequeued,
true },
- { "port_%u_migrations", dsw_xstats_port_get_migrations,
+ { "port_%u_emigrations", dsw_xstats_port_get_emigrations,
false },
{ "port_%u_migration_latency", dsw_xstats_port_get_migration_latency,
false },
+ { "port_%u_immigrations", dsw_xstats_port_get_immigrations,
+ false },
{ "port_%u_event_proc_latency", dsw_xstats_port_get_event_proc_latency,
false },
{ "port_%u_inflight_credits", dsw_xstats_port_get_inflight_credits,
false },
+ { "port_%u_pending_releases", dsw_xstats_port_get_pending_releases,
+ false },
{ "port_%u_load", dsw_xstats_port_get_load,
false },
{ "port_%u_last_bg", dsw_xstats_port_get_last_bg,
--
2.17.1
* [dpdk-dev] [PATCH 4/8] event/dsw: improve migration mechanism
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (2 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 3/8] event/dsw: extend statistics Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems Mattias Rönnblom
` (4 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
Allow moving multiple flows in one migration transaction, to
rebalance load more quickly.
Introduce a threshold to avoid migrating flows between ports with very
similar load.
Simplify logic for selecting which flow to migrate. The aim is now to
move flows in such a way that the receiving port is as lightly-loaded
as possible (after receiving the flow), while still migrating enough
flows from the source port to reduce its load. This is essentially how
the legacy strategy worked as well, but the code is more readable.
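As a rough worked example of the new selection criteria, with loads
expressed as percentages of DSW_MAX_LOAD: a source port at 80% load,
considering a flow estimated to carry 10% load, rates a target port at
30% load (resulting target load 40%, weight 100 - 40 = 60) higher than
a target at 60% load (resulting load 70%, weight 30), and rejects any
target whose resulting load would exceed the source port's own load.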
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_evdev.h | 15 +-
drivers/event/dsw/dsw_event.c | 541 +++++++++++++++++++++-------------
2 files changed, 343 insertions(+), 213 deletions(-)
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index 2c7f9efa3..ced40ef8d 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -93,11 +93,14 @@
#define DSW_MIGRATION_INTERVAL (1000)
#define DSW_MIN_SOURCE_LOAD_FOR_MIGRATION (DSW_LOAD_FROM_PERCENT(70))
#define DSW_MAX_TARGET_LOAD_FOR_MIGRATION (DSW_LOAD_FROM_PERCENT(95))
+#define DSW_REBALANCE_THRESHOLD (DSW_LOAD_FROM_PERCENT(3))
#define DSW_MAX_EVENTS_RECORDED (128)
+#define DSW_MAX_FLOWS_PER_MIGRATION (8)
+
/* Only one outstanding migration per port is allowed */
-#define DSW_MAX_PAUSED_FLOWS (DSW_MAX_PORTS)
+#define DSW_MAX_PAUSED_FLOWS (DSW_MAX_PORTS*DSW_MAX_FLOWS_PER_MIGRATION)
/* Enough room for paus request/confirm and unpaus request/confirm for
* all possible senders.
@@ -170,8 +173,10 @@ struct dsw_port {
uint64_t emigrations;
uint64_t emigration_latency;
- uint8_t emigration_target_port_id;
- struct dsw_queue_flow emigration_target_qf;
+ uint8_t emigration_target_port_ids[DSW_MAX_FLOWS_PER_MIGRATION];
+ struct dsw_queue_flow
+ emigration_target_qfs[DSW_MAX_FLOWS_PER_MIGRATION];
+ uint8_t emigration_targets_len;
uint8_t cfm_cnt;
uint64_t immigrations;
@@ -244,8 +249,8 @@ struct dsw_evdev {
struct dsw_ctl_msg {
uint8_t type;
uint8_t originating_port_id;
- uint8_t queue_id;
- uint16_t flow_hash;
+ uint8_t qfs_len;
+ struct dsw_queue_flow qfs[DSW_MAX_FLOWS_PER_MIGRATION];
} __rte_aligned(4);
uint16_t dsw_event_enqueue(void *port, const struct rte_event *event);
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 69cff7aa2..21c102275 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -189,58 +189,75 @@ dsw_port_ctl_dequeue(struct dsw_port *port, struct dsw_ctl_msg *msg)
static void
dsw_port_ctl_broadcast(struct dsw_evdev *dsw, struct dsw_port *source_port,
- uint8_t type, uint8_t queue_id, uint16_t flow_hash)
+ uint8_t type, struct dsw_queue_flow *qfs,
+ uint8_t qfs_len)
{
uint16_t port_id;
struct dsw_ctl_msg msg = {
.type = type,
.originating_port_id = source_port->id,
- .queue_id = queue_id,
- .flow_hash = flow_hash
+ .qfs_len = qfs_len
};
+ memcpy(msg.qfs, qfs, sizeof(struct dsw_queue_flow) * qfs_len);
+
for (port_id = 0; port_id < dsw->num_ports; port_id++)
if (port_id != source_port->id)
dsw_port_ctl_enqueue(&dsw->ports[port_id], &msg);
}
-static bool
-dsw_port_is_flow_paused(struct dsw_port *port, uint8_t queue_id,
- uint16_t flow_hash)
+static __rte_always_inline bool
+dsw_is_queue_flow_in_ary(const struct dsw_queue_flow *qfs, uint16_t qfs_len,
+ uint8_t queue_id, uint16_t flow_hash)
{
uint16_t i;
- for (i = 0; i < port->paused_flows_len; i++) {
- struct dsw_queue_flow *qf = &port->paused_flows[i];
- if (qf->queue_id == queue_id &&
- qf->flow_hash == flow_hash)
+ for (i = 0; i < qfs_len; i++)
+ if (qfs[i].queue_id == queue_id &&
+ qfs[i].flow_hash == flow_hash)
return true;
- }
+
return false;
}
+static __rte_always_inline bool
+dsw_port_is_flow_paused(struct dsw_port *port, uint8_t queue_id,
+ uint16_t flow_hash)
+{
+ return dsw_is_queue_flow_in_ary(port->paused_flows,
+ port->paused_flows_len,
+ queue_id, flow_hash);
+}
+
static void
-dsw_port_add_paused_flow(struct dsw_port *port, uint8_t queue_id,
- uint16_t paused_flow_hash)
+dsw_port_add_paused_flows(struct dsw_port *port, struct dsw_queue_flow *qfs,
+ uint8_t qfs_len)
{
- port->paused_flows[port->paused_flows_len] = (struct dsw_queue_flow) {
- .queue_id = queue_id,
- .flow_hash = paused_flow_hash
+ uint8_t i;
+
+ for (i = 0; i < qfs_len; i++) {
+ struct dsw_queue_flow *qf = &qfs[i];
+
+ DSW_LOG_DP_PORT(DEBUG, port->id,
+ "Pausing queue_id %d flow_hash %d.\n",
+ qf->queue_id, qf->flow_hash);
+
+ port->paused_flows[port->paused_flows_len] = *qf;
+ port->paused_flows_len++;
};
- port->paused_flows_len++;
}
static void
-dsw_port_remove_paused_flow(struct dsw_port *port, uint8_t queue_id,
- uint16_t paused_flow_hash)
+dsw_port_remove_paused_flow(struct dsw_port *port,
+ struct dsw_queue_flow *target_qf)
{
uint16_t i;
for (i = 0; i < port->paused_flows_len; i++) {
struct dsw_queue_flow *qf = &port->paused_flows[i];
- if (qf->queue_id == queue_id &&
- qf->flow_hash == paused_flow_hash) {
+ if (qf->queue_id == target_qf->queue_id &&
+ qf->flow_hash == target_qf->flow_hash) {
uint16_t last_idx = port->paused_flows_len-1;
if (i != last_idx)
port->paused_flows[i] =
@@ -251,30 +268,37 @@ dsw_port_remove_paused_flow(struct dsw_port *port, uint8_t queue_id,
}
}
+static void
+dsw_port_remove_paused_flows(struct dsw_port *port,
+ struct dsw_queue_flow *qfs, uint8_t qfs_len)
+{
+ uint8_t i;
+
+ for (i = 0; i < qfs_len; i++)
+ dsw_port_remove_paused_flow(port, &qfs[i]);
+
+}
+
static void
dsw_port_flush_out_buffers(struct dsw_evdev *dsw, struct dsw_port *source_port);
static void
-dsw_port_handle_pause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
- uint8_t originating_port_id, uint8_t queue_id,
- uint16_t paused_flow_hash)
+dsw_port_handle_pause_flows(struct dsw_evdev *dsw, struct dsw_port *port,
+ uint8_t originating_port_id,
+ struct dsw_queue_flow *paused_qfs,
+ uint8_t qfs_len)
{
struct dsw_ctl_msg cfm = {
.type = DSW_CTL_CFM,
- .originating_port_id = port->id,
- .queue_id = queue_id,
- .flow_hash = paused_flow_hash
+ .originating_port_id = port->id
};
- DSW_LOG_DP_PORT(DEBUG, port->id, "Pausing queue_id %d flow_hash %d.\n",
- queue_id, paused_flow_hash);
-
/* There might be already-scheduled events belonging to the
* paused flow in the output buffers.
*/
dsw_port_flush_out_buffers(dsw, port);
- dsw_port_add_paused_flow(port, queue_id, paused_flow_hash);
+ dsw_port_add_paused_flows(port, paused_qfs, qfs_len);
/* Make sure any stores to the original port's in_ring is seen
* before the ctl message.
@@ -284,47 +308,11 @@ dsw_port_handle_pause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
dsw_port_ctl_enqueue(&dsw->ports[originating_port_id], &cfm);
}
-static void
-dsw_find_lowest_load_port(uint8_t *port_ids, uint16_t num_port_ids,
- uint8_t exclude_port_id, int16_t *port_loads,
- uint8_t *target_port_id, int16_t *target_load)
-{
- int16_t candidate_port_id = -1;
- int16_t candidate_load = DSW_MAX_LOAD;
- uint16_t i;
-
- for (i = 0; i < num_port_ids; i++) {
- uint8_t port_id = port_ids[i];
- if (port_id != exclude_port_id) {
- int16_t load = port_loads[port_id];
- if (candidate_port_id == -1 ||
- load < candidate_load) {
- candidate_port_id = port_id;
- candidate_load = load;
- }
- }
- }
- *target_port_id = candidate_port_id;
- *target_load = candidate_load;
-}
-
struct dsw_queue_flow_burst {
struct dsw_queue_flow queue_flow;
uint16_t count;
};
-static inline int
-dsw_cmp_burst(const void *v_burst_a, const void *v_burst_b)
-{
- const struct dsw_queue_flow_burst *burst_a = v_burst_a;
- const struct dsw_queue_flow_burst *burst_b = v_burst_b;
-
- int a_count = burst_a->count;
- int b_count = burst_b->count;
-
- return a_count - b_count;
-}
-
#define DSW_QF_TO_INT(_qf) \
((int)((((_qf)->queue_id)<<16)|((_qf)->flow_hash)))
@@ -363,8 +351,6 @@ dsw_sort_qfs_to_bursts(struct dsw_queue_flow *qfs, uint16_t qfs_len,
current_burst->count++;
}
- qsort(bursts, num_bursts, sizeof(bursts[0]), dsw_cmp_burst);
-
return num_bursts;
}
@@ -384,44 +370,158 @@ dsw_retrieve_port_loads(struct dsw_evdev *dsw, int16_t *port_loads,
return below_limit;
}
+static int16_t
+dsw_flow_load(uint16_t num_events, int16_t port_load)
+{
+ return ((int32_t)port_load * (int32_t)num_events) /
+ DSW_MAX_EVENTS_RECORDED;
+}
+
+static int16_t
+dsw_evaluate_migration(int16_t source_load, int16_t target_load,
+ int16_t flow_load)
+{
+ int32_t res_target_load;
+ int32_t imbalance;
+
+ if (target_load > DSW_MAX_TARGET_LOAD_FOR_MIGRATION)
+ return -1;
+
+ imbalance = source_load - target_load;
+
+ if (imbalance < DSW_REBALANCE_THRESHOLD)
+ return -1;
+
+ res_target_load = target_load + flow_load;
+
+ /* If the estimated load of the target port will be higher
+ * than the source port's load, it doesn't make sense to move
+ * the flow.
+ */
+ if (res_target_load > source_load)
+ return -1;
+
+ /* The more idle the target will be, the better. This will
+ * make migration prefer moving smaller flows, and flows to
+ * lightly loaded ports.
+ */
+ return DSW_MAX_LOAD - res_target_load;
+}
+
+static bool
+dsw_is_serving_port(struct dsw_evdev *dsw, uint8_t port_id, uint8_t queue_id)
+{
+ struct dsw_queue *queue = &dsw->queues[queue_id];
+ uint16_t i;
+
+ for (i = 0; i < queue->num_serving_ports; i++)
+ if (queue->serving_ports[i] == port_id)
+ return true;
+
+ return false;
+}
+
static bool
dsw_select_emigration_target(struct dsw_evdev *dsw,
- struct dsw_port *source_port,
- struct dsw_queue_flow_burst *bursts,
- uint16_t num_bursts, int16_t *port_loads,
- int16_t max_load, struct dsw_queue_flow *target_qf,
- uint8_t *target_port_id)
+ struct dsw_queue_flow_burst *bursts,
+ uint16_t num_bursts, uint8_t source_port_id,
+ int16_t *port_loads, uint16_t num_ports,
+ uint8_t *target_port_ids,
+ struct dsw_queue_flow *target_qfs,
+ uint8_t *targets_len)
{
- uint16_t source_load = port_loads[source_port->id];
+ int16_t source_port_load = port_loads[source_port_id];
+ struct dsw_queue_flow *candidate_qf;
+ uint8_t candidate_port_id;
+ int16_t candidate_weight = -1;
+ int16_t candidate_flow_load;
uint16_t i;
+ if (source_port_load < DSW_MIN_SOURCE_LOAD_FOR_MIGRATION)
+ return false;
+
for (i = 0; i < num_bursts; i++) {
- struct dsw_queue_flow *qf = &bursts[i].queue_flow;
+ struct dsw_queue_flow_burst *burst = &bursts[i];
+ struct dsw_queue_flow *qf = &burst->queue_flow;
+ int16_t flow_load;
+ uint16_t port_id;
- if (dsw_port_is_flow_paused(source_port, qf->queue_id,
- qf->flow_hash))
+ if (dsw_is_queue_flow_in_ary(target_qfs, *targets_len,
+ qf->queue_id, qf->flow_hash))
continue;
- struct dsw_queue *queue = &dsw->queues[qf->queue_id];
- int16_t target_load;
+ flow_load = dsw_flow_load(burst->count, source_port_load);
- dsw_find_lowest_load_port(queue->serving_ports,
- queue->num_serving_ports,
- source_port->id, port_loads,
- target_port_id, &target_load);
+ for (port_id = 0; port_id < num_ports; port_id++) {
+ int16_t weight;
- if (target_load < source_load &&
- target_load < max_load) {
- *target_qf = *qf;
- return true;
+ if (port_id == source_port_id)
+ continue;
+
+ if (!dsw_is_serving_port(dsw, port_id, qf->queue_id))
+ continue;
+
+ weight = dsw_evaluate_migration(source_port_load,
+ port_loads[port_id],
+ flow_load);
+
+ if (weight > candidate_weight) {
+ candidate_qf = qf;
+ candidate_port_id = port_id;
+ candidate_weight = weight;
+ candidate_flow_load = flow_load;
+ }
}
}
- DSW_LOG_DP_PORT(DEBUG, source_port->id, "For the %d flows considered, "
- "no target port found with load less than %d.\n",
- num_bursts, DSW_LOAD_TO_PERCENT(max_load));
+ if (candidate_weight < 0)
+ return false;
- return false;
+ DSW_LOG_DP_PORT(DEBUG, source_port_id, "Selected queue_id %d "
+ "flow_hash %d (with flow load %d) for migration "
+ "to port %d.\n", candidate_qf->queue_id,
+ candidate_qf->flow_hash,
+ DSW_LOAD_TO_PERCENT(candidate_flow_load),
+ candidate_port_id);
+
+ port_loads[candidate_port_id] += candidate_flow_load;
+ port_loads[source_port_id] -= candidate_flow_load;
+
+ target_port_ids[*targets_len] = candidate_port_id;
+ target_qfs[*targets_len] = *candidate_qf;
+ (*targets_len)++;
+
+ return true;
+}
+
+static void
+dsw_select_emigration_targets(struct dsw_evdev *dsw,
+ struct dsw_port *source_port,
+ struct dsw_queue_flow_burst *bursts,
+ uint16_t num_bursts, int16_t *port_loads)
+{
+ struct dsw_queue_flow *target_qfs = source_port->emigration_target_qfs;
+ uint8_t *target_port_ids = source_port->emigration_target_port_ids;
+ uint8_t *targets_len = &source_port->emigration_targets_len;
+ uint8_t i;
+
+ for (i = 0; i < DSW_MAX_FLOWS_PER_MIGRATION; i++) {
+ bool found;
+
+ found = dsw_select_emigration_target(dsw, bursts, num_bursts,
+ source_port->id,
+ port_loads, dsw->num_ports,
+ target_port_ids,
+ target_qfs,
+ targets_len);
+ if (!found)
+ break;
+ }
+
+ if (*targets_len == 0)
+ DSW_LOG_DP_PORT(DEBUG, source_port->id,
+ "For the %d flows considered, no target port "
+ "was found.\n", num_bursts);
}
static uint8_t
@@ -562,7 +662,7 @@ dsw_port_buffer_event(struct dsw_evdev *dsw, struct dsw_port *source_port,
static void
dsw_port_flush_paused_events(struct dsw_evdev *dsw,
struct dsw_port *source_port,
- uint8_t queue_id, uint16_t paused_flow_hash)
+ const struct dsw_queue_flow *qf)
{
uint16_t paused_events_len = source_port->paused_events_len;
struct rte_event paused_events[paused_events_len];
@@ -572,7 +672,7 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
if (paused_events_len == 0)
return;
- if (dsw_port_is_flow_paused(source_port, queue_id, paused_flow_hash))
+ if (dsw_port_is_flow_paused(source_port, qf->queue_id, qf->flow_hash))
return;
rte_memcpy(paused_events, source_port->paused_events,
@@ -580,7 +680,7 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
source_port->paused_events_len = 0;
- dest_port_id = dsw_schedule(dsw, queue_id, paused_flow_hash);
+ dest_port_id = dsw_schedule(dsw, qf->queue_id, qf->flow_hash);
for (i = 0; i < paused_events_len; i++) {
struct rte_event *event = &paused_events[i];
@@ -588,8 +688,8 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
flow_hash = dsw_flow_id_hash(event->flow_id);
- if (event->queue_id == queue_id &&
- flow_hash == paused_flow_hash)
+ if (event->queue_id == qf->queue_id &&
+ flow_hash == qf->flow_hash)
dsw_port_buffer_non_paused(dsw, source_port,
dest_port_id, event);
else
@@ -598,33 +698,94 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
}
static void
-dsw_port_emigration_stats(struct dsw_port *port)
+dsw_port_emigration_stats(struct dsw_port *port, uint8_t finished)
{
- uint64_t emigration_latency;
+ uint64_t flow_migration_latency;
- emigration_latency = (rte_get_timer_cycles() - port->emigration_start);
- port->emigration_latency += emigration_latency;
- port->emigrations++;
+ flow_migration_latency =
+ (rte_get_timer_cycles() - port->emigration_start);
+ port->emigration_latency += (flow_migration_latency * finished);
+ port->emigrations += finished;
}
static void
-dsw_port_end_emigration(struct dsw_evdev *dsw, struct dsw_port *port)
+dsw_port_end_emigration(struct dsw_evdev *dsw, struct dsw_port *port,
+ uint8_t schedule_type)
{
- uint8_t queue_id = port->emigration_target_qf.queue_id;
- uint16_t flow_hash = port->emigration_target_qf.flow_hash;
+ uint8_t i;
+ struct dsw_queue_flow left_qfs[DSW_MAX_FLOWS_PER_MIGRATION];
+ uint8_t left_port_ids[DSW_MAX_FLOWS_PER_MIGRATION];
+ uint8_t left_qfs_len = 0;
+ uint8_t finished;
+
+ for (i = 0; i < port->emigration_targets_len; i++) {
+ struct dsw_queue_flow *qf = &port->emigration_target_qfs[i];
+ uint8_t queue_id = qf->queue_id;
+ uint8_t queue_schedule_type =
+ dsw->queues[queue_id].schedule_type;
+ uint16_t flow_hash = qf->flow_hash;
+
+ if (queue_schedule_type != schedule_type) {
+ left_port_ids[left_qfs_len] =
+ port->emigration_target_port_ids[i];
+ left_qfs[left_qfs_len] = *qf;
+ left_qfs_len++;
+ continue;
+ }
+
+ DSW_LOG_DP_PORT(DEBUG, port->id, "Migration completed for "
+ "queue_id %d flow_hash %d.\n", queue_id,
+ flow_hash);
+
+ if (queue_schedule_type == RTE_SCHED_TYPE_ATOMIC) {
+ dsw_port_remove_paused_flow(port, qf);
+ dsw_port_flush_paused_events(dsw, port, qf);
+ }
+ }
- port->migration_state = DSW_MIGRATION_STATE_IDLE;
- port->seen_events_len = 0;
+ finished = port->emigration_targets_len - left_qfs_len;
- dsw_port_emigration_stats(port);
+ if (finished > 0)
+ dsw_port_emigration_stats(port, finished);
- if (dsw->queues[queue_id].schedule_type != RTE_SCHED_TYPE_PARALLEL) {
- dsw_port_remove_paused_flow(port, queue_id, flow_hash);
- dsw_port_flush_paused_events(dsw, port, queue_id, flow_hash);
+ for (i = 0; i < left_qfs_len; i++) {
+ port->emigration_target_port_ids[i] = left_port_ids[i];
+ port->emigration_target_qfs[i] = left_qfs[i];
}
+ port->emigration_targets_len = left_qfs_len;
- DSW_LOG_DP_PORT(DEBUG, port->id, "Emigration completed for queue_id "
- "%d flow_hash %d.\n", queue_id, flow_hash);
+ if (port->emigration_targets_len == 0) {
+ port->migration_state = DSW_MIGRATION_STATE_IDLE;
+ port->seen_events_len = 0;
+ }
+}
+
+static void
+dsw_port_move_parallel_flows(struct dsw_evdev *dsw,
+ struct dsw_port *source_port)
+{
+ uint8_t i;
+
+ for (i = 0; i < source_port->emigration_targets_len; i++) {
+ struct dsw_queue_flow *qf =
+ &source_port->emigration_target_qfs[i];
+ uint8_t queue_id = qf->queue_id;
+
+ if (dsw->queues[queue_id].schedule_type ==
+ RTE_SCHED_TYPE_PARALLEL) {
+ uint8_t dest_port_id =
+ source_port->emigration_target_port_ids[i];
+ uint16_t flow_hash = qf->flow_hash;
+
+ /* Single byte-sized stores are always atomic. */
+ dsw->queues[queue_id].flow_to_port_map[flow_hash] =
+ dest_port_id;
+ }
+ }
+
+ rte_smp_wmb();
+
+ dsw_port_end_emigration(dsw, source_port, RTE_SCHED_TYPE_PARALLEL);
}
static void
@@ -678,9 +839,9 @@ dsw_port_consider_emigration(struct dsw_evdev *dsw,
source_port_load = rte_atomic16_read(&source_port->load);
if (source_port_load < DSW_MIN_SOURCE_LOAD_FOR_MIGRATION) {
DSW_LOG_DP_PORT(DEBUG, source_port->id,
- "Load %d is below threshold level %d.\n",
- DSW_LOAD_TO_PERCENT(source_port_load),
- DSW_LOAD_TO_PERCENT(DSW_MIN_SOURCE_LOAD_FOR_MIGRATION));
+ "Load %d is below threshold level %d.\n",
+ DSW_LOAD_TO_PERCENT(source_port_load),
+ DSW_LOAD_TO_PERCENT(DSW_MIN_SOURCE_LOAD_FOR_MIGRATION));
return;
}
@@ -697,16 +858,9 @@ dsw_port_consider_emigration(struct dsw_evdev *dsw,
return;
}
- /* Sort flows into 'bursts' to allow attempting to migrating
- * small (but still active) flows first - this it to avoid
- * having large flows moving around the worker cores too much
- * (to avoid cache misses, among other things). Of course, the
- * number of recorded events (queue+flow ids) are limited, and
- * provides only a snapshot, so only so many conclusions can
- * be drawn from this data.
- */
num_bursts = dsw_sort_qfs_to_bursts(seen_events, seen_events_len,
bursts);
+
/* For non-big-little systems, there's no point in moving the
* only (known) flow.
*/
@@ -718,33 +872,11 @@ dsw_port_consider_emigration(struct dsw_evdev *dsw,
return;
}
- /* The strategy is to first try to find a flow to move to a
- * port with low load (below the emigration-attempt
- * threshold). If that fails, we try to find a port which is
- * below the max threshold, and also less loaded than this
- * port is.
- */
- if (!dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
- port_loads,
- DSW_MIN_SOURCE_LOAD_FOR_MIGRATION,
- &source_port->emigration_target_qf,
- &source_port->emigration_target_port_id)
- &&
- !dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
- port_loads,
- DSW_MAX_TARGET_LOAD_FOR_MIGRATION,
- &source_port->emigration_target_qf,
- &source_port->emigration_target_port_id))
- return;
-
- DSW_LOG_DP_PORT(DEBUG, source_port->id, "Migrating queue_id %d "
- "flow_hash %d from port %d to port %d.\n",
- source_port->emigration_target_qf.queue_id,
- source_port->emigration_target_qf.flow_hash,
- source_port->id,
- source_port->emigration_target_port_id);
+ dsw_select_emigration_targets(dsw, source_port, bursts, num_bursts,
+ port_loads);
- /* We have a winner. */
+ if (source_port->emigration_targets_len == 0)
+ return;
source_port->migration_state = DSW_MIGRATION_STATE_PAUSING;
source_port->emigration_start = rte_get_timer_cycles();
@@ -753,71 +885,58 @@ dsw_port_consider_emigration(struct dsw_evdev *dsw,
* parallel queues, since atomic/ordered semantics need not to
* be maintained.
*/
+ dsw_port_move_parallel_flows(dsw, source_port);
- if (dsw->queues[source_port->emigration_target_qf.queue_id].
- schedule_type == RTE_SCHED_TYPE_PARALLEL) {
- uint8_t queue_id =
- source_port->emigration_target_qf.queue_id;
- uint16_t flow_hash =
- source_port->emigration_target_qf.flow_hash;
- uint8_t dest_port_id =
- source_port->emigration_target_port_id;
-
- /* Single byte-sized stores are always atomic. */
- dsw->queues[queue_id].flow_to_port_map[flow_hash] =
- dest_port_id;
- rte_smp_wmb();
-
- dsw_port_end_emigration(dsw, source_port);
-
+ /* All flows were on PARALLEL queues. */
+ if (source_port->migration_state == DSW_MIGRATION_STATE_IDLE)
return;
- }
/* There might be 'loopback' events already scheduled in the
* output buffers.
*/
dsw_port_flush_out_buffers(dsw, source_port);
- dsw_port_add_paused_flow(source_port,
- source_port->emigration_target_qf.queue_id,
- source_port->emigration_target_qf.flow_hash);
+ dsw_port_add_paused_flows(source_port,
+ source_port->emigration_target_qfs,
+ source_port->emigration_targets_len);
dsw_port_ctl_broadcast(dsw, source_port, DSW_CTL_PAUS_REQ,
- source_port->emigration_target_qf.queue_id,
- source_port->emigration_target_qf.flow_hash);
+ source_port->emigration_target_qfs,
+ source_port->emigration_targets_len);
source_port->cfm_cnt = 0;
}
static void
dsw_port_flush_paused_events(struct dsw_evdev *dsw,
struct dsw_port *source_port,
- uint8_t queue_id, uint16_t paused_flow_hash);
+ const struct dsw_queue_flow *qf);
static void
-dsw_port_handle_unpause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
- uint8_t originating_port_id, uint8_t queue_id,
- uint16_t paused_flow_hash)
+dsw_port_handle_unpause_flows(struct dsw_evdev *dsw, struct dsw_port *port,
+ uint8_t originating_port_id,
+ struct dsw_queue_flow *paused_qfs,
+ uint8_t qfs_len)
{
+ uint16_t i;
struct dsw_ctl_msg cfm = {
.type = DSW_CTL_CFM,
- .originating_port_id = port->id,
- .queue_id = queue_id,
- .flow_hash = paused_flow_hash
+ .originating_port_id = port->id
};
- DSW_LOG_DP_PORT(DEBUG, port->id, "Un-pausing queue_id %d flow_hash %d.\n",
- queue_id, paused_flow_hash);
-
- dsw_port_remove_paused_flow(port, queue_id, paused_flow_hash);
+ dsw_port_remove_paused_flows(port, paused_qfs, qfs_len);
rte_smp_rmb();
- if (dsw_schedule(dsw, queue_id, paused_flow_hash) == port->id)
- port->immigrations++;
-
dsw_port_ctl_enqueue(&dsw->ports[originating_port_id], &cfm);
- dsw_port_flush_paused_events(dsw, port, queue_id, paused_flow_hash);
+ for (i = 0; i < qfs_len; i++) {
+ struct dsw_queue_flow *qf = &paused_qfs[i];
+
+ if (dsw_schedule(dsw, qf->queue_id, qf->flow_hash) == port->id)
+ port->immigrations++;
+
+ dsw_port_flush_paused_events(dsw, port, qf);
+ }
}
#define FORWARD_BURST_SIZE (32)
@@ -872,31 +991,37 @@ dsw_port_forward_emigrated_flow(struct dsw_port *source_port,
}
static void
-dsw_port_move_migrating_flow(struct dsw_evdev *dsw,
- struct dsw_port *source_port)
+dsw_port_move_emigrating_flows(struct dsw_evdev *dsw,
+ struct dsw_port *source_port)
{
- uint8_t queue_id = source_port->emigration_target_qf.queue_id;
- uint16_t flow_hash = source_port->emigration_target_qf.flow_hash;
- uint8_t dest_port_id = source_port->emigration_target_port_id;
- struct dsw_port *dest_port = &dsw->ports[dest_port_id];
+ uint8_t i;
dsw_port_flush_out_buffers(dsw, source_port);
rte_smp_wmb();
- dsw->queues[queue_id].flow_to_port_map[flow_hash] =
- dest_port_id;
+ for (i = 0; i < source_port->emigration_targets_len; i++) {
+ struct dsw_queue_flow *qf =
+ &source_port->emigration_target_qfs[i];
+ uint8_t dest_port_id =
+ source_port->emigration_target_port_ids[i];
+ struct dsw_port *dest_port = &dsw->ports[dest_port_id];
+
+ dsw->queues[qf->queue_id].flow_to_port_map[qf->flow_hash] =
+ dest_port_id;
- dsw_port_forward_emigrated_flow(source_port, dest_port->in_ring,
- queue_id, flow_hash);
+ dsw_port_forward_emigrated_flow(source_port, dest_port->in_ring,
+ qf->queue_id, qf->flow_hash);
+ }
/* Flow table update and migration destination port's enqueues
* must be seen before the control message.
*/
rte_smp_wmb();
- dsw_port_ctl_broadcast(dsw, source_port, DSW_CTL_UNPAUS_REQ, queue_id,
- flow_hash);
+ dsw_port_ctl_broadcast(dsw, source_port, DSW_CTL_UNPAUS_REQ,
+ source_port->emigration_target_qfs,
+ source_port->emigration_targets_len);
source_port->cfm_cnt = 0;
source_port->migration_state = DSW_MIGRATION_STATE_UNPAUSING;
}
@@ -914,7 +1039,8 @@ dsw_port_handle_confirm(struct dsw_evdev *dsw, struct dsw_port *port)
port->migration_state = DSW_MIGRATION_STATE_FORWARDING;
break;
case DSW_MIGRATION_STATE_UNPAUSING:
- dsw_port_end_emigration(dsw, port);
+ dsw_port_end_emigration(dsw, port,
+ RTE_SCHED_TYPE_ATOMIC);
break;
default:
RTE_ASSERT(0);
@@ -936,15 +1062,14 @@ dsw_port_ctl_process(struct dsw_evdev *dsw, struct dsw_port *port)
if (dsw_port_ctl_dequeue(port, &msg) == 0) {
switch (msg.type) {
case DSW_CTL_PAUS_REQ:
- dsw_port_handle_pause_flow(dsw, port,
- msg.originating_port_id,
- msg.queue_id, msg.flow_hash);
+ dsw_port_handle_pause_flows(dsw, port,
+ msg.originating_port_id,
+ msg.qfs, msg.qfs_len);
break;
case DSW_CTL_UNPAUS_REQ:
- dsw_port_handle_unpause_flow(dsw, port,
- msg.originating_port_id,
- msg.queue_id,
- msg.flow_hash);
+ dsw_port_handle_unpause_flows(dsw, port,
+ msg.originating_port_id,
+ msg.qfs, msg.qfs_len);
break;
case DSW_CTL_CFM:
dsw_port_handle_confirm(dsw, port);
@@ -967,7 +1092,7 @@ dsw_port_bg_process(struct dsw_evdev *dsw, struct dsw_port *port)
{
if (unlikely(port->migration_state == DSW_MIGRATION_STATE_FORWARDING &&
port->pending_releases == 0))
- dsw_port_move_migrating_flow(dsw, port);
+ dsw_port_move_emigrating_flows(dsw, port);
/* Polling the control ring is relatively inexpensive, and
* polling it often helps bringing down migration latency, so
--
2.17.1
* [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (3 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 4/8] event/dsw: improve migration mechanism Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 6/8] event/dsw: remove redundant control ring poll Mattias Rönnblom
` (3 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
DSW limits the rate of migrations on a per-port basis. Hence, as the
number of cores grows, so does the total migration capacity.
In high core-count systems, this allows for a situation where flows
are migrated to a lightly loaded port which recently already received
a number of new flows (from other ports). The processing load
generated by these new flows may not yet be reflected in the lightly
loaded port's load estimate. The result is that the previously lightly
loaded port is now overloaded.
This patch adds a rough estimate of the size of the inbound migrations
to a particular port, which can be factored into the migration logic,
avoiding the above problem.
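As a rough example: if a port's measured load is 20%, but flows
estimated at a combined 30% load are already migrating toward it,
other ports considering emigration will see RTE_MIN(20% + 30%, 100%) =
50% for that port, making it a less likely migration target. The
estimate is reset at the port's next load update, by which time the
immigrated flows' processing load should show up in the measured
value.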
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_evdev.c | 1 +
drivers/event/dsw/dsw_evdev.h | 2 ++
drivers/event/dsw/dsw_event.c | 18 ++++++++++++++++--
3 files changed, 19 insertions(+), 2 deletions(-)
diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index 7798a38ad..e796975df 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -62,6 +62,7 @@ dsw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
port->ctl_in_ring = ctl_in_ring;
rte_atomic16_init(&port->load);
+ rte_atomic32_init(&port->immigration_load);
port->load_update_interval =
(DSW_LOAD_UPDATE_INTERVAL * rte_get_timer_hz()) / US_PER_S;
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index ced40ef8d..6cb77cfc4 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -220,6 +220,8 @@ struct dsw_port {
/* Estimate of current port load. */
rte_atomic16_t load __rte_cache_aligned;
+ /* Estimate of flows currently migrating to this port. */
+ rte_atomic32_t immigration_load __rte_cache_aligned;
} __rte_cache_aligned;
struct dsw_queue {
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 21c102275..f87656703 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -160,6 +160,11 @@ dsw_port_load_update(struct dsw_port *port, uint64_t now)
(DSW_OLD_LOAD_WEIGHT+1);
rte_atomic16_set(&port->load, new_load);
+
+ /* The load of the recently immigrated flows should hopefully
+ * be reflected in the load estimate by now.
+ */
+ rte_atomic32_set(&port->immigration_load, 0);
}
static void
@@ -362,7 +367,13 @@ dsw_retrieve_port_loads(struct dsw_evdev *dsw, int16_t *port_loads,
uint16_t i;
for (i = 0; i < dsw->num_ports; i++) {
- int16_t load = rte_atomic16_read(&dsw->ports[i].load);
+ int16_t measured_load = rte_atomic16_read(&dsw->ports[i].load);
+ int32_t immigration_load =
+ rte_atomic32_read(&dsw->ports[i].immigration_load);
+ int32_t load = measured_load + immigration_load;
+
+ load = RTE_MIN(load, DSW_MAX_LOAD);
+
if (load < load_limit)
below_limit = true;
port_loads[i] = load;
@@ -491,6 +502,9 @@ dsw_select_emigration_target(struct dsw_evdev *dsw,
target_qfs[*targets_len] = *candidate_qf;
(*targets_len)++;
+ rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
+ candidate_flow_load);
+
return true;
}
@@ -503,7 +517,7 @@ dsw_select_emigration_targets(struct dsw_evdev *dsw,
struct dsw_queue_flow *target_qfs = source_port->emigration_target_qfs;
uint8_t *target_port_ids = source_port->emigration_target_port_ids;
uint8_t *targets_len = &source_port->emigration_targets_len;
- uint8_t i;
+ uint16_t i;
for (i = 0; i < DSW_MAX_FLOWS_PER_MIGRATION; i++) {
bool found;
--
2.17.1
* [dpdk-dev] [PATCH 6/8] event/dsw: remove redundant control ring poll
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (4 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 7/8] event/dsw: remove unnecessary read barrier Mattias Rönnblom
` (2 subsequent siblings)
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
On dequeue, polling the control ring once is enough.
Fixes: f6257b22e767 ("event/dsw: add load balancing")
Suggested-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_event.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index f87656703..bb06df803 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1331,11 +1331,6 @@ static uint16_t
dsw_port_dequeue_burst(struct dsw_port *port, struct rte_event *events,
uint16_t num)
{
- struct dsw_port *source_port = port;
- struct dsw_evdev *dsw = source_port->dsw;
-
- dsw_port_ctl_process(dsw, source_port);
-
if (unlikely(port->in_buffer_len > 0)) {
uint16_t dequeued = RTE_MIN(num, port->in_buffer_len);
--
2.17.1
* [dpdk-dev] [PATCH 7/8] event/dsw: remove unnecessary read barrier
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (5 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 6/8] event/dsw: remove redundant control ring poll Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 8/8] event/dsw: add port busy cycles xstats Mattias Rönnblom
2020-04-04 14:35 ` [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements Jerin Jacob Kollanukkaran
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
Remove unnecessary read barrier (and misleading comment) on control
message dequeue.
Fixes: f6257b22e767 ("event/dsw: add load balancing")
Suggested-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_event.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index bb06df803..73a9d38cb 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1068,11 +1068,6 @@ dsw_port_ctl_process(struct dsw_evdev *dsw, struct dsw_port *port)
{
struct dsw_ctl_msg msg;
- /* So any table loads happens before the ring dequeue, in the
- * case of a 'paus' message.
- */
- rte_smp_rmb();
-
if (dsw_port_ctl_dequeue(port, &msg) == 0) {
switch (msg.type) {
case DSW_CTL_PAUS_REQ:
--
2.17.1
* [dpdk-dev] [PATCH 8/8] event/dsw: add port busy cycles xstats
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (6 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 7/8] event/dsw: remove unnecessary read barrier Mattias Rönnblom
@ 2020-03-09 6:51 ` Mattias Rönnblom
2020-04-04 14:35 ` [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements Jerin Jacob Kollanukkaran
8 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 6:51 UTC
To: jerinj; +Cc: dev, stefan.sundkvist, Ola.Liljedahl, Mattias Rönnblom
DSW keeps an internal port load estimate, used by the load balancing
mechanism. As a side effect, it keeps track of the total number of
busy cycles since startup. This metric is indirectly exposed in the
form of DSW xstats' "port_<n>_event_proc_latency", which is the total
number of busy cycles divided by the total number of events processed
on a particular port.
An external application can compute (event_latency * dequeued) to
recover busy_cycles. One reason for doing so is to measure the port's
load over a longer time period, without resorting to sampling
"port_<n>_load". However, as the number of dequeued events grows, a
rounding error in event_latency renders the application-calculated
busy_cycles inaccurate.
Thus, it makes sense to directly expose the number of busy cycles as a
DSW xstat, even though it might seem redundant.
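As a usage sketch, an application could estimate a port's average
load over an arbitrary interval from deltas of the new counter;
dev_id, port_id and the blocking delay are illustrative assumptions:

    #include <stdio.h>
    #include <rte_cycles.h>
    #include <rte_eventdev.h>

    /* Estimate the fraction of cycles a port spent busy during a
     * measurement interval, using deltas of the port_<n>_busy_cycles
     * xstat. Minimal sketch; error handling omitted.
     */
    static double
    measure_port_load(uint8_t dev_id, unsigned int port_id,
                      unsigned int interval_us)
    {
            char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
            uint64_t busy_before, busy_after;
            uint64_t cycles_before, cycles_after;

            snprintf(name, sizeof(name), "port_%u_busy_cycles", port_id);

            busy_before = rte_event_dev_xstats_by_name_get(dev_id, name, NULL);
            cycles_before = rte_get_timer_cycles();

            rte_delay_us_block(interval_us);

            busy_after = rte_event_dev_xstats_by_name_get(dev_id, name, NULL);
            cycles_after = rte_get_timer_cycles();

            return (double)(busy_after - busy_before) /
                    (double)(cycles_after - cycles_before);
    }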
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_xstats.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/event/dsw/dsw_xstats.c b/drivers/event/dsw/dsw_xstats.c
index d332a57b6..e8e92183e 100644
--- a/drivers/event/dsw/dsw_xstats.c
+++ b/drivers/event/dsw/dsw_xstats.c
@@ -109,6 +109,13 @@ dsw_xstats_port_get_event_proc_latency(struct dsw_evdev *dsw, uint8_t port_id,
return dequeued > 0 ? total_busy_cycles / dequeued : 0;
}
+static uint64_t
+dsw_xstats_port_get_busy_cycles(struct dsw_evdev *dsw, uint8_t port_id,
+ uint8_t queue_id __rte_unused)
+{
+ return dsw->ports[port_id].total_busy_cycles;
+}
+
DSW_GEN_PORT_ACCESS_FN(inflight_credits)
DSW_GEN_PORT_ACCESS_FN(pending_releases)
@@ -147,6 +154,8 @@ static struct dsw_xstats_port dsw_port_xstats[] = {
false },
{ "port_%u_event_proc_latency", dsw_xstats_port_get_event_proc_latency,
false },
+ { "port_%u_busy_cycles", dsw_xstats_port_get_busy_cycles,
+ false },
{ "port_%u_inflight_credits", dsw_xstats_port_get_inflight_credits,
false },
{ "port_%u_pending_releases", dsw_xstats_port_get_pending_releases,
--
2.17.1
* Re: [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
` (7 preceding siblings ...)
2020-03-09 6:51 ` [dpdk-dev] [PATCH 8/8] event/dsw: add port busy cycles xstats Mattias Rönnblom
@ 2020-04-04 14:35 ` Jerin Jacob Kollanukkaran
2020-04-15 16:37 ` David Marchand
8 siblings, 1 reply; 21+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-04-04 14:35 UTC
To: Mattias Rönnblom; +Cc: dev, stefan.sundkvist, Ola.Liljedahl
> -----Original Message-----
> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Sent: Monday, March 9, 2020 12:21 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com; Ola.Liljedahl@arm.com;
> Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Subject: [EXT] [PATCH 0/8] DSW performance and statistics improvements
>
> External Email
>
> ----------------------------------------------------------------------
> Performance and statistics improvements for the distributed software
> (DSW) event device.
>
> Mattias Rönnblom (8):
> event/dsw: reduce latency in low-load situations
> event/dsw: reduce max flows to speed up load balancing
> event/dsw: extend statistics
> event/dsw: improve migration mechanism
> event/dsw: avoid migration waves in large systems
> event/dsw: remove redundant control ring poll
> event/dsw: remove unnecessary read barrier
> event/dsw: add port busy cycles xstats
Series applied to dpdk-next-eventdev/master. Thanks.
>
> drivers/event/dsw/dsw_evdev.c  |   1 +
> drivers/event/dsw/dsw_evdev.h  |  45 ++-
> drivers/event/dsw/dsw_event.c  | 602 ++++++++++++++++++++-------------
> drivers/event/dsw/dsw_xstats.c |  26 +-
> 4 files changed, 425 insertions(+), 249 deletions(-)
>
> --
> 2.17.1
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements
2020-04-04 14:35 ` [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements Jerin Jacob Kollanukkaran
@ 2020-04-15 16:37 ` David Marchand
2020-04-15 17:39 ` Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: David Marchand @ 2020-04-15 16:37 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran, Mattias Rönnblom
Cc: dev, stefan.sundkvist, Ola.Liljedahl, ci
On Sat, Apr 4, 2020 at 4:35 PM Jerin Jacob Kollanukkaran
<jerinj@marvell.com> wrote:
>
> > -----Original Message-----
> > From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> > Sent: Monday, March 9, 2020 12:21 PM
> > To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> > Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com; Ola.Liljedahl@arm.com;
> > Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> > Subject: [EXT] [PATCH 0/8] DSW performance and statistics improvements
> >
> > External Email
> >
> > ----------------------------------------------------------------------
> > Performance and statistics improvements for the distributed software
> > (DSW) event device.
> >
> > Mattias Rönnblom (8):
> > event/dsw: reduce latency in low-load situations
> > event/dsw: reduce max flows to speed up load balancing
> > event/dsw: extend statistics
> > event/dsw: improve migration mechanism
> > event/dsw: avoid migration waves in large systems
> > event/dsw: remove redundant control ring poll
> > event/dsw: remove unnecessary read barrier
> > event/dsw: add port busy cycles xstats
>
> Series applied to dpdk-next-eventdev/master. Thanks.
I get a compilation issue on rhel7.
Too bad the CI did not help.
http://patchwork.dpdk.org/project/dpdk/list/?series=8828&state=*
[1583/1808] Compiling C object
'drivers/drivers@@tmp_rte_pmd_dsw_event@sta/event_dsw_dsw_event.c.o'.
../drivers/event/dsw/dsw_event.c: In function ‘dsw_port_consider_emigration’:
../drivers/event/dsw/dsw_event.c:502:27: warning: ‘candidate_qf’ may
be used uninitialized in this function [-Wmaybe-uninitialized]
target_qfs[*targets_len] = *candidate_qf;
^
../drivers/event/dsw/dsw_event.c:445:25: note: ‘candidate_qf’ was declared here
struct dsw_queue_flow *candidate_qf;
^
In file included from ../lib/librte_eal/x86/include/rte_atomic.h:16:0,
from ../lib/librte_eal/include/generic/rte_rwlock.h:25,
from ../lib/librte_eal/x86/include/rte_rwlock.h:12,
from ../lib/librte_eal/include/rte_fbarray.h:40,
from ../lib/librte_eal/include/rte_memory.h:25,
from ../lib/librte_eventdev/rte_event_ring.h:20,
from ../drivers/event/dsw/dsw_evdev.h:8,
from ../drivers/event/dsw/dsw_event.c:5:
../lib/librte_eal/include/generic/rte_atomic.h:566:22: warning:
‘candidate_flow_load’ may be used uninitialized in this function
[-Wmaybe-uninitialized]
__sync_fetch_and_add(&v->cnt, inc);
^
../drivers/event/dsw/dsw_event.c:448:10: note: ‘candidate_flow_load’
was declared here
int16_t candidate_flow_load;
^
../drivers/event/dsw/dsw_event.c:505:49: warning: ‘candidate_port_id’
may be used uninitialized in this function [-Wmaybe-uninitialized]
rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
^
../drivers/event/dsw/dsw_event.c:446:10: note: ‘candidate_port_id’ was
declared here
uint8_t candidate_port_id;
^
--
David Marchand
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements
2020-04-15 16:37 ` David Marchand
@ 2020-04-15 17:39 ` Mattias Rönnblom
2020-04-15 17:45 ` [dpdk-dev] [dpdk-ci] " Thomas Monjalon
0 siblings, 1 reply; 21+ messages in thread
From: Mattias Rönnblom @ 2020-04-15 17:39 UTC (permalink / raw)
To: David Marchand, Jerin Jacob Kollanukkaran
Cc: dev, Stefan Sundkvist, Ola.Liljedahl, ci
On 2020-04-15 18:37, David Marchand wrote:
> On Sat, Apr 4, 2020 at 4:35 PM Jerin Jacob Kollanukkaran
> <jerinj@marvell.com> wrote:
>>> -----Original Message-----
>>> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>> Sent: Monday, March 9, 2020 12:21 PM
>>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
>>> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com; Ola.Liljedahl@arm.com;
>>> Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>> Subject: [EXT] [PATCH 0/8] DSW performance and statistics improvements
>>>
>>> External Email
>>>
>>> ----------------------------------------------------------------------
>>> Performance and statistics improvements for the distributed software
>>> (DSW) event device.
>>>
>>> Mattias Rönnblom (8):
>>> event/dsw: reduce latency in low-load situations
>>> event/dsw: reduce max flows to speed up load balancing
>>> event/dsw: extend statistics
>>> event/dsw: improve migration mechanism
>>> event/dsw: avoid migration waves in large systems
>>> event/dsw: remove redundant control ring poll
>>> event/dsw: remove unnecessary read barrier
>>> event/dsw: add port busy cycles xstats
>> Series applied to dpdk-next-eventdev/master. Thanks.
> I get a compilation issue on rhel7.
> Too bad the CI did not help.
> http://patchwork.dpdk.org/project/dpdk/list/?series=8828&state=*
>
>
> [1583/1808] Compiling C object
> 'drivers/drivers@@tmp_rte_pmd_dsw_event@sta/event_dsw_dsw_event.c.o'.
> ../drivers/event/dsw/dsw_event.c: In function ‘dsw_port_consider_emigration’:
> ../drivers/event/dsw/dsw_event.c:502:27: warning: ‘candidate_qf’ may
> be used uninitialized in this function [-Wmaybe-uninitialized]
> target_qfs[*targets_len] = *candidate_qf;
> ^
> ../drivers/event/dsw/dsw_event.c:445:25: note: ‘candidate_qf’ was declared here
> struct dsw_queue_flow *candidate_qf;
> ^
> In file included from ../lib/librte_eal/x86/include/rte_atomic.h:16:0,
> from ../lib/librte_eal/include/generic/rte_rwlock.h:25,
> from ../lib/librte_eal/x86/include/rte_rwlock.h:12,
> from ../lib/librte_eal/include/rte_fbarray.h:40,
> from ../lib/librte_eal/include/rte_memory.h:25,
> from ../lib/librte_eventdev/rte_event_ring.h:20,
> from ../drivers/event/dsw/dsw_evdev.h:8,
> from ../drivers/event/dsw/dsw_event.c:5:
> ../lib/librte_eal/include/generic/rte_atomic.h:566:22: warning:
> ‘candidate_flow_load’ may be used uninitialized in this function
> [-Wmaybe-uninitialized]
> __sync_fetch_and_add(&v->cnt, inc);
> ^
> ../drivers/event/dsw/dsw_event.c:448:10: note: ‘candidate_flow_load’
> was declared here
> int16_t candidate_flow_load;
> ^
> ../drivers/event/dsw/dsw_event.c:505:49: warning: ‘candidate_port_id’
> may be used uninitialized in this function [-Wmaybe-uninitialized]
> rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
> ^
> ../drivers/event/dsw/dsw_event.c:446:10: note: ‘candidate_port_id’ was
> declared here
> uint8_t candidate_port_id;
> ^
>
Looks like a false positive. What GCC version is this?
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [dpdk-ci] [EXT] [PATCH 0/8] DSW performance and statistics improvements
2020-04-15 17:39 ` Mattias Rönnblom
@ 2020-04-15 17:45 ` Thomas Monjalon
2020-04-15 18:09 ` Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: Thomas Monjalon @ 2020-04-15 17:45 UTC (permalink / raw)
To: Mattias Rönnblom
Cc: David Marchand, Jerin Jacob Kollanukkaran, ci, dev,
Stefan Sundkvist, Ola.Liljedahl
15/04/2020 19:39, Mattias Rönnblom:
> On 2020-04-15 18:37, David Marchand wrote:
> > On Sat, Apr 4, 2020 at 4:35 PM Jerin Jacob Kollanukkaran
> > <jerinj@marvell.com> wrote:
> >> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >>> Performance and statistics improvements for the distributed software
> >>> (DSW) event device.
> >>>
> >>> Mattias Rönnblom (8):
> >>> event/dsw: reduce latency in low-load situations
> >>> event/dsw: reduce max flows to speed up load balancing
> >>> event/dsw: extend statistics
> >>> event/dsw: improve migration mechanism
> >>> event/dsw: avoid migration waves in large systems
> >>> event/dsw: remove redundant control ring poll
> >>> event/dsw: remove unnecessary read barrier
> >>> event/dsw: add port busy cycles xstats
> >> Series applied to dpdk-next-eventdev/master. Thanks.
> >
> > I get a compilation issue on rhel7.
> > Too bad the CI did not help.
> > http://patchwork.dpdk.org/project/dpdk/list/?series=8828&state=*
> >
> >
> > [1583/1808] Compiling C object
> > 'drivers/drivers@@tmp_rte_pmd_dsw_event@sta/event_dsw_dsw_event.c.o'.
> > ../drivers/event/dsw/dsw_event.c: In function ‘dsw_port_consider_emigration’:
> > ../drivers/event/dsw/dsw_event.c:502:27: warning: ‘candidate_qf’ may
> > be used uninitialized in this function [-Wmaybe-uninitialized]
> > target_qfs[*targets_len] = *candidate_qf;
> > ^
> > ../drivers/event/dsw/dsw_event.c:445:25: note: ‘candidate_qf’ was declared here
> > struct dsw_queue_flow *candidate_qf;
> > ^
> > In file included from ../lib/librte_eal/x86/include/rte_atomic.h:16:0,
> > from ../lib/librte_eal/include/generic/rte_rwlock.h:25,
> > from ../lib/librte_eal/x86/include/rte_rwlock.h:12,
> > from ../lib/librte_eal/include/rte_fbarray.h:40,
> > from ../lib/librte_eal/include/rte_memory.h:25,
> > from ../lib/librte_eventdev/rte_event_ring.h:20,
> > from ../drivers/event/dsw/dsw_evdev.h:8,
> > from ../drivers/event/dsw/dsw_event.c:5:
> > ../lib/librte_eal/include/generic/rte_atomic.h:566:22: warning:
> > ‘candidate_flow_load’ may be used uninitialized in this function
> > [-Wmaybe-uninitialized]
> > __sync_fetch_and_add(&v->cnt, inc);
> > ^
> > ../drivers/event/dsw/dsw_event.c:448:10: note: ‘candidate_flow_load’
> > was declared here
> > int16_t candidate_flow_load;
> > ^
> > ../drivers/event/dsw/dsw_event.c:505:49: warning: ‘candidate_port_id’
> > may be used uninitialized in this function [-Wmaybe-uninitialized]
> > rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
> > ^
> > ../drivers/event/dsw/dsw_event.c:446:10: note: ‘candidate_port_id’ was
> > declared here
> > uint8_t candidate_port_id;
> > ^
> >
>
> Looks like a false positive. What GCC version is this?
This is with RHEL 7.
Do you have such a distro available to test and fix the false positive?
A quick fix would be very welcome.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [dpdk-ci] [EXT] [PATCH 0/8] DSW performance and statistics improvements
2020-04-15 17:45 ` [dpdk-dev] [dpdk-ci] " Thomas Monjalon
@ 2020-04-15 18:09 ` Mattias Rönnblom
2020-04-15 18:15 ` [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: Mattias Rönnblom @ 2020-04-15 18:09 UTC (permalink / raw)
To: Thomas Monjalon
Cc: David Marchand, Jerin Jacob Kollanukkaran, ci, dev,
Stefan Sundkvist, Ola.Liljedahl
On 2020-04-15 19:45, Thomas Monjalon wrote:
> 15/04/2020 19:39, Mattias Rönnblom:
>> On 2020-04-15 18:37, David Marchand wrote:
>>> On Sat, Apr 4, 2020 at 4:35 PM Jerin Jacob Kollanukkaran
>>> <jerinj@marvell.com> wrote:
>>>> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>>> Performance and statistics improvements for the distributed software
>>>>> (DSW) event device.
>>>>>
>>>>> Mattias Rönnblom (8):
>>>>> event/dsw: reduce latency in low-load situations
>>>>> event/dsw: reduce max flows to speed up load balancing
>>>>> event/dsw: extend statistics
>>>>> event/dsw: improve migration mechanism
>>>>> event/dsw: avoid migration waves in large systems
>>>>> event/dsw: remove redundant control ring poll
>>>>> event/dsw: remove unnecessary read barrier
>>>>> event/dsw: add port busy cycles xstats
>>>> Series applied to dpdk-next-eventdev/master. Thanks.
>>> I get a compilation issue on rhel7.
>>> Too bad the CI did not help.
>>> http://patchwork.dpdk.org/project/dpdk/list/?series=8828&state=*
>>>
>>>
>>> [1583/1808] Compiling C object
>>> 'drivers/drivers@@tmp_rte_pmd_dsw_event@sta/event_dsw_dsw_event.c.o'.
>>> ../drivers/event/dsw/dsw_event.c: In function ‘dsw_port_consider_emigration’:
>>> ../drivers/event/dsw/dsw_event.c:502:27: warning: ‘candidate_qf’ may
>>> be used uninitialized in this function [-Wmaybe-uninitialized]
>>> target_qfs[*targets_len] = *candidate_qf;
>>> ^
>>> ../drivers/event/dsw/dsw_event.c:445:25: note: ‘candidate_qf’ was declared here
>>> struct dsw_queue_flow *candidate_qf;
>>> ^
>>> In file included from ../lib/librte_eal/x86/include/rte_atomic.h:16:0,
>>> from ../lib/librte_eal/include/generic/rte_rwlock.h:25,
>>> from ../lib/librte_eal/x86/include/rte_rwlock.h:12,
>>> from ../lib/librte_eal/include/rte_fbarray.h:40,
>>> from ../lib/librte_eal/include/rte_memory.h:25,
>>> from ../lib/librte_eventdev/rte_event_ring.h:20,
>>> from ../drivers/event/dsw/dsw_evdev.h:8,
>>> from ../drivers/event/dsw/dsw_event.c:5:
>>> ../lib/librte_eal/include/generic/rte_atomic.h:566:22: warning:
>>> ‘candidate_flow_load’ may be used uninitialized in this function
>>> [-Wmaybe-uninitialized]
>>> __sync_fetch_and_add(&v->cnt, inc);
>>> ^
>>> ../drivers/event/dsw/dsw_event.c:448:10: note: ‘candidate_flow_load’
>>> was declared here
>>> int16_t candidate_flow_load;
>>> ^
>>> ../drivers/event/dsw/dsw_event.c:505:49: warning: ‘candidate_port_id’
>>> may be used uninitialized in this function [-Wmaybe-uninitialized]
>>> rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
>>> ^
>>> ../drivers/event/dsw/dsw_event.c:446:10: note: ‘candidate_port_id’ was
>>> declared here
>>> uint8_t candidate_port_id;
>>> ^
>>>
>> Looks like a false positive. What GCC version is this?
> This is with RHEL 7.
> Do you have such a distro available to test and fix the false positive?
> A quick fix would be very welcome.
>
>
Most distributions support several compilers. I'm assuming it's the
default 4.8 compiler, and I unfortunately don't have a system with that
compiler.
^ permalink raw reply [flat|nested] 21+ messages in thread
* [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning
2020-04-15 18:09 ` Mattias Rönnblom
@ 2020-04-15 18:15 ` Mattias Rönnblom
2020-04-15 19:45 ` David Marchand
0 siblings, 1 reply; 21+ messages in thread
From: Mattias Rönnblom @ 2020-04-15 18:15 UTC (permalink / raw)
To: dev, Jerin Jacob
Cc: Thomas Monjalon, David Marchand, Mattias Rönnblom, stable
Add redundant stack variable initialization to work around
false-positive warnings in older versions of GCC.
Fixes: bba7a1aeef46 ("event/dsw: improve migration mechanism")
Cc: stable@dpdk.org
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
drivers/event/dsw/dsw_event.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 73a9d38cb..e5e3597aa 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -442,10 +442,10 @@ dsw_select_emigration_target(struct dsw_evdev *dsw,
uint8_t *targets_len)
{
int16_t source_port_load = port_loads[source_port_id];
- struct dsw_queue_flow *candidate_qf;
- uint8_t candidate_port_id;
+ struct dsw_queue_flow *candidate_qf = NULL;
+ uint8_t candidate_port_id = 0;
int16_t candidate_weight = -1;
- int16_t candidate_flow_load;
+ int16_t candidate_flow_load = -1;
uint16_t i;
if (source_port_load < DSW_MIN_SOURCE_LOAD_FOR_MIGRATION)
--
2.20.1
^ permalink raw reply [flat|nested] 21+ messages in thread
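The warning pattern is worth spelling out. Here is a minimal,
self-contained sketch (invented names, not driver code) of the
construct that older GCC mis-analyzes: the variable is written only
when a candidate is found, and read only under the same condition, a
correlation GCC 4.8's flow analysis cannot prove.

#include <stdbool.h>
#include <stdio.h>

static bool
pick_best(const int *loads, int n, int *best_idx)
{
	int candidate_idx; /* written only if a candidate is found */
	int candidate_weight = -1;
	int i;

	for (i = 0; i < n; i++)
		if (loads[i] > candidate_weight) {
			candidate_weight = loads[i];
			candidate_idx = i;
		}

	if (candidate_weight < 0)
		return false;

	/* GCC 4.8 may warn that 'candidate_idx' could be used
	 * uninitialized here, even though candidate_weight >= 0
	 * implies the loop body ran at least once. A dummy
	 * initialization, as in the patch above, silences it.
	 */
	*best_idx = candidate_idx;

	return true;
}

int
main(void)
{
	int loads[] = { 3, 7, 5 };
	int idx;

	if (pick_best(loads, 3, &idx))
		printf("best index: %d\n", idx);

	return 0;
}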
* Re: [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning
2020-04-15 18:15 ` [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning Mattias Rönnblom
@ 2020-04-15 19:45 ` David Marchand
2020-04-16 6:15 ` Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: David Marchand @ 2020-04-15 19:45 UTC (permalink / raw)
To: Mattias Rönnblom; +Cc: dev, Jerin Jacob, Thomas Monjalon
On Wed, Apr 15, 2020 at 8:15 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Add redundant stack variable initialization to work around
> false-positive warnings in older versions of GCC.
>
> Fixes: bba7a1aeef46 ("event/dsw: improve migration mechanism")
The commit id in master is 1f2b99e8d.
> Cc: stable@dpdk.org
Original commit is not a fix and is only in master, dropped stable.
Applied directly in master to avoid build failures in the CI.
--
David Marchand
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning
2020-04-15 19:45 ` David Marchand
@ 2020-04-16 6:15 ` Mattias Rönnblom
0 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-04-16 6:15 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Jerin Jacob, Thomas Monjalon
On 2020-04-15 21:45, David Marchand wrote:
> On Wed, Apr 15, 2020 at 8:15 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> Add redundant stack variable initialization to work around
>> false-positive warnings in older versions of GCC.
>>
>> Fixes: bba7a1aeef46 ("event/dsw: improve migration mechanism")
> The commitid in master is 1f2b99e8d.
>
>> Cc: stable@dpdk.org
> Original commit is not a fix and is only in master, dropped stable.
>
>
> Applied directly in master to avoid build failures in the CI.
>
Thanks.
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems
2020-03-09 8:12 ` Jerin Jacob Kollanukkaran
@ 2020-03-09 8:41 ` Mattias Rönnblom
0 siblings, 0 replies; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 8:41 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran; +Cc: dev, Stefan Sundkvist, Ola.Liljedahl
On 2020-03-09 09:12, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Sent: Monday, March 9, 2020 1:28 PM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
>> Cc: dev@dpdk.org; Stefan Sundkvist <stefan.sundkvist@ericsson.com>;
>> Ola.Liljedahl@arm.com
>> Subject: [EXT] Re: [PATCH 5/8] event/dsw: avoid migration waves in large
>> systems
>>
>> On 2020-03-09 08:17, Jerin Jacob Kollanukkaran wrote:
>>>> -----Original Message-----
>>>> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> Sent: Monday, March 9, 2020 12:21 PM
>>>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
>>>> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com;
>>>> Ola.Liljedahl@arm.com; Mattias Rönnblom
>>>> <mattias.ronnblom@ericsson.com>
>>>> Subject: [PATCH 5/8] event/dsw: avoid migration waves in large
>>>> systems
>>>>
>>>> ----------------------------------------------------------------------
>>>> DSW limits the rate of migrations on a per-port basis. Hence, as
>>>> the number of cores grows, so does the total migration capacity.
>>>>
>>>> In high core-count systems, this allows for a situation where flows
>>>> are migrated to a lightly loaded port which recently already received
>>>> a number of new flows (from other ports). The processing load
>>>> generated by these new flows may not yet be reflected in the lightly
>>>> loaded port's load estimate. The result is that the previously lightly loaded
>> port is now overloaded.
>>>> This patch adds a rough estimate of the size of the inbound
>>>> migrations to a particular port, which can be factored into the
>>>> migration logic, avoiding the above problem.
>>>>
>>>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>>>> ---
>>>> @@ -491,6 +502,9 @@ dsw_select_emigration_target(struct dsw_evdev
>> *dsw,
>>>> target_qfs[*targets_len] = *candidate_qf;
>>>> (*targets_len)++;
>>>>
>>>> + rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
>>>> + candidate_flow_load);
>>> These are full barriers on arm64 and PowerPC.
>>> Please consider changing to the C11 memory model[1], with load-acquire
>>> semantics, for better performance on non-x86 machines.
>>>
>>> drivers/event/opdl has already moved to the C11 memory model.
>>>
>>> [1]
>>> https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
>>>
>> The performance impacts would be small, since this is in the slow path, with
>> something like a handful of memory barriers per core per ms.
> OK. If it is slow path, then yes, no point in changing.
>
> How about the following other uses in the DSW driver? Do they come in the fastpath or the slowpath?
>
> drivers/event/dsw/dsw_event.c: new_total_on_loan = rte_atomic32_add_return(&dsw->credits_on_loan,
> drivers/event/dsw/dsw_event.c: rte_atomic32_sub(&dsw->credits_on_loan, acquired_credits);
> drivers/event/dsw/dsw_event.c: rte_atomic32_sub(&dsw->credits_on_loan, return_credits);
>
Technically still the slow path, but a path much more often taken. For
producer- and consumer-only ports, it's once per 64 events (per port).
For ports that do both, it's less often.
Sounds like a bug that rte_atomic32_sub() needs a full barrier.
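To make the amortization concrete, here is a sketch of a batched
credit scheme of the kind described above. The constant and field
names are illustrative, and the real DSW logic differs in detail.

#include <rte_atomic.h>

#define CREDIT_BATCH 64

struct credit_pool {
	rte_atomic32_t on_loan; /* credits handed out, system-wide */
	int32_t max_inflight;
};

struct port_credits {
	int32_t local; /* credits already taken from the pool */
};

/* Returns 0 on success, -1 if the in-flight limit would be exceeded. */
static int
take_credit(struct credit_pool *pool, struct port_credits *pc)
{
	if (pc->local == 0) {
		/* The shared counter is touched once per CREDIT_BATCH
		 * events, amortizing the atomic (a full barrier on
		 * some architectures) across the whole batch.
		 */
		int32_t total = rte_atomic32_add_return(&pool->on_loan,
							CREDIT_BATCH);

		if (total > pool->max_inflight) {
			rte_atomic32_sub(&pool->on_loan, CREDIT_BATCH);
			return -1;
		}
		pc->local = CREDIT_BATCH;
	}
	pc->local--;

	return 0;
}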
>> Arguably, it could be done for consistency reasons, but then you should change
>> all DSW atomics.
>>
>>>> +
>>>> return true;
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems
2020-03-09 7:58 ` Mattias Rönnblom
@ 2020-03-09 8:12 ` Jerin Jacob Kollanukkaran
2020-03-09 8:41 ` Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-03-09 8:12 UTC (permalink / raw)
To: Mattias Rönnblom; +Cc: dev, Stefan Sundkvist, Ola.Liljedahl
> -----Original Message-----
> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Sent: Monday, March 9, 2020 1:28 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Cc: dev@dpdk.org; Stefan Sundkvist <stefan.sundkvist@ericsson.com>;
> Ola.Liljedahl@arm.com
> Subject: [EXT] Re: [PATCH 5/8] event/dsw: avoid migration waves in large
> systems
>
> On 2020-03-09 08:17, Jerin Jacob Kollanukkaran wrote:
> >> -----Original Message-----
> >> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> Sent: Monday, March 9, 2020 12:21 PM
> >> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> >> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com;
> >> Ola.Liljedahl@arm.com; Mattias Rönnblom
> >> <mattias.ronnblom@ericsson.com>
> >> Subject: [PATCH 5/8] event/dsw: avoid migration waves in large
> >> systems
> >>
> >> ---------------------------------------------------------------------
> >> - DSW limits the rate of migrations on a per-port basis. Hence, as
> >> the number of cores grows, so does the total migration capacity.
> >>
> >> In high core-count systems, this allows for a situation where flows
> >> are migrated to a lightly loaded port which recently already received
> >> a number of new flows (from other ports). The processing load
> >> generated by these new flows may not yet be reflected in the lightly
> >> loaded port's load estimate. The result is that the previously lightly loaded
> port is now overloaded.
> >>
> >> This patch adds a rough estimate of the size of the inbound
> >> migrations to a particular port, which can be factored into the
> >> migration logic, avoiding the above problem.
> >>
> >> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> >> ---
> >> @@ -491,6 +502,9 @@ dsw_select_emigration_target(struct dsw_evdev
> *dsw,
> >> target_qfs[*targets_len] = *candidate_qf;
> >> (*targets_len)++;
> >>
> >> + rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
> >> + candidate_flow_load);
> > These are full barriers on arm64 and PowerPC.
> > Please consider changing to the C11 memory model[1], with load-acquire
> > semantics, for better performance on non-x86 machines.
> >
> > drivers/event/opdl has already moved to the C11 memory model.
> >
> > [1]
> > https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
> >
> The performance impacts would be small, since this is in the slow path, with
> something like a handful of memory barriers per core per ms.
OK. If it is slow path, then yes, no point in changing.
How about the following other uses in the DSW driver? Do they come in the fastpath or the slowpath?
drivers/event/dsw/dsw_event.c: new_total_on_loan = rte_atomic32_add_return(&dsw->credits_on_loan,
drivers/event/dsw/dsw_event.c: rte_atomic32_sub(&dsw->credits_on_loan, acquired_credits);
drivers/event/dsw/dsw_event.c: rte_atomic32_sub(&dsw->credits_on_loan, return_credits);
>
> Arguably, it could be done for consistency reasons, but then you should change
> all DSW atomics.
>
> >> +
> >> return true;
>
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems
2020-03-09 7:17 [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems Jerin Jacob Kollanukkaran
@ 2020-03-09 7:58 ` Mattias Rönnblom
2020-03-09 8:12 ` Jerin Jacob Kollanukkaran
0 siblings, 1 reply; 21+ messages in thread
From: Mattias Rönnblom @ 2020-03-09 7:58 UTC (permalink / raw)
To: Jerin Jacob Kollanukkaran; +Cc: dev, Stefan Sundkvist, Ola.Liljedahl
On 2020-03-09 08:17, Jerin Jacob Kollanukkaran wrote:
>> -----Original Message-----
>> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Sent: Monday, March 9, 2020 12:21 PM
>> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
>> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com; Ola.Liljedahl@arm.com;
>> Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> Subject: [PATCH 5/8] event/dsw: avoid migration waves in large systems
>>
>> ----------------------------------------------------------------------
>> DSW limits the rate of migrations on a per-port basis. Hence, as the number of
>> cores grows, so does the total migration capacity.
>>
>> In high core-count systems, this allows for a situation where flows are migrated
>> to a lightly loaded port which recently already received a number of new flows
>> (from other ports). The processing load generated by these new flows may not
>> yet be reflected in the lightly loaded port's load estimate. The result is that the
>> previously lightly loaded port is now overloaded.
>>
>> This patch adds a rough estimate of the size of the inbound migrations to a
>> particular port, which can be factored into the migration logic, avoiding the
>> above problem.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
>> ---
>> @@ -491,6 +502,9 @@ dsw_select_emigration_target(struct dsw_evdev *dsw,
>> target_qfs[*targets_len] = *candidate_qf;
>> (*targets_len)++;
>>
>> + rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
>> + candidate_flow_load);
> These are full barriers on arm64 and PowerPC.
> Please consider changing to the C11 memory model[1], with load-acquire
> semantics, for better performance on non-x86 machines.
>
> drivers/event/opdl has already moved to the C11 memory model.
>
> [1]
> https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
>
The performance impacts would be small, since this is in the slow path,
with something like a handful of memory barriers per core per ms.
Arguably, it could be done for consistency reasons, but then you should
change all DSW atomics.
>> +
>> return true;
^ permalink raw reply [flat|nested] 21+ messages in thread
* Re: [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems
@ 2020-03-09 7:17 Jerin Jacob Kollanukkaran
2020-03-09 7:58 ` Mattias Rönnblom
0 siblings, 1 reply; 21+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2020-03-09 7:17 UTC (permalink / raw)
To: Mattias Rönnblom; +Cc: dev, stefan.sundkvist, Ola.Liljedahl
> -----Original Message-----
> From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Sent: Monday, March 9, 2020 12:21 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
> Cc: dev@dpdk.org; stefan.sundkvist@ericsson.com; Ola.Liljedahl@arm.com;
> Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> Subject: [PATCH 5/8] event/dsw: avoid migration waves in large systems
>
> ----------------------------------------------------------------------
> DSW limits the rate of migrations on a per-port basis. Hence, as the number of
> cores grows, so does the total migration capacity.
>
> In high core-count systems, this allows for a situation where flows are migrated
> to a lightly loaded port which recently already received a number of new flows
> (from other ports). The processing load generated by these new flows may not
> yet be reflected in the lightly loaded port's load estimate. The result is that the
> previously lightly loaded port is now overloaded.
>
> This patch adds a rough estimate of the size of the inbound migrations to a
> particular port, which can be factored into the migration logic, avoiding the
> above problem.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> ---
> @@ -491,6 +502,9 @@ dsw_select_emigration_target(struct dsw_evdev *dsw,
> target_qfs[*targets_len] = *candidate_qf;
> (*targets_len)++;
>
> + rte_atomic32_add(&dsw->ports[candidate_port_id].immigration_load,
> + candidate_flow_load);
These are full barriers on arm64 and PowerPC.
Please consider changing to the C11 memory model[1], with load-acquire
semantics, for better performance on non-x86 machines.
drivers/event/opdl has already moved to the C11 memory model.
[1]
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
> +
> return true;
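One possible shape of the suggested conversion, applied to the call
quoted above: assuming immigration_load were changed from an
rte_atomic32_t to a plain int32_t, the GCC __atomic builtins from [1]
could be used directly. For a counter that is only read to guide
heuristics, relaxed ordering is arguably sufficient, and it emits no
barrier on arm64 or PowerPC. This is a sketch, not the merged code.

#include <stdint.h>

/* Drop-in sketch for the rte_atomic32_add() call in the diff above,
 * e.g. account_immigration(
 *         &dsw->ports[candidate_port_id].immigration_load,
 *         candidate_flow_load);
 */
static inline void
account_immigration(int32_t *immigration_load, int16_t flow_load)
{
	/* Atomic read-modify-write with no ordering constraints. */
	__atomic_fetch_add(immigration_load, flow_load, __ATOMIC_RELAXED);
}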
^ permalink raw reply [flat|nested] 21+ messages in thread
end of thread, newest message: 2020-04-16 6:15 UTC
Thread overview: 21+ messages
2020-03-09 6:50 [dpdk-dev] [PATCH 0/8] DSW performance and statistics improvements Mattias Rönnblom
2020-03-09 6:50 ` [dpdk-dev] [PATCH 1/8] event/dsw: reduce latency in low-load situations Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 2/8] event/dsw: reduce max flows to speed up load balancing Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 3/8] event/dsw: extend statistics Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 4/8] event/dsw: improve migration mechanism Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 6/8] event/dsw: remove redundant control ring poll Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 7/8] event/dsw: remove unnecessary read barrier Mattias Rönnblom
2020-03-09 6:51 ` [dpdk-dev] [PATCH 8/8] event/dsw: add port busy cycles xstats Mattias Rönnblom
2020-04-04 14:35 ` [dpdk-dev] [EXT] [PATCH 0/8] DSW performance and statistics improvements Jerin Jacob Kollanukkaran
2020-04-15 16:37 ` David Marchand
2020-04-15 17:39 ` Mattias Rönnblom
2020-04-15 17:45 ` [dpdk-dev] [dpdk-ci] " Thomas Monjalon
2020-04-15 18:09 ` Mattias Rönnblom
2020-04-15 18:15 ` [dpdk-dev] [PATCH v2] event/dsw: fix gcc 4.8 false positive warning Mattias Rönnblom
2020-04-15 19:45 ` David Marchand
2020-04-16 6:15 ` Mattias Rönnblom
2020-03-09 7:17 [dpdk-dev] [PATCH 5/8] event/dsw: avoid migration waves in large systems Jerin Jacob Kollanukkaran
2020-03-09 7:58 ` Mattias Rönnblom
2020-03-09 8:12 ` Jerin Jacob Kollanukkaran
2020-03-09 8:41 ` Mattias Rönnblom