From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: dev@dpdk.org
Date: Mon, 9 Mar 2020 07:51:01 +0100
Message-ID: <20200309065106.23800-4-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200309065106.23800-1-mattias.ronnblom@ericsson.com>
References: <20200309065106.23800-1-mattias.ronnblom@ericsson.com>
Subject: [dpdk-dev] [PATCH 3/8] event/dsw: extend statistics

Extend DSW xstats.

To allow visualization of migrations, track the number of flow
immigrations in "port_<N>_immigrations". The "port_<N>_migrations"
statistic retains its legacy semantics, but is renamed
"port_<N>_emigrations".

Expose the number of events currently undergoing processing
(i.e. pending releases) at a particular port.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
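Usage note (not part of the patch): the counters described above are
ordinary eventdev xstats, so an application can read them through the
generic xstats API using the per-port names registered in dsw_xstats.c
below. The following is a minimal sketch only; the device id, port id
and the helper name are illustrative assumptions.

/* Sketch: read the new DSW per-port migration counters by name.
 * Assumes an already configured and started event device.
 */
#include <inttypes.h>
#include <stdio.h>

#include <rte_eventdev.h>

static void
print_dsw_migration_xstats(uint8_t dev_id, unsigned int port_id)
{
	static const char *const counters[] = {
		"emigrations", "immigrations",
		"migration_latency", "pending_releases"
	};
	unsigned int i;

	for (i = 0; i < RTE_DIM(counters); i++) {
		char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
		unsigned int id;
		uint64_t value;

		/* The names follow the "port_%u_<counter>" format used
		 * in the dsw_port_xstats[] table below.
		 */
		snprintf(name, sizeof(name), "port_%u_%s", port_id,
			 counters[i]);

		value = rte_event_dev_xstats_by_name_get(dev_id, name, &id);
		printf("%s: %" PRIu64 "\n", name, value);
	}
}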
 drivers/event/dsw/dsw_evdev.h  |  16 ++--
 drivers/event/dsw/dsw_event.c  | 131 +++++++++++++++++----------------
 drivers/event/dsw/dsw_xstats.c |  17 +++--
 3 files changed, 91 insertions(+), 73 deletions(-)

diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index dc44bce81..2c7f9efa3 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -162,18 +162,20 @@ struct dsw_port {
 	uint64_t total_busy_cycles;
 
 	/* For the ctl interface and flow migration mechanism. */
-	uint64_t next_migration;
+	uint64_t next_emigration;
 	uint64_t migration_interval;
 	enum dsw_migration_state migration_state;
-	uint64_t migration_start;
-	uint64_t migrations;
-	uint64_t migration_latency;
+	uint64_t emigration_start;
+	uint64_t emigrations;
+	uint64_t emigration_latency;
 
-	uint8_t migration_target_port_id;
-	struct dsw_queue_flow migration_target_qf;
+	uint8_t emigration_target_port_id;
+	struct dsw_queue_flow emigration_target_qf;
 
 	uint8_t cfm_cnt;
 
+	uint64_t immigrations;
+
 	uint16_t paused_flows_len;
 	struct dsw_queue_flow paused_flows[DSW_MAX_PAUSED_FLOWS];
 
@@ -187,11 +189,13 @@ struct dsw_port {
 	uint16_t seen_events_idx;
 	struct dsw_queue_flow seen_events[DSW_MAX_EVENTS_RECORDED];
 
+	uint64_t enqueue_calls;
 	uint64_t new_enqueued;
 	uint64_t forward_enqueued;
 	uint64_t release_enqueued;
 	uint64_t queue_enqueued[DSW_MAX_QUEUES];
 
+	uint64_t dequeue_calls;
 	uint64_t dequeued;
 	uint64_t queue_dequeued[DSW_MAX_QUEUES];
 
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index 7f1f29218..69cff7aa2 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -385,12 +385,12 @@ dsw_retrieve_port_loads(struct dsw_evdev *dsw, int16_t *port_loads,
 }
 
 static bool
-dsw_select_migration_target(struct dsw_evdev *dsw,
-			    struct dsw_port *source_port,
-			    struct dsw_queue_flow_burst *bursts,
-			    uint16_t num_bursts, int16_t *port_loads,
-			    int16_t max_load, struct dsw_queue_flow *target_qf,
-			    uint8_t *target_port_id)
+dsw_select_emigration_target(struct dsw_evdev *dsw,
+			     struct dsw_port *source_port,
+			     struct dsw_queue_flow_burst *bursts,
+			     uint16_t num_bursts, int16_t *port_loads,
+			     int16_t max_load, struct dsw_queue_flow *target_qf,
+			     uint8_t *target_port_id)
 {
 	uint16_t source_load = port_loads[source_port->id];
 	uint16_t i;
@@ -598,39 +598,39 @@ dsw_port_flush_paused_events(struct dsw_evdev *dsw,
 }
 
 static void
-dsw_port_migration_stats(struct dsw_port *port)
+dsw_port_emigration_stats(struct dsw_port *port)
 {
-	uint64_t migration_latency;
+	uint64_t emigration_latency;
 
-	migration_latency = (rte_get_timer_cycles() - port->migration_start);
-	port->migration_latency += migration_latency;
-	port->migrations++;
+	emigration_latency = (rte_get_timer_cycles() - port->emigration_start);
+	port->emigration_latency += emigration_latency;
+	port->emigrations++;
 }
 
 static void
-dsw_port_end_migration(struct dsw_evdev *dsw, struct dsw_port *port)
+dsw_port_end_emigration(struct dsw_evdev *dsw, struct dsw_port *port)
 {
-	uint8_t queue_id = port->migration_target_qf.queue_id;
-	uint16_t flow_hash = port->migration_target_qf.flow_hash;
+	uint8_t queue_id = port->emigration_target_qf.queue_id;
+	uint16_t flow_hash = port->emigration_target_qf.flow_hash;
 
 	port->migration_state = DSW_MIGRATION_STATE_IDLE;
 	port->seen_events_len = 0;
 
-	dsw_port_migration_stats(port);
+	dsw_port_emigration_stats(port);
 
 	if (dsw->queues[queue_id].schedule_type != RTE_SCHED_TYPE_PARALLEL) {
 		dsw_port_remove_paused_flow(port, queue_id, flow_hash);
 		dsw_port_flush_paused_events(dsw, port, queue_id, flow_hash);
 	}
 
-	DSW_LOG_DP_PORT(DEBUG, port->id, "Migration completed for queue_id "
+	DSW_LOG_DP_PORT(DEBUG, port->id, "Emigration completed for queue_id "
 			"%d flow_hash %d.\n", queue_id, flow_hash);
 }
 
 static void
-dsw_port_consider_migration(struct dsw_evdev *dsw,
-			    struct dsw_port *source_port,
-			    uint64_t now)
+dsw_port_consider_emigration(struct dsw_evdev *dsw,
+			     struct dsw_port *source_port,
+			     uint64_t now)
 {
 	bool any_port_below_limit;
 	struct dsw_queue_flow *seen_events = source_port->seen_events;
@@ -640,7 +640,7 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
 	int16_t source_port_load;
 	int16_t port_loads[dsw->num_ports];
 
-	if (now < source_port->next_migration)
+	if (now < source_port->next_emigration)
 		return;
 
 	if (dsw->num_ports == 1)
@@ -649,25 +649,25 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
 	if (seen_events_len < DSW_MAX_EVENTS_RECORDED)
 		return;
 
-	DSW_LOG_DP_PORT(DEBUG, source_port->id, "Considering migration.\n");
+	DSW_LOG_DP_PORT(DEBUG, source_port->id, "Considering emigration.\n");
 
 	/* Randomize interval to avoid having all threads considering
-	 * migration at the same in point in time, which might lead to
-	 * all choosing the same target port.
+	 * emigration at the same in point in time, which might lead
+	 * to all choosing the same target port.
 	 */
-	source_port->next_migration = now +
+	source_port->next_emigration = now +
 		source_port->migration_interval / 2 +
 		rte_rand() % source_port->migration_interval;
 
 	if (source_port->migration_state != DSW_MIGRATION_STATE_IDLE) {
 		DSW_LOG_DP_PORT(DEBUG, source_port->id,
-				"Migration already in progress.\n");
+				"Emigration already in progress.\n");
 		return;
 	}
 
 	/* For simplicity, avoid migration in the unlikely case there
 	 * is still events to consume in the in_buffer (from the last
-	 * migration).
+	 * emigration).
 	 */
 	if (source_port->in_buffer_len > 0) {
 		DSW_LOG_DP_PORT(DEBUG, source_port->id, "There are still "
@@ -719,52 +719,56 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
 	}
 
 	/* The strategy is to first try to find a flow to move to a
-	 * port with low load (below the migration-attempt
+	 * port with low load (below the emigration-attempt
 	 * threshold). If that fails, we try to find a port which is
 	 * below the max threshold, and also less loaded than this
 	 * port is.
 	 */
-	if (!dsw_select_migration_target(dsw, source_port, bursts, num_bursts,
-					 port_loads,
-					 DSW_MIN_SOURCE_LOAD_FOR_MIGRATION,
-					 &source_port->migration_target_qf,
-					 &source_port->migration_target_port_id)
+	if (!dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
+					  port_loads,
+					  DSW_MIN_SOURCE_LOAD_FOR_MIGRATION,
+					  &source_port->emigration_target_qf,
+					  &source_port->emigration_target_port_id)
 	    &&
-	    !dsw_select_migration_target(dsw, source_port, bursts, num_bursts,
-					 port_loads,
-					 DSW_MAX_TARGET_LOAD_FOR_MIGRATION,
-					 &source_port->migration_target_qf,
-					 &source_port->migration_target_port_id))
+	    !dsw_select_emigration_target(dsw, source_port, bursts, num_bursts,
+					  port_loads,
+					  DSW_MAX_TARGET_LOAD_FOR_MIGRATION,
+					  &source_port->emigration_target_qf,
+					  &source_port->emigration_target_port_id))
 		return;
 
 	DSW_LOG_DP_PORT(DEBUG, source_port->id, "Migrating queue_id %d "
 			"flow_hash %d from port %d to port %d.\n",
-			source_port->migration_target_qf.queue_id,
-			source_port->migration_target_qf.flow_hash,
-			source_port->id, source_port->migration_target_port_id);
+			source_port->emigration_target_qf.queue_id,
+			source_port->emigration_target_qf.flow_hash,
+			source_port->id,
+			source_port->emigration_target_port_id);
 
 	/* We have a winner. */
 
 	source_port->migration_state = DSW_MIGRATION_STATE_PAUSING;
-	source_port->migration_start = rte_get_timer_cycles();
+	source_port->emigration_start = rte_get_timer_cycles();
 
 	/* No need to go through the whole pause procedure for
 	 * parallel queues, since atomic/ordered semantics need not to
 	 * be maintained.
 	 */
-	if (dsw->queues[source_port->migration_target_qf.queue_id].schedule_type
-	    == RTE_SCHED_TYPE_PARALLEL) {
-		uint8_t queue_id = source_port->migration_target_qf.queue_id;
-		uint16_t flow_hash = source_port->migration_target_qf.flow_hash;
-		uint8_t dest_port_id = source_port->migration_target_port_id;
+	if (dsw->queues[source_port->emigration_target_qf.queue_id].
+	    schedule_type == RTE_SCHED_TYPE_PARALLEL) {
+		uint8_t queue_id =
+			source_port->emigration_target_qf.queue_id;
+		uint16_t flow_hash =
+			source_port->emigration_target_qf.flow_hash;
+		uint8_t dest_port_id =
+			source_port->emigration_target_port_id;
 
 		/* Single byte-sized stores are always atomic. */
 		dsw->queues[queue_id].flow_to_port_map[flow_hash] =
 			dest_port_id;
 		rte_smp_wmb();
 
-		dsw_port_end_migration(dsw, source_port);
+		dsw_port_end_emigration(dsw, source_port);
 
 		return;
 	}
@@ -775,12 +779,12 @@ dsw_port_consider_migration(struct dsw_evdev *dsw,
 	dsw_port_flush_out_buffers(dsw, source_port);
 
 	dsw_port_add_paused_flow(source_port,
-				 source_port->migration_target_qf.queue_id,
-				 source_port->migration_target_qf.flow_hash);
+				 source_port->emigration_target_qf.queue_id,
+				 source_port->emigration_target_qf.flow_hash);
 
 	dsw_port_ctl_broadcast(dsw, source_port, DSW_CTL_PAUS_REQ,
-			       source_port->migration_target_qf.queue_id,
-			       source_port->migration_target_qf.flow_hash);
+			       source_port->emigration_target_qf.queue_id,
+			       source_port->emigration_target_qf.flow_hash);
 	source_port->cfm_cnt = 0;
 }
 
@@ -808,6 +812,9 @@ dsw_port_handle_unpause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
 
 	rte_smp_rmb();
 
+	if (dsw_schedule(dsw, queue_id, paused_flow_hash) == port->id)
+		port->immigrations++;
+
 	dsw_port_ctl_enqueue(&dsw->ports[originating_port_id], &cfm);
 
 	dsw_port_flush_paused_events(dsw, port, queue_id, paused_flow_hash);
@@ -816,10 +823,10 @@ dsw_port_handle_unpause_flow(struct dsw_evdev *dsw, struct dsw_port *port,
 #define FORWARD_BURST_SIZE (32)
 
 static void
-dsw_port_forward_migrated_flow(struct dsw_port *source_port,
-			       struct rte_event_ring *dest_ring,
-			       uint8_t queue_id,
-			       uint16_t flow_hash)
+dsw_port_forward_emigrated_flow(struct dsw_port *source_port,
+				struct rte_event_ring *dest_ring,
+				uint8_t queue_id,
+				uint16_t flow_hash)
 {
 	uint16_t events_left;
 
@@ -868,9 +875,9 @@ static void
 dsw_port_move_migrating_flow(struct dsw_evdev *dsw,
 			     struct dsw_port *source_port)
 {
-	uint8_t queue_id = source_port->migration_target_qf.queue_id;
-	uint16_t flow_hash = source_port->migration_target_qf.flow_hash;
-	uint8_t dest_port_id = source_port->migration_target_port_id;
+	uint8_t queue_id = source_port->emigration_target_qf.queue_id;
+	uint16_t flow_hash = source_port->emigration_target_qf.flow_hash;
+	uint8_t dest_port_id = source_port->emigration_target_port_id;
 	struct dsw_port *dest_port = &dsw->ports[dest_port_id];
 
 	dsw_port_flush_out_buffers(dsw, source_port);
@@ -880,8 +887,8 @@ dsw_port_move_migrating_flow(struct dsw_evdev *dsw,
 
 	dsw->queues[queue_id].flow_to_port_map[flow_hash] = dest_port_id;
 
-	dsw_port_forward_migrated_flow(source_port, dest_port->in_ring,
-				       queue_id, flow_hash);
+	dsw_port_forward_emigrated_flow(source_port, dest_port->in_ring,
+					queue_id, flow_hash);
 
 	/* Flow table update and migration destination port's enqueues
 	 * must be seen before the control message.
@@ -907,7 +914,7 @@ dsw_port_handle_confirm(struct dsw_evdev *dsw, struct dsw_port *port)
 		port->migration_state = DSW_MIGRATION_STATE_FORWARDING;
 		break;
 	case DSW_MIGRATION_STATE_UNPAUSING:
-		dsw_port_end_migration(dsw, port);
+		dsw_port_end_emigration(dsw, port);
 		break;
 	default:
 		RTE_ASSERT(0);
@@ -987,7 +994,7 @@ dsw_port_bg_process(struct dsw_evdev *dsw, struct dsw_port *port)
 
 		dsw_port_consider_load_update(port, now);
 
-		dsw_port_consider_migration(dsw, port, now);
+		dsw_port_consider_emigration(dsw, port, now);
 
 		port->ops_since_bg_task = 0;
 	}
diff --git a/drivers/event/dsw/dsw_xstats.c b/drivers/event/dsw/dsw_xstats.c
index c3f5db89c..d332a57b6 100644
--- a/drivers/event/dsw/dsw_xstats.c
+++ b/drivers/event/dsw/dsw_xstats.c
@@ -84,16 +84,17 @@ dsw_xstats_port_get_queue_dequeued(struct dsw_evdev *dsw, uint8_t port_id,
 	return dsw->ports[port_id].queue_dequeued[queue_id];
 }
 
-DSW_GEN_PORT_ACCESS_FN(migrations)
+DSW_GEN_PORT_ACCESS_FN(emigrations)
+DSW_GEN_PORT_ACCESS_FN(immigrations)
 
 static uint64_t
 dsw_xstats_port_get_migration_latency(struct dsw_evdev *dsw, uint8_t port_id,
 				      uint8_t queue_id __rte_unused)
 {
-	uint64_t total_latency = dsw->ports[port_id].migration_latency;
-	uint64_t num_migrations = dsw->ports[port_id].migrations;
+	uint64_t total_latency = dsw->ports[port_id].emigration_latency;
+	uint64_t num_emigrations = dsw->ports[port_id].emigrations;
 
-	return num_migrations > 0 ? total_latency / num_migrations : 0;
+	return num_emigrations > 0 ? total_latency / num_emigrations : 0;
 }
 
 static uint64_t
@@ -110,6 +111,8 @@ dsw_xstats_port_get_event_proc_latency(struct dsw_evdev *dsw, uint8_t port_id,
 
 DSW_GEN_PORT_ACCESS_FN(inflight_credits)
 
+DSW_GEN_PORT_ACCESS_FN(pending_releases)
+
 static uint64_t
 dsw_xstats_port_get_load(struct dsw_evdev *dsw, uint8_t port_id,
 			 uint8_t queue_id __rte_unused)
@@ -136,14 +139,18 @@ static struct dsw_xstats_port dsw_port_xstats[] = {
 	  false },
 	{ "port_%u_queue_%u_dequeued", dsw_xstats_port_get_queue_dequeued,
 	  true },
-	{ "port_%u_migrations", dsw_xstats_port_get_migrations,
+	{ "port_%u_emigrations", dsw_xstats_port_get_emigrations,
 	  false },
 	{ "port_%u_migration_latency", dsw_xstats_port_get_migration_latency,
 	  false },
+	{ "port_%u_immigrations", dsw_xstats_port_get_immigrations,
+	  false },
 	{ "port_%u_event_proc_latency", dsw_xstats_port_get_event_proc_latency,
 	  false },
 	{ "port_%u_inflight_credits", dsw_xstats_port_get_inflight_credits,
 	  false },
+	{ "port_%u_pending_releases", dsw_xstats_port_get_pending_releases,
+	  false },
 	{ "port_%u_load", dsw_xstats_port_get_load,
 	  false },
 	{ "port_%u_last_bg", dsw_xstats_port_get_last_bg,
-- 
2.17.1
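
A related usage note (again an illustrative sketch, not part of the
patch): the per-port "migration_latency" xstat is an average expressed
in timer cycles, since the driver accumulates it with
rte_get_timer_cycles(). A monitoring tool would typically convert it to
wall-clock time, for example:

/* Sketch: report the average emigration latency in microseconds.
 * dev_id, port_id and the helper name are placeholders.
 */
#include <stdio.h>

#include <rte_cycles.h>
#include <rte_eventdev.h>

static void
print_dsw_migration_latency_us(uint8_t dev_id, unsigned int port_id)
{
	char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
	uint64_t latency_cycles;

	snprintf(name, sizeof(name), "port_%u_migration_latency", port_id);
	latency_cycles = rte_event_dev_xstats_by_name_get(dev_id, name, NULL);

	printf("port %u average emigration latency: %.1f us\n", port_id,
	       latency_cycles * 1e6 / rte_get_timer_hz());
}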