From: Mattias Rönnblom
To: Jerin Jacob
CC: Peter Nilsson J, Svante Järvstråt, Heng Wang, Mattias Rönnblom
Subject: [PATCH] event/dsw: support explicit release only mode
Date: Fri, 24 May 2024 21:24:37 +0200
Message-ID: <20240524192437.183960-1-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231109183323.2880-1-mattias.ronnblom@ericsson.com>
References: <20231109183323.2880-1-mattias.ronnblom@ericsson.com>
List-Id: DPDK patches and discussions
X-BeenThere: dev@dpdk.org

Add the RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE capability to the
DSW event device. This feature may be used by an EAL thread to pull
more work from the work scheduler, without giving up the option to
forward events originating from a previous dequeue batch. This in turn
allows an EAL thread to be productive while waiting for a hardware
accelerator to complete some operation.

Prior to this change, DSW didn't make any distinction between
RTE_EVENT_OP_FORWARD and RTE_EVENT_OP_NEW type events, other than that
new events would be backpressured earlier.

After this change, DSW tracks the number of released events (i.e.,
events of type RTE_EVENT_OP_FORWARD and RTE_EVENT_OP_RELEASE) that
have been enqueued. For efficiency reasons, DSW does not track the
*identity* of individual events. This in turn implies that, at a
certain stage in the flow migration process, DSW must wait for all
pending releases (on the migration source port only) to be received
from the application, to assure that no event pertaining to any of the
to-be-migrated flows is still being processed.

With this change, DSW starts making a distinction between forward and
new type events for credit allocation purposes. Only RTE_EVENT_OP_NEW
events need credits. All events marked as RTE_EVENT_OP_FORWARD must
have a corresponding dequeued event from a previous dequeue batch.
Flow migration for flows on RTE_SCHED_TYPE_PARALLEL queues remains
unaffected by this change.

A side-effect of the tweaked DSW migration logic is that the migration
latency is reduced, regardless of whether implicit release is enabled
or not.

Signed-off-by: Mattias Rönnblom
---
 drivers/event/dsw/dsw_evdev.c |  8 +++-
 drivers/event/dsw/dsw_evdev.h |  3 ++
 drivers/event/dsw/dsw_event.c | 84 ++++++++++++++++++++++-------------
 3 files changed, 62 insertions(+), 33 deletions(-)

diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c
index ab0420b549..0dea1091e3 100644
--- a/drivers/event/dsw/dsw_evdev.c
+++ b/drivers/event/dsw/dsw_evdev.c
@@ -23,15 +23,20 @@ dsw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
 	struct rte_event_ring *in_ring;
 	struct rte_ring *ctl_in_ring;
 	char ring_name[RTE_RING_NAMESIZE];
+	bool implicit_release;
 
 	port = &dsw->ports[port_id];
 
+	implicit_release =
+		!(conf->event_port_cfg & RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL);
+
 	*port = (struct dsw_port) {
 		.id = port_id,
 		.dsw = dsw,
 		.dequeue_depth = conf->dequeue_depth,
 		.enqueue_depth = conf->enqueue_depth,
-		.new_event_threshold = conf->new_event_threshold
+		.new_event_threshold = conf->new_event_threshold,
+		.implicit_release = implicit_release
 	};
 
 	snprintf(ring_name, sizeof(ring_name), "dsw%d_p%u", dev->data->dev_id,
@@ -222,6 +227,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused,
 		RTE_EVENT_DEV_CAP_ATOMIC |
 		RTE_EVENT_DEV_CAP_PARALLEL |
 		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED|
+		RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE|
 		RTE_EVENT_DEV_CAP_NONSEQ_MODE|
 		RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT|
 		RTE_EVENT_DEV_CAP_CARRY_FLOW_ID
diff --git a/drivers/event/dsw/dsw_evdev.h b/drivers/event/dsw/dsw_evdev.h
index 2018306265..d0d59478eb 100644
--- a/drivers/event/dsw/dsw_evdev.h
+++ b/drivers/event/dsw/dsw_evdev.h
@@ -128,6 +128,7 @@ struct dsw_queue_flow {
 enum dsw_migration_state {
 	DSW_MIGRATION_STATE_IDLE,
 	DSW_MIGRATION_STATE_PAUSING,
+	DSW_MIGRATION_STATE_FINISH_PENDING,
 	DSW_MIGRATION_STATE_UNPAUSING
 };
 
@@ -148,6 +149,8 @@ struct __rte_cache_aligned dsw_port {
 
 	int32_t new_event_threshold;
 
+	bool implicit_release;
+
 	uint16_t pending_releases;
 
 	uint16_t next_parallel_flow_id;
diff --git a/drivers/event/dsw/dsw_event.c b/drivers/event/dsw/dsw_event.c
index ca2b8e1032..f23079fd73 100644
--- a/drivers/event/dsw/dsw_event.c
+++ b/drivers/event/dsw/dsw_event.c
@@ -1149,6 +1149,15 @@ dsw_port_move_emigrating_flows(struct dsw_evdev *dsw,
 	source_port->migration_state = DSW_MIGRATION_STATE_UNPAUSING;
 }
 
+static void
+dsw_port_try_finish_pending(struct dsw_evdev *dsw, struct dsw_port *source_port)
+{
+	if (unlikely(source_port->migration_state ==
+		     DSW_MIGRATION_STATE_FINISH_PENDING &&
+		     source_port->pending_releases == 0))
+		dsw_port_move_emigrating_flows(dsw, source_port);
+}
+
 static void
 dsw_port_handle_confirm(struct dsw_evdev *dsw, struct dsw_port *port)
 {
@@ -1157,14 +1166,15 @@ dsw_port_handle_confirm(struct dsw_evdev *dsw, struct dsw_port *port)
 	if (port->cfm_cnt == (dsw->num_ports-1)) {
 		switch (port->migration_state) {
 		case DSW_MIGRATION_STATE_PAUSING:
-			dsw_port_move_emigrating_flows(dsw, port);
+			port->migration_state =
+				DSW_MIGRATION_STATE_FINISH_PENDING;
 			break;
 		case DSW_MIGRATION_STATE_UNPAUSING:
 			dsw_port_end_emigration(dsw, port,
 						RTE_SCHED_TYPE_ATOMIC);
 			break;
 		default:
-			RTE_ASSERT(0);
+			RTE_VERIFY(0);
 			break;
 		}
 	}
@@ -1203,19 +1213,18 @@ dsw_port_note_op(struct dsw_port *port, uint16_t num_events)
 static void
 dsw_port_bg_process(struct dsw_evdev *dsw, struct dsw_port *port)
 {
-	/* For simplicity (in the migration logic), avoid all
-	 * background processing in case event processing is in
-	 * progress.
-	 */
-	if (port->pending_releases > 0)
-		return;
-
 	/* Polling the control ring is relatively inexpensive, and
 	 * polling it often helps bringing down migration latency, so
 	 * do this for every iteration.
 	 */
 	dsw_port_ctl_process(dsw, port);
 
+	/* Always check if a migration is waiting for pending releases
+	 * to arrive, to minimize the amount of time dequeuing events
+	 * from the port is disabled.
+	 */
+	dsw_port_try_finish_pending(dsw, port);
+
 	/* To avoid considering migration and flushing output buffers
 	 * on every dequeue/enqueue call, the scheduler only performs
 	 * such 'background' tasks every nth
@@ -1260,8 +1269,8 @@ static __rte_always_inline uint16_t
 dsw_event_enqueue_burst_generic(struct dsw_port *source_port,
 				const struct rte_event events[],
 				uint16_t events_len, bool op_types_known,
-				uint16_t num_new, uint16_t num_release,
-				uint16_t num_non_release)
+				uint16_t num_new, uint16_t num_forward,
+				uint16_t num_release)
 {
 	struct dsw_evdev *dsw = source_port->dsw;
 	bool enough_credits;
@@ -1295,14 +1304,14 @@ dsw_event_enqueue_burst_generic(struct dsw_port *source_port,
 	if (!op_types_known)
 		for (i = 0; i < events_len; i++) {
 			switch (events[i].op) {
-			case RTE_EVENT_OP_RELEASE:
-				num_release++;
-				break;
 			case RTE_EVENT_OP_NEW:
 				num_new++;
-				/* Falls through. */
-			default:
-				num_non_release++;
+				break;
+			case RTE_EVENT_OP_FORWARD:
+				num_forward++;
+				break;
+			case RTE_EVENT_OP_RELEASE:
+				num_release++;
 				break;
 			}
 		}
@@ -1318,15 +1327,15 @@ dsw_event_enqueue_burst_generic(struct dsw_port *source_port,
 					  source_port->new_event_threshold))
 			return 0;
 
-	enough_credits = dsw_port_acquire_credits(dsw, source_port,
-						  num_non_release);
+	enough_credits = dsw_port_acquire_credits(dsw, source_port, num_new);
 	if (unlikely(!enough_credits))
 		return 0;
 
-	source_port->pending_releases -= num_release;
+	dsw_port_return_credits(dsw, source_port, num_release);
+
+	source_port->pending_releases -= (num_forward + num_release);
 
-	dsw_port_enqueue_stats(source_port, num_new,
-			       num_non_release-num_new, num_release);
+	dsw_port_enqueue_stats(source_port, num_new, num_forward, num_release);
 
 	for (i = 0; i < events_len; i++) {
 		const struct rte_event *event = &events[i];
@@ -1338,9 +1347,9 @@ dsw_event_enqueue_burst_generic(struct dsw_port *source_port,
 	}
 
 	DSW_LOG_DP_PORT(DEBUG, source_port->id, "%d non-release events "
-			"accepted.\n", num_non_release);
+			"accepted.\n", num_new + num_forward);
 
-	return (num_non_release + num_release);
+	return (num_new + num_forward + num_release);
 }
 
 uint16_t
@@ -1367,7 +1376,7 @@ dsw_event_enqueue_new_burst(void *port, const struct rte_event events[],
 
 	return dsw_event_enqueue_burst_generic(source_port, events,
 					       events_len, true, events_len,
-					       0, events_len);
+					       0, 0);
 }
 
 uint16_t
@@ -1380,8 +1389,8 @@ dsw_event_enqueue_forward_burst(void *port, const struct rte_event events[],
 		events_len = source_port->enqueue_depth;
 
 	return dsw_event_enqueue_burst_generic(source_port, events,
-					       events_len, true, 0, 0,
-					       events_len);
+					       events_len, true, 0,
+					       events_len, 0);
 }
 
 uint16_t
@@ -1493,21 +1502,34 @@ dsw_event_dequeue_burst(void *port, struct rte_event *events, uint16_t num,
 	struct dsw_evdev *dsw = source_port->dsw;
 	uint16_t dequeued;
 
-	source_port->pending_releases = 0;
+	if (source_port->implicit_release) {
+		dsw_port_return_credits(dsw, port,
+					source_port->pending_releases);
+
+		source_port->pending_releases = 0;
+	}
 
 	dsw_port_bg_process(dsw, source_port);
 
 	if (unlikely(num > source_port->dequeue_depth))
 		num = source_port->dequeue_depth;
 
-	dequeued = dsw_port_dequeue_burst(source_port, events, num);
+	if (unlikely(source_port->migration_state ==
+		     DSW_MIGRATION_STATE_FINISH_PENDING))
+		/* Do not take on new work - only finish outstanding
+		 * (unreleased) events, to allow the migration
+		 * procedure to complete.
+		 */
+		dequeued = 0;
+	else
+		dequeued = dsw_port_dequeue_burst(source_port, events, num);
 
 	if (unlikely(source_port->migration_state ==
 		     DSW_MIGRATION_STATE_PAUSING))
 		dsw_port_stash_migrating_events(source_port, events,
 						&dequeued);
 
-	source_port->pending_releases = dequeued;
+	source_port->pending_releases += dequeued;
 
 	dsw_port_load_record(source_port, dequeued);
 
@@ -1517,8 +1539,6 @@ dsw_event_dequeue_burst(void *port, struct rte_event *events, uint16_t num,
 	DSW_LOG_DP_PORT(DEBUG, source_port->id, "Dequeued %d events.\n",
 			dequeued);
 
-	dsw_port_return_credits(dsw, source_port, dequeued);
-
 	/* One potential optimization one might think of is to
 	 * add a migration state (prior to 'pausing'), and
 	 * only record seen events when the port is in this
-- 
2.34.1