From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: dev@dpdk.org
Cc: Jerin Jacob, Peter Nilsson, Heng Wang, Naga Harish K S V,
	Pavan Nikhilesh, Gujjar Abhinandan S, Erik Gabriel Carrillo,
	Shijith Thotton, Hemant Agrawal, Sachin Saxena, Liang Ma,
	Peter Mccarthy, Zhirun Yan, Mattias Rönnblom
Subject: [PATCH v6 2/3] test: add dispatcher test suite
Date: Mon, 9 Oct 2023 20:17:10 +0200
Message-ID: <20231009181711.440865-3-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231009181711.440865-1-mattias.ronnblom@ericsson.com>
References: <20230928073056.359356-2-mattias.ronnblom@ericsson.com>
	<20231009181711.440865-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

Add unit tests for the dispatcher.

--

PATCH v6:
 o Register test as "fast". (David Marchand)
 o Use single tab as indentation for continuation lines in
   multiple-line function prototypes. (David Marchand)
 o Add Signed-off-by line. (David Marchand)
 o Use DPDK atomics wrapper API instead of C11 atomics.

PATCH v5:
 o Update test suite to use pointer and not integer id when calling
   dispatcher functions.

PATCH v3:
 o Adapt the test suite to dispatcher API name changes.

PATCH v2:
 o Test finalize callback functionality.
 o Test handler and finalizer count upper limits.
 o Add statistics reset test.
 o Make sure the dispatcher supplies the proper event dev id and port
   id back to the application.

PATCH:
 o Extend test to cover often-used handler optimization feature.

RFC v4:
 o Adapt to non-const events in process function prototype.
Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 MAINTAINERS                |    1 +
 app/test/meson.build       |    1 +
 app/test/test_dispatcher.c | 1050 ++++++++++++++++++++++++++++++++++++
 3 files changed, 1052 insertions(+)
 create mode 100644 app/test/test_dispatcher.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a4372701c4..262401d43d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1736,6 +1736,7 @@ F: lib/node/
 
 Dispatcher - EXPERIMENTAL
 M: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
 F: lib/dispatcher/
+F: app/test/test_dispatcher.c
 
 Test Applications
diff --git a/app/test/meson.build b/app/test/meson.build
index bf9fc90612..ace10327f8 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -59,6 +59,7 @@ source_file_deps = {
         'test_cycles.c': [],
         'test_debug.c': [],
         'test_devargs.c': ['kvargs'],
+        'test_dispatcher.c': ['dispatcher'],
         'test_distributor.c': ['distributor'],
         'test_distributor_perf.c': ['distributor'],
         'test_dmadev.c': ['dmadev', 'bus_vdev'],
diff --git a/app/test/test_dispatcher.c b/app/test/test_dispatcher.c
new file mode 100644
index 0000000000..5a9c972d1f
--- /dev/null
+++ b/app/test/test_dispatcher.c
@@ -0,0 +1,1050 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Ericsson AB
+ */
+
+#include <rte_bus_vdev.h>
+#include <rte_dispatcher.h>
+#include <rte_eventdev.h>
+#include <rte_random.h>
+#include <rte_service.h>
+#include <rte_stdatomic.h>
+
+#include "test.h"
+
+#define NUM_WORKERS 3
+
+#define NUM_PORTS (NUM_WORKERS + 1)
+#define WORKER_PORT_ID(worker_idx) (worker_idx)
+#define DRIVER_PORT_ID (NUM_PORTS - 1)
+
+#define NUM_SERVICE_CORES NUM_WORKERS
+
+/* Eventdev */
+#define NUM_QUEUES 8
+#define LAST_QUEUE_ID (NUM_QUEUES - 1)
+#define MAX_EVENTS 4096
+#define NEW_EVENT_THRESHOLD (MAX_EVENTS / 2)
+#define DEQUEUE_BURST_SIZE 32
+#define ENQUEUE_BURST_SIZE 32
+
+#define NUM_EVENTS 10000000
+#define NUM_FLOWS 16
+
+#define DSW_VDEV "event_dsw0"
+
+struct app_queue {
+	uint8_t queue_id;
+	uint64_t sn[NUM_FLOWS];
+	int dispatcher_reg_id;
+};
+
+struct cb_count {
+	uint8_t expected_event_dev_id;
+	uint8_t expected_event_port_id[RTE_MAX_LCORE];
+	RTE_ATOMIC(int) count;
+};
+
+struct test_app {
+	uint8_t event_dev_id;
+	struct rte_dispatcher *dispatcher;
+	uint32_t dispatcher_service_id;
+
+	unsigned int service_lcores[NUM_SERVICE_CORES];
+
+	int never_match_reg_id;
+	uint64_t never_match_count;
+	struct cb_count never_process_count;
+
+	struct app_queue queues[NUM_QUEUES];
+
+	int finalize_reg_id;
+	struct cb_count finalize_count;
+
+	bool running;
+
+	RTE_ATOMIC(int) completed_events;
+	RTE_ATOMIC(int) errors;
+};
+
+static struct test_app *
+test_app_create(void)
+{
+	int i;
+	struct test_app *app;
+
+	app = calloc(1, sizeof(struct test_app));
+
+	if (app == NULL)
+		return NULL;
+
+	for (i = 0; i < NUM_QUEUES; i++)
+		app->queues[i].queue_id = i;
+
+	return app;
+}
+
+static void
+test_app_free(struct test_app *app)
+{
+	free(app);
+}
+
+static int
+test_app_create_vdev(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_vdev_init(DSW_VDEV, NULL);
+	if (rc < 0)
+		return TEST_SKIPPED;
+
+	rc = rte_event_dev_get_dev_id(DSW_VDEV);
+
+	app->event_dev_id = (uint8_t)rc;
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_destroy_vdev(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_event_dev_close(app->event_dev_id);
+	TEST_ASSERT_SUCCESS(rc, "Error while closing event device");
+
+	rc = rte_vdev_uninit(DSW_VDEV);
+	TEST_ASSERT_SUCCESS(rc, "Error while uninitializing virtual device");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_setup_event_dev(struct test_app *app)
+{
+	int rc;
+	int i;
+
+	rc = test_app_create_vdev(app);
+	if (rc < 0)
+		return rc;
+
+	struct rte_event_dev_config config = {
+		.nb_event_queues = NUM_QUEUES,
+		.nb_event_ports = NUM_PORTS,
+		.nb_events_limit = MAX_EVENTS,
+		.nb_event_queue_flows = 64,
+		.nb_event_port_dequeue_depth = DEQUEUE_BURST_SIZE,
+		.nb_event_port_enqueue_depth = ENQUEUE_BURST_SIZE
+	};
+
+	rc = rte_event_dev_configure(app->event_dev_id, &config);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to configure event device");
+
+	struct rte_event_queue_conf queue_config = {
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
+		.nb_atomic_flows = 64
+	};
+
+	for (i = 0; i < NUM_QUEUES; i++) {
+		uint8_t queue_id = i;
+
+		rc = rte_event_queue_setup(app->event_dev_id, queue_id,
+			&queue_config);
+
+		TEST_ASSERT_SUCCESS(rc, "Unable to setup queue %d", queue_id);
+	}
+
+	struct rte_event_port_conf port_config = {
+		.new_event_threshold = NEW_EVENT_THRESHOLD,
+		.dequeue_depth = DEQUEUE_BURST_SIZE,
+		.enqueue_depth = ENQUEUE_BURST_SIZE
+	};
+
+	for (i = 0; i < NUM_PORTS; i++) {
+		uint8_t event_port_id = i;
+
+		rc = rte_event_port_setup(app->event_dev_id, event_port_id,
+			&port_config);
+		TEST_ASSERT_SUCCESS(rc, "Failed to create event port %d",
+			event_port_id);
+
+		if (event_port_id == DRIVER_PORT_ID)
+			continue;
+
+		rc = rte_event_port_link(app->event_dev_id, event_port_id,
+			NULL, NULL, 0);
+
+		TEST_ASSERT_EQUAL(rc, NUM_QUEUES, "Failed to link port %d",
+			event_port_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_teardown_event_dev(struct test_app *app)
+{
+	return test_app_destroy_vdev(app);
+}
+
+static int
+test_app_start_event_dev(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_event_dev_start(app->event_dev_id);
+	TEST_ASSERT_SUCCESS(rc, "Unable to start event device");
+
+	return TEST_SUCCESS;
+}
+
+static void
+test_app_stop_event_dev(struct test_app *app)
+{
+	rte_event_dev_stop(app->event_dev_id);
+}
+
+static int
+test_app_create_dispatcher(struct test_app *app)
+{
+	int rc;
+
+	app->dispatcher = rte_dispatcher_create(app->event_dev_id);
+
+	TEST_ASSERT(app->dispatcher != NULL, "Unable to create event "
+		"dispatcher");
+
+	app->dispatcher_service_id =
+		rte_dispatcher_service_id_get(app->dispatcher);
+
+	rc = rte_service_set_stats_enable(app->dispatcher_service_id, 1);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to enable event dispatcher service "
+		"stats");
+
+	rc = rte_service_runstate_set(app->dispatcher_service_id, 1);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to set dispatcher service runstate");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_free_dispatcher(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_service_runstate_set(app->dispatcher_service_id, 0);
+	TEST_ASSERT_SUCCESS(rc, "Error disabling dispatcher service");
+
+	rc = rte_dispatcher_free(app->dispatcher);
+	TEST_ASSERT_SUCCESS(rc, "Error freeing dispatcher");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_bind_ports(struct test_app *app)
+{
+	int i;
+
+	app->never_process_count.expected_event_dev_id =
+		app->event_dev_id;
+	app->finalize_count.expected_event_dev_id =
+		app->event_dev_id;
+
+	for (i = 0; i < NUM_WORKERS; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		uint8_t port_id = WORKER_PORT_ID(i);
+
+		int rc = rte_dispatcher_bind_port_to_lcore(
+			app->dispatcher, port_id, DEQUEUE_BURST_SIZE, 0,
+			lcore_id
+		);
+
+		TEST_ASSERT_SUCCESS(rc, "Unable to bind event device port %d "
+			"to lcore %d", port_id, lcore_id);
+
+		app->never_process_count.expected_event_port_id[lcore_id] =
+			port_id;
+		app->finalize_count.expected_event_port_id[lcore_id] = port_id;
+	}
+
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_unbind_ports(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_WORKERS; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+
+		int rc = rte_dispatcher_unbind_port_from_lcore(
+			app->dispatcher,
+			WORKER_PORT_ID(i),
+			lcore_id
+		);
+
+		TEST_ASSERT_SUCCESS(rc, "Unable to unbind event device port %d "
+			"from lcore %d", WORKER_PORT_ID(i),
+			lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static bool
+match_queue(const struct rte_event *event, void *cb_data)
+{
+	uintptr_t queue_id = (uintptr_t)cb_data;
+
+	return event->queue_id == queue_id;
+}
+
+static int
+test_app_get_worker_index(struct test_app *app, unsigned int lcore_id)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++)
+		if (app->service_lcores[i] == lcore_id)
+			return i;
+
+	return -1;
+}
+
+static int
+test_app_get_worker_port(struct test_app *app, unsigned int lcore_id)
+{
+	int worker;
+
+	worker = test_app_get_worker_index(app, lcore_id);
+
+	if (worker < 0)
+		return -1;
+
+	return WORKER_PORT_ID(worker);
+}
+
+static void
+test_app_queue_note_error(struct test_app *app)
+{
+	rte_atomic_fetch_add_explicit(&app->errors, 1, rte_memory_order_relaxed);
+}
+
+static void
+test_app_process_queue(uint8_t p_event_dev_id, uint8_t p_event_port_id,
+	struct rte_event *in_events, uint16_t num,
+	void *cb_data)
+{
+	struct app_queue *app_queue = cb_data;
+	struct test_app *app = container_of(app_queue, struct test_app,
+		queues[app_queue->queue_id]);
+	unsigned int lcore_id = rte_lcore_id();
+	bool intermediate_queue = app_queue->queue_id != LAST_QUEUE_ID;
+	int event_port_id;
+	uint16_t i;
+	struct rte_event out_events[num];
+
+	event_port_id = test_app_get_worker_port(app, lcore_id);
+
+	if (event_port_id < 0 || p_event_dev_id != app->event_dev_id ||
+		p_event_port_id != event_port_id) {
+		test_app_queue_note_error(app);
+		return;
+	}
+
+	for (i = 0; i < num; i++) {
+		const struct rte_event *in_event = &in_events[i];
+		struct rte_event *out_event = &out_events[i];
+		uint64_t sn = in_event->u64;
+		uint64_t expected_sn;
+
+		if (in_event->queue_id != app_queue->queue_id) {
+			test_app_queue_note_error(app);
+			return;
+		}
+
+		expected_sn = app_queue->sn[in_event->flow_id]++;
+
+		if (expected_sn != sn) {
+			test_app_queue_note_error(app);
+			return;
+		}
+
+		if (intermediate_queue)
+			*out_event = (struct rte_event) {
+				.queue_id = in_event->queue_id + 1,
+				.flow_id = in_event->flow_id,
+				.sched_type = RTE_SCHED_TYPE_ATOMIC,
+				.op = RTE_EVENT_OP_FORWARD,
+				.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+				.u64 = sn
+			};
+	}
+
+	if (intermediate_queue) {
+		uint16_t n = 0;
+
+		do {
+			n += rte_event_enqueue_forward_burst(p_event_dev_id,
+				p_event_port_id,
+				out_events + n,
+				num - n);
+		} while (n != num);
+	} else
+		rte_atomic_fetch_add_explicit(&app->completed_events, num,
+			rte_memory_order_relaxed);
+}
+
+static bool
+never_match(const struct rte_event *event __rte_unused, void *cb_data)
+{
+	uint64_t *count = cb_data;
+
+	(*count)++;
+
+	return false;
+}
+
+static void
+test_app_never_process(uint8_t event_dev_id, uint8_t event_port_id,
+	struct rte_event *in_events __rte_unused, uint16_t num, void *cb_data)
+{
+	struct cb_count *count = cb_data;
+	unsigned int lcore_id = rte_lcore_id();
+
+	if (event_dev_id == count->expected_event_dev_id &&
+		event_port_id == count->expected_event_port_id[lcore_id])
+		rte_atomic_fetch_add_explicit(&count->count, num,
+			rte_memory_order_relaxed);
+}
+
+static void
+finalize(uint8_t event_dev_id, uint8_t event_port_id, void *cb_data)
+{
+	struct cb_count *count = cb_data;
+	unsigned int lcore_id = rte_lcore_id();
+
+	if (event_dev_id == count->expected_event_dev_id &&
+		event_port_id == count->expected_event_port_id[lcore_id])
+		rte_atomic_fetch_add_explicit(&count->count, 1,
+			rte_memory_order_relaxed);
+}
+
+static int
+test_app_register_callbacks(struct test_app *app)
+{
+	int i;
+
+	app->never_match_reg_id =
+		rte_dispatcher_register(app->dispatcher, never_match,
+			&app->never_match_count,
+			test_app_never_process,
+			&app->never_process_count);
+
+	TEST_ASSERT(app->never_match_reg_id >= 0, "Unable to register "
+		"never-match handler");
+
+	for (i = 0; i < NUM_QUEUES; i++) {
+		struct app_queue *app_queue = &app->queues[i];
+		uintptr_t queue_id = app_queue->queue_id;
+		int reg_id;
+
+		reg_id = rte_dispatcher_register(app->dispatcher,
+			match_queue, (void *)queue_id,
+			test_app_process_queue,
+			app_queue);
+
+		TEST_ASSERT(reg_id >= 0, "Unable to register consumer "
+			"callback for queue %d", i);
+
+		app_queue->dispatcher_reg_id = reg_id;
+	}
+
+	app->finalize_reg_id =
+		rte_dispatcher_finalize_register(app->dispatcher,
+			finalize,
+			&app->finalize_count);
+	TEST_ASSERT(app->finalize_reg_id >= 0, "Error registering "
+		"finalize callback");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_unregister_callback(struct test_app *app, uint8_t queue_id)
+{
+	int reg_id = app->queues[queue_id].dispatcher_reg_id;
+	int rc;
+
+	if (reg_id < 0) /* unregistered already */
+		return 0;
+
+	rc = rte_dispatcher_unregister(app->dispatcher, reg_id);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to unregister consumer "
+		"callback for queue %d", queue_id);
+
+	app->queues[queue_id].dispatcher_reg_id = -1;
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_unregister_callbacks(struct test_app *app)
+{
+	int i;
+	int rc;
+
+	if (app->never_match_reg_id >= 0) {
+		rc = rte_dispatcher_unregister(app->dispatcher,
+			app->never_match_reg_id);
+
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister never-match "
+			"handler");
+		app->never_match_reg_id = -1;
+	}
+
+	for (i = 0; i < NUM_QUEUES; i++) {
+		rc = test_app_unregister_callback(app, i);
+		if (rc != TEST_SUCCESS)
+			return rc;
+	}
+
+	if (app->finalize_reg_id >= 0) {
+		rc = rte_dispatcher_finalize_unregister(
+			app->dispatcher, app->finalize_reg_id
+		);
+		app->finalize_reg_id = -1;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_start_dispatcher(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_dispatcher_start(app->dispatcher);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to start the event dispatcher");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_stop_dispatcher(struct test_app *app)
+{
+	int rc;
+
+	rc = rte_dispatcher_stop(app->dispatcher);
+
+	TEST_ASSERT_SUCCESS(rc, "Unable to stop the event dispatcher");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_reset_dispatcher_stats(struct test_app *app)
+{
+	struct rte_dispatcher_stats stats;
+
+	rte_dispatcher_stats_reset(app->dispatcher);
+
+	memset(&stats, 0xff, sizeof(stats));
+
+	rte_dispatcher_stats_get(app->dispatcher, &stats);
+
+	TEST_ASSERT_EQUAL(stats.poll_count, 0, "Poll count not zero");
+	TEST_ASSERT_EQUAL(stats.ev_batch_count, 0, "Batch count not zero");
+	TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 0, "Dispatch count "
+		"not zero");
+	TEST_ASSERT_EQUAL(stats.ev_drop_count, 0, "Drop count not zero");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_setup_service_core(struct test_app *app, unsigned int lcore_id)
+{
+	int rc;
+
+	rc = rte_service_lcore_add(lcore_id);
+	TEST_ASSERT_SUCCESS(rc, "Unable to make lcore %d an event dispatcher "
+		"service core", lcore_id);
+
+	rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 1);
+	TEST_ASSERT_SUCCESS(rc, "Unable to map event dispatcher service");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_setup_service_cores(struct test_app *app)
+{
+	int i;
+	int lcore_id = -1;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		lcore_id = rte_get_next_lcore(lcore_id, 1, 0);
+
+		TEST_ASSERT(lcore_id != RTE_MAX_LCORE,
+			"Too few lcores. Needs at least %d worker lcores",
+			NUM_SERVICE_CORES);
+
+		app->service_lcores[i] = lcore_id;
+	}
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		int rc;
+
+		rc = test_app_setup_service_core(app, app->service_lcores[i]);
+		if (rc != TEST_SUCCESS)
+			return rc;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_teardown_service_core(struct test_app *app, unsigned int lcore_id)
+{
+	int rc;
+
+	rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 0);
+	TEST_ASSERT_SUCCESS(rc, "Unable to unmap event dispatcher service");
+
+	rc = rte_service_lcore_del(lcore_id);
+	TEST_ASSERT_SUCCESS(rc, "Unable to change role of service lcore %d",
+		lcore_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_teardown_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = test_app_teardown_service_core(app, lcore_id);
+		if (rc != TEST_SUCCESS)
+			return rc;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_start_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = rte_service_lcore_start(lcore_id);
+		TEST_ASSERT_SUCCESS(rc, "Unable to start service lcore %d",
+			lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_stop_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = rte_service_lcore_stop(lcore_id);
+		TEST_ASSERT_SUCCESS(rc, "Unable to stop service lcore %d",
+			lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_start(struct test_app *app)
+{
+	int rc;
+
+	rc = test_app_start_event_dev(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_start_service_cores(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_start_dispatcher(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	app->running = true;
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_stop(struct test_app *app)
+{
+	int rc;
+
+	rc = test_app_stop_dispatcher(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_stop_service_cores(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	test_app_stop_event_dev(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	app->running = false;
+
+	return TEST_SUCCESS;
+}
+
+struct test_app *test_app;
+
+static int
+test_setup(void)
+{
+	int rc;
+
+	test_app = test_app_create();
+	TEST_ASSERT(test_app != NULL, "Unable to allocate memory");
+
+	rc = test_app_setup_event_dev(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_create_dispatcher(test_app);
+
+	rc = test_app_setup_service_cores(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_register_callbacks(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_bind_ports(test_app);
+
+	return rc;
+}
+
+static void test_teardown(void)
+{
+	if (test_app->running)
+		test_app_stop(test_app);
+
+	test_app_teardown_service_cores(test_app);
+
+	test_app_unregister_callbacks(test_app);
+
+	test_app_unbind_ports(test_app);
+
+	test_app_free_dispatcher(test_app);
+
+	test_app_teardown_event_dev(test_app);
+
+	test_app_free(test_app);
+
+	test_app = NULL;
+}
+
+static int
+test_app_get_completed_events(struct test_app *app)
+{
+	return rte_atomic_load_explicit(&app->completed_events,
+		rte_memory_order_relaxed);
+}
+
+static int
+test_app_get_errors(struct test_app *app)
+{
+	return rte_atomic_load_explicit(&app->errors, rte_memory_order_relaxed);
+}
+
+static int
+test_basic(void)
+{
+	int rc;
+	int i;
+
+	rc = test_app_start(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	uint64_t sns[NUM_FLOWS] = { 0 };
+
+	for (i = 0; i < NUM_EVENTS;) {
+		struct rte_event events[ENQUEUE_BURST_SIZE];
+		int left;
+		int batch_size;
+		int j;
+		uint16_t n = 0;
+
+		batch_size = 1 + rte_rand_max(ENQUEUE_BURST_SIZE);
+		left = NUM_EVENTS - i;
+
+		batch_size = RTE_MIN(left, batch_size);
+
+		for (j = 0; j < batch_size; j++) {
+			struct rte_event *event = &events[j];
+			uint64_t sn;
+			uint32_t flow_id;
+
+			flow_id = rte_rand_max(NUM_FLOWS);
+
+			sn = sns[flow_id]++;
+
+			*event = (struct rte_event) {
+				.queue_id = 0,
+				.flow_id = flow_id,
+				.sched_type = RTE_SCHED_TYPE_ATOMIC,
+				.op = RTE_EVENT_OP_NEW,
+				.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+				.u64 = sn
+			};
+		}
+
+		while (n < batch_size)
+			n += rte_event_enqueue_new_burst(test_app->event_dev_id,
+				DRIVER_PORT_ID,
+				events + n,
+				batch_size - n);
+
+		i += batch_size;
+	}
+
+	while (test_app_get_completed_events(test_app) != NUM_EVENTS)
+		rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0);
+
+	rc = test_app_get_errors(test_app);
+	TEST_ASSERT(rc == 0, "%d errors occurred", rc);
+
+	rc = test_app_stop(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	struct rte_dispatcher_stats stats;
+	rte_dispatcher_stats_get(test_app->dispatcher, &stats);
+
+	TEST_ASSERT_EQUAL(stats.ev_drop_count, 0, "Drop count is not zero");
+	TEST_ASSERT_EQUAL(stats.ev_dispatch_count, NUM_EVENTS * NUM_QUEUES,
+		"Invalid dispatch count");
+	TEST_ASSERT(stats.poll_count > 0, "Poll count is zero");
+
+	TEST_ASSERT_EQUAL(test_app->never_process_count.count, 0,
+		"Never-match handler's process function has "
+		"been called");
+
+	int finalize_count =
+		rte_atomic_load_explicit(&test_app->finalize_count.count,
+			rte_memory_order_relaxed);
+
+	TEST_ASSERT(finalize_count > 0, "Finalize count is zero");
+	TEST_ASSERT(finalize_count <= (int)stats.ev_dispatch_count,
+		"Finalize count larger than event count");
+
+	TEST_ASSERT_EQUAL(finalize_count, (int)stats.ev_batch_count,
+		"%"PRIu64" batches dequeued, but finalize called %d "
+		"times", stats.ev_batch_count, finalize_count);
+
+	/*
+	 * The event dispatcher should call often-matching match functions
+	 * more often, and thus this never-matching match function should
+	 * be called relatively infrequently.
+	 */
+	TEST_ASSERT(test_app->never_match_count <
+		(stats.ev_dispatch_count / 4),
+		"Never-matching match function called suspiciously often");
+
+	rc = test_app_reset_dispatcher_stats(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_drop(void)
+{
+	int rc;
+	uint8_t unhandled_queue;
+	struct rte_dispatcher_stats stats;
+
+	unhandled_queue = (uint8_t)rte_rand_max(NUM_QUEUES);
+
+	rc = test_app_start(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_unregister_callback(test_app, unhandled_queue);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	struct rte_event event = {
+		.queue_id = unhandled_queue,
+		.flow_id = 0,
+		.sched_type = RTE_SCHED_TYPE_ATOMIC,
+		.op = RTE_EVENT_OP_NEW,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+		.u64 = 0
+	};
+
+	do {
+		rc = rte_event_enqueue_burst(test_app->event_dev_id,
+			DRIVER_PORT_ID, &event, 1);
+	} while (rc == 0);
+
+	do {
+		rte_dispatcher_stats_get(test_app->dispatcher, &stats);
+
+		rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0);
+	} while (stats.ev_drop_count == 0 && stats.ev_dispatch_count == 0);
+
+	rc = test_app_stop(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	TEST_ASSERT_EQUAL(stats.ev_drop_count, 1, "Drop count is not one");
+	TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 0,
+		"Dispatch count is not zero");
+	TEST_ASSERT(stats.poll_count > 0, "Poll count is zero");
+
+	return TEST_SUCCESS;
+}
+
+#define MORE_THAN_MAX_HANDLERS 1000
+#define MIN_HANDLERS 32
+
+static int
+test_many_handler_registrations(void)
+{
+	int rc;
+	int num_regs = 0;
+	int reg_ids[MORE_THAN_MAX_HANDLERS];
+	int reg_id;
+	int i;
+
+	rc = test_app_unregister_callbacks(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	for (i = 0; i < MORE_THAN_MAX_HANDLERS; i++) {
+		reg_id = rte_dispatcher_register(test_app->dispatcher,
+			never_match, NULL,
+			test_app_never_process, NULL);
+		if (reg_id < 0)
+			break;
+
+		reg_ids[num_regs++] = reg_id;
+	}
+
+	TEST_ASSERT_EQUAL(reg_id, -ENOMEM, "Incorrect return code. Expected "
+		"%d but was %d", -ENOMEM, reg_id);
+	TEST_ASSERT(num_regs >= MIN_HANDLERS, "Registration failed already "
+		"after %d handler registrations.", num_regs);
+
+	for (i = 0; i < num_regs; i++) {
+		rc = rte_dispatcher_unregister(test_app->dispatcher,
+			reg_ids[i]);
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister handler %d",
+			reg_ids[i]);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+dummy_finalize(uint8_t event_dev_id __rte_unused,
+	uint8_t event_port_id __rte_unused,
+	void *cb_data __rte_unused)
+{
+}
+
+#define MORE_THAN_MAX_FINALIZERS 1000
+#define MIN_FINALIZERS 16
+
+static int
+test_many_finalize_registrations(void)
+{
+	int rc;
+	int num_regs = 0;
+	int reg_ids[MORE_THAN_MAX_FINALIZERS];
+	int reg_id;
+	int i;
+
+	rc = test_app_unregister_callbacks(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	for (i = 0; i < MORE_THAN_MAX_FINALIZERS; i++) {
+		reg_id = rte_dispatcher_finalize_register(
+			test_app->dispatcher, dummy_finalize, NULL
+		);
+
+		if (reg_id < 0)
+			break;
+
+		reg_ids[num_regs++] = reg_id;
+	}
+
+	TEST_ASSERT_EQUAL(reg_id, -ENOMEM, "Incorrect return code. Expected "
+		"%d but was %d", -ENOMEM, reg_id);
+	TEST_ASSERT(num_regs >= MIN_FINALIZERS, "Finalize registration failed "
+		"already after %d registrations.", num_regs);
+
+	for (i = 0; i < num_regs; i++) {
+		rc = rte_dispatcher_finalize_unregister(
+			test_app->dispatcher, reg_ids[i]
+		);
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister finalizer %d",
+			reg_ids[i]);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite test_suite = {
+	.suite_name = "Event dispatcher test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(test_setup, test_teardown, test_basic),
+		TEST_CASE_ST(test_setup, test_teardown, test_drop),
+		TEST_CASE_ST(test_setup, test_teardown,
+			test_many_handler_registrations),
+		TEST_CASE_ST(test_setup, test_teardown,
+			test_many_finalize_registrations),
+		TEST_CASES_END()
+	}
+};
+
+static int
+test_dispatcher(void)
+{
+	return unit_test_suite_runner(&test_suite);
+}
+
+REGISTER_FAST_TEST(dispatcher_autotest, false, true, test_dispatcher);
-- 
2.34.1