From mboxrd@z Thu Jan 1 00:00:00 1970
From: Luka Jankovic <luka.jankovic@ericsson.com>
To: dev@dpdk.org
Subject: [RFC v5 1/2] eventdev: add atomic queue to test-eventdev app
Date: Wed, 15 Jan 2025 14:38:43 +0100
Message-ID: <20250115133844.1403623-1-luka.jankovic@ericsson.com>
X-Mailer: git-send-email 2.36.0
In-Reply-To: <20250113121733.2384990-1-luka.jankovic@ericsson.com>
References: <20250113121733.2384990-1-luka.jankovic@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

Add an atomic queue test based on the order queue test that exclusively
uses atomic queues. This makes it compatible with event devices such as
the distributed software eventdev.

The test detects if port maintenance is required.

To verify atomicity, a spinlock is set up for each combination of queue
and flow. It is taken whenever an event is dequeued for processing and
released when processing is finished. The test will fail if a port
attempts to take a lock which is already taken.

Signed-off-by: Luka Jankovic <luka.jankovic@ericsson.com>
---
v5:
 * Updated documentation for dpdk-test-eventdev.
v4:
 * Fix code style issues.
 * Remove unused imports.
v3:
 * Use struct to avoid bit operations when accessing event u64.
 * Changed __rte_always_inline to inline for processing stages.
 * Introduce idle timeout constant.
 * Formatting and cleanup.
v2:
 * Changed to only check queue, flow combination, not port, queue, flow.
 * Lock is only held when a packet is processed.
 * Utilize event u64 instead of mbuf.
 * General cleanup.
---
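
Note for reviewers: the test is selected with --test=atomic_queue and
reuses the standard test-eventdev options. An illustrative invocation
against the distributed software eventdev (the core layout, flow count
and packet count below are arbitrary example values, not part of this
patch):

  ./dpdk-test-eventdev --vdev=event_dsw0 -l 0-3 -- \
          --test=atomic_queue --plcores=1 --wlcores=2,3 \
          --nb_flows=64 --nb_pkts=10000000
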
 app/test-eventdev/evt_common.h        |   9 +
 app/test-eventdev/meson.build         |   1 +
 app/test-eventdev/test_atomic_queue.c | 412 ++++++++++++++++++++++++++
 3 files changed, 422 insertions(+)
 create mode 100644 app/test-eventdev/test_atomic_queue.c

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 63b782f11a..74f9d187f3 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -138,6 +138,15 @@ evt_has_flow_id(uint8_t dev_id)
 			true : false;
 }
 
+static inline bool
+evt_is_maintenance_free(uint8_t dev_id)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(dev_id, &dev_info);
+	return dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
+}
+
 static inline int
 evt_service_setup(uint32_t service_id)
 {

diff --git a/app/test-eventdev/meson.build b/app/test-eventdev/meson.build
index ab8769c755..db5add39eb 100644
--- a/app/test-eventdev/meson.build
+++ b/app/test-eventdev/meson.build
@@ -15,6 +15,7 @@ sources = files(
         'test_order_atq.c',
         'test_order_common.c',
         'test_order_queue.c',
+        'test_atomic_queue.c',
         'test_perf_atq.c',
         'test_perf_common.c',
         'test_perf_queue.c',

diff --git a/app/test-eventdev/test_atomic_queue.c b/app/test-eventdev/test_atomic_queue.c
new file mode 100644
index 0000000000..4059a28a43
--- /dev/null
+++ b/app/test-eventdev/test_atomic_queue.c
@@ -0,0 +1,412 @@
+#include <stdio.h>
+#include <rte_spinlock.h>
+
+#include "test_order_common.h"
+
+#define IDLE_TIMEOUT 1
+#define NB_QUEUES 2
+
+static rte_spinlock_t *atomic_locks;
+
+struct event_data {
+	union {
+		struct {
+			uint32_t flow;
+			uint32_t seq;
+		};
+		uint64_t raw;
+	};
+};
+
+static inline uint64_t
+event_data_create(flow_id_t flow, uint32_t seq)
+{
+	struct event_data data = {.flow = flow, .seq = seq};
+	return data.raw;
+}
+
+static inline uint32_t
+event_data_get_seq(struct rte_event *const ev)
+{
+	struct event_data data = {.raw = ev->u64};
+	return data.seq;
+}
+
+static inline uint32_t
+event_data_get_flow(struct rte_event *const ev)
+{
+	struct event_data data = {.raw = ev->u64};
+	return data.flow;
+}
+
+static inline uint32_t
+get_lock_idx(int queue, flow_id_t flow, uint32_t nb_flows)
+{
+	return (queue * nb_flows) + flow;
+}
+
+static inline bool
+atomic_spinlock_trylock(uint32_t queue, uint32_t flow, uint32_t nb_flows)
+{
+	return rte_spinlock_trylock(&atomic_locks[get_lock_idx(queue, flow, nb_flows)]);
+}
+
+static inline void
+atomic_spinlock_unlock(uint32_t queue, uint32_t flow, uint32_t nb_flows)
+{
+	rte_spinlock_unlock(&atomic_locks[get_lock_idx(queue, flow, nb_flows)]);
+}
+
+static inline bool
+test_done(struct test_order *const t)
+{
+	return t->err || t->result == EVT_TEST_SUCCESS;
+}
+
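+/*
+ * Producer: enqueue nb_pkts events to queue 0, each carrying a flow id and
+ * a per-flow sequence number packed into the event's 64-bit user data.
+ * For devices which are not maintenance-free, keep maintaining the
+ * producer port until the test terminates.
+ */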
+static inline int
+atomic_producer(void *arg)
+{
+	struct prod_data *p = arg;
+	struct test_order *t = p->t;
+	struct evt_options *opt = t->opt;
+	const uint8_t dev_id = p->dev_id;
+	const uint8_t port = p->port_id;
+	const uint64_t nb_pkts = t->nb_pkts;
+	uint32_t *producer_flow_seq = t->producer_flow_seq;
+	const uint32_t nb_flows = t->nb_flows;
+	uint64_t count = 0;
+	struct rte_event ev;
+
+	if (opt->verbose_level > 1)
+		printf("%s(): lcore %d dev_id %d port=%d queue=%d\n",
+				__func__, rte_lcore_id(), dev_id, port, p->queue_id);
+
+	ev = (struct rte_event) {
+		.op = RTE_EVENT_OP_NEW,
+		.queue_id = p->queue_id,
+		.sched_type = RTE_SCHED_TYPE_ATOMIC,
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+		.event_type = RTE_EVENT_TYPE_CPU
+	};
+
+	while (count < nb_pkts && t->err == false) {
+		const flow_id_t flow = rte_rand_max(nb_flows);
+
+		/* Maintain seq number per flow */
+		ev.u64 = event_data_create(flow, producer_flow_seq[flow]++);
+		ev.flow_id = flow;
+
+		while (rte_event_enqueue_burst(dev_id, port, &ev, 1) != 1) {
+			if (t->err)
+				break;
+			rte_pause();
+		}
+
+		count++;
+	}
+
+	if (!evt_is_maintenance_free(dev_id)) {
+		while (!test_done(t)) {
+			rte_event_maintain(dev_id, port, RTE_EVENT_DEV_MAINT_OP_FLUSH);
+			rte_pause();
+		}
+	}
+
+	return 0;
+}
+
+static inline void
+atomic_lock_verify(struct test_order *const t,
+		uint32_t flow,
+		uint32_t nb_flows,
+		uint32_t port,
+		uint32_t queue_id)
+{
+	if (!atomic_spinlock_trylock(queue_id, flow, nb_flows)) {
+		evt_err("q=%u, flow=%x atomicity error: port %u tried to take locked spinlock",
+				queue_id, flow, port);
+		t->err = true;
+	}
+}
+
+static inline void
+atomic_process_stage_0(struct test_order *const t,
+		struct rte_event *const ev,
+		uint32_t nb_flows,
+		uint32_t port)
+{
+	const uint32_t flow = event_data_get_flow(ev);
+
+	atomic_lock_verify(t, flow, nb_flows, port, 0);
+
+	ev->queue_id = 1;
+	ev->op = RTE_EVENT_OP_FORWARD;
+	ev->sched_type = RTE_SCHED_TYPE_ATOMIC;
+	ev->event_type = RTE_EVENT_TYPE_CPU;
+
+	atomic_spinlock_unlock(0, flow, nb_flows);
+}
+
+static inline void
+atomic_process_stage_1(struct test_order *const t,
+		struct rte_event *const ev,
+		uint32_t nb_flows,
+		uint32_t *const expected_flow_seq,
+		RTE_ATOMIC(uint64_t) *const outstand_pkts,
+		uint32_t port)
+{
+	const uint32_t flow = event_data_get_flow(ev);
+
+	atomic_lock_verify(t, flow, nb_flows, port, 1);
+
+	/* compare the seqn against expected value */
+	uint32_t seq = event_data_get_seq(ev);
+	if (seq != expected_flow_seq[flow]) {
+		evt_err("flow=%x seqn mismatch got=%x expected=%x",
+				flow, seq, expected_flow_seq[flow]);
+		t->err = true;
+	}
+
+	expected_flow_seq[flow]++;
+
+	rte_atomic_fetch_sub_explicit(outstand_pkts, 1, rte_memory_order_relaxed);
+
+	ev->op = RTE_EVENT_OP_RELEASE;
+
+	atomic_spinlock_unlock(1, flow, nb_flows);
+}
+
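+/*
+ * Worker: dequeue events from both queues. Stage 0 (queue 0) forwards
+ * events to queue 1; stage 1 (queue 1) checks the per-flow sequence
+ * number and releases the event. Each stage try-locks the (queue, flow)
+ * spinlock; a failed trylock means another port is processing the same
+ * atomic flow concurrently, which fails the test.
+ */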
+static int
+atomic_queue_worker_burst(void *arg, bool flow_id_cap, uint32_t max_burst)
+{
+	ORDER_WORKER_INIT;
+	struct rte_event ev[BURST_SIZE];
+	uint16_t i;
+
+	while (t->err == false) {
+		uint16_t const nb_rx = rte_event_dequeue_burst(dev_id, port, ev, max_burst, 0);
+
+		if (nb_rx == 0) {
+			if (rte_atomic_load_explicit(outstand_pkts,
+					rte_memory_order_relaxed) <= 0) {
+				break;
+			}
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			if (!flow_id_cap) {
+				ev[i].flow_id = event_data_get_flow(&ev[i]);
+			}
+
+			switch (ev[i].queue_id) {
+			case 0:
+				atomic_process_stage_0(t, &ev[i], nb_flows, port);
+				break;
+			case 1:
+				atomic_process_stage_1(t, &ev[i], nb_flows, expected_flow_seq,
+						outstand_pkts, port);
+				break;
+			default:
+				order_process_stage_invalid(t, &ev[i]);
+				break;
+			}
+		}
+
+		uint16_t total_enq = 0;
+
+		do {
+			total_enq += rte_event_enqueue_burst(
+					dev_id, port, ev + total_enq, nb_rx - total_enq);
+		} while (total_enq < nb_rx);
+	}
+
+	return 0;
+}
+
+static int
+worker_wrapper(void *arg)
+{
+	struct worker_data *w = arg;
+	int max_burst = evt_has_burst_mode(w->dev_id) ? BURST_SIZE : 1;
+	const bool flow_id_cap = evt_has_flow_id(w->dev_id);
+
+	return atomic_queue_worker_burst(arg, flow_id_cap, max_burst);
+}
+
+static int
+atomic_queue_launch_lcores(struct evt_test *test, struct evt_options *opt)
+{
+	int ret, lcore_id;
+	struct test_order *t = evt_test_priv(test);
+
+	/* launch workers */
+	int wkr_idx = 0;
+	RTE_LCORE_FOREACH_WORKER(lcore_id) {
+		if (!(opt->wlcores[lcore_id]))
+			continue;
+
+		ret = rte_eal_remote_launch(worker_wrapper, &t->worker[wkr_idx], lcore_id);
+		if (ret) {
+			evt_err("failed to launch worker %d", lcore_id);
+			return ret;
+		}
+		wkr_idx++;
+	}
+
+	/* launch producer */
+	int plcore = evt_get_first_active_lcore(opt->plcores);
+
+	ret = rte_eal_remote_launch(atomic_producer, &t->prod, plcore);
+	if (ret) {
+		evt_err("failed to launch producer %d", plcore);
+		return ret;
+	}
+
+	uint64_t prev_time = rte_get_timer_cycles();
+	int64_t prev_outstanding_pkts = -1;
+
+	while (t->err == false) {
+		uint64_t current_time = rte_get_timer_cycles();
+		int64_t outstanding_pkts = rte_atomic_load_explicit(
+				&t->outstand_pkts, rte_memory_order_relaxed);
+
+		if (outstanding_pkts <= 0) {
+			t->result = EVT_TEST_SUCCESS;
+			break;
+		}
+
+		if (current_time - prev_time > rte_get_timer_hz() * IDLE_TIMEOUT) {
+			printf(CLGRN "\r%" PRId64 "" CLNRM, outstanding_pkts);
+			fflush(stdout);
+			if (prev_outstanding_pkts == outstanding_pkts) {
+				rte_event_dev_dump(opt->dev_id, stdout);
+				evt_err("No schedules for %d seconds, deadlock", IDLE_TIMEOUT);
+				t->err = true;
+				break;
+			}
+			prev_outstanding_pkts = outstanding_pkts;
+			prev_time = current_time;
+		}
+	}
+	printf("\r");
+
+	rte_free(atomic_locks);
+
+	return 0;
+}
+
+static int
+atomic_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
+{
+	int ret;
+
+	const uint8_t nb_workers = evt_nr_active_lcores(opt->wlcores);
+	/* number of active worker cores + 1 producer */
+	const uint8_t nb_ports = nb_workers + 1;
+
+	ret = evt_configure_eventdev(opt, NB_QUEUES, nb_ports);
+	if (ret) {
+		evt_err("failed to configure eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	/* q0 configuration */
+	struct rte_event_queue_conf q0_atomic_conf = {
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
+		.nb_atomic_flows = opt->nb_flows,
+		.nb_atomic_order_sequences = opt->nb_flows,
+	};
+	ret = rte_event_queue_setup(opt->dev_id, 0, &q0_atomic_conf);
+	if (ret) {
+		evt_err("failed to setup queue0 eventdev %d err %d", opt->dev_id, ret);
+		return ret;
+	}
+
+	/* q1 configuration */
+	struct rte_event_queue_conf q1_atomic_conf = {
+		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+		.schedule_type = RTE_SCHED_TYPE_ATOMIC,
+		.nb_atomic_flows = opt->nb_flows,
+		.nb_atomic_order_sequences = opt->nb_flows,
+	};
+	ret = rte_event_queue_setup(opt->dev_id, 1, &q1_atomic_conf);
+	if (ret) {
+		evt_err("failed to setup queue1 eventdev %d err %d", opt->dev_id, ret);
+		return ret;
+	}
+
+	/* setup one port per worker, linking to all queues */
+	ret = order_event_dev_port_setup(test, opt, nb_workers, NB_QUEUES);
+	if (ret)
+		return ret;
+
+	if (!evt_has_distributed_sched(opt->dev_id)) {
+		uint32_t service_id;
+
+		rte_event_dev_service_id_get(opt->dev_id, &service_id);
+		ret = evt_service_setup(service_id);
+		if (ret) {
+			evt_err("No service lcore found to run event dev.");
+			return ret;
+		}
+	}
+
+	ret = rte_event_dev_start(opt->dev_id);
+	if (ret) {
+		evt_err("failed to start eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	const uint32_t num_locks = NB_QUEUES * opt->nb_flows;
+
+	atomic_locks = rte_calloc(NULL, num_locks, sizeof(rte_spinlock_t), 0);
+	if (atomic_locks == NULL) {
+		evt_err("failed to allocate %u spinlocks", num_locks);
+		return -1;
+	}
+
+	for (uint32_t i = 0; i < num_locks; i++) {
+		rte_spinlock_init(&atomic_locks[i]);
+	}
+
+	return 0;
+}
+
+static void
+atomic_queue_opt_dump(struct evt_options *opt)
+{
+	order_opt_dump(opt);
+	evt_dump("nb_evdev_queues", "%d", NB_QUEUES);
+}
+
+static bool
+atomic_queue_capability_check(struct evt_options *opt)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(opt->dev_id, &dev_info);
+	if (dev_info.max_event_queues < NB_QUEUES ||
+			dev_info.max_event_ports < order_nb_event_ports(opt)) {
+		evt_err("not enough eventdev queues=%d/%d or ports=%d/%d", NB_QUEUES,
+				dev_info.max_event_queues, order_nb_event_ports(opt),
+				dev_info.max_event_ports);
+		return false;
+	}
+
+	return true;
+}
+
+static const struct evt_test_ops atomic_queue = {
+	.cap_check = atomic_queue_capability_check,
+	.opt_check = order_opt_check,
+	.opt_dump = atomic_queue_opt_dump,
+	.test_setup = order_test_setup,
+	.mempool_setup = order_mempool_setup,
+	.eventdev_setup = atomic_queue_eventdev_setup,
+	.launch_lcores = atomic_queue_launch_lcores,
+	.eventdev_destroy = order_eventdev_destroy,
+	.mempool_destroy = order_mempool_destroy,
+	.test_result = order_test_result,
+	.test_destroy = order_test_destroy,
+};
+
+EVT_TEST_REGISTER(atomic_queue);
-- 
2.34.1