From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com,
	gage.eads@intel.com, liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Mon, 8 Jan 2018 19:17:35 +0530
Message-Id: <20180108134742.30857-4-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180108134742.30857-1-pbhagavatula@caviumnetworks.com>
References: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com>
	<20180108134742.30857-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v4 04/11] event/octeontx: modify octeontx eventdev test
List-Id: DPDK patches and discussions

Modify test_eventdev_octeontx to be a standalone selftest, independent of
the test framework.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx/ssovf_evdev_selftest.c | 457 +++++++++++++-------------
 1 file changed, 234 insertions(+), 223 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c
index 8fddb4fd2..3866ba968 100644
--- a/drivers/event/octeontx/ssovf_evdev_selftest.c
+++ b/drivers/event/octeontx/ssovf_evdev_selftest.c
@@ -1,33 +1,5 @@
-/*-
- *   BSD LICENSE
- *
- *   Copyright(c) 2017 Cavium, Inc. All rights reserved.
- *
- *   Redistribution and use in source and binary forms, with or without
- *   modification, are permitted provided that the following conditions
- *   are met:
- *
- *     * Redistributions of source code must retain the above copyright
- *       notice, this list of conditions and the following disclaimer.
- *     * Redistributions in binary form must reproduce the above copyright
- *       notice, this list of conditions and the following disclaimer in
- *       the documentation and/or other materials provided with the
- *       distribution.
- *     * Neither the name of Cavium, Inc nor the names of its
- *       contributors may be used to endorse or promote products derived
- *       from this software without specific prior written permission.
- *
- *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Cavium, Inc
  */

 #include
@@ -46,12 +18,21 @@
 #include
 #include
 #include
+#include

-#include "test.h"
+#include "ssovf_evdev.h"

 #define NUM_PACKETS (1 << 18)
 #define MAX_EVENTS  (16 * 1024)

+#define OCTEONTX_TEST_RUN(setup, teardown, test) \
+	octeontx_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
 static int evdev;
 static struct rte_mempool *eventdev_test_mempool;

@@ -79,11 +60,11 @@ static inline int
 seqn_list_update(int val)
 {
 	if (seqn_list_index >= NUM_PACKETS)
-		return TEST_FAILED;
+		return -1;

 	seqn_list[seqn_list_index++] = val;
 	rte_smp_wmb();
-	return TEST_SUCCESS;
+	return 0;
 }

 static inline int
@@ -93,11 +74,11 @@ seqn_list_check(int limit)

 	for (i = 0; i < limit; i++) {
 		if (seqn_list[i] != i) {
-			printf("Seqn mismatch %d %d\n", seqn_list[i], i);
-			return TEST_FAILED;
+			ssovf_log_dbg("Seqn mismatch %d %d", seqn_list[i], i);
+			return -1;
 		}
 	}
-	return TEST_SUCCESS;
+	return 0;
 }

 struct test_core_param {
@@ -114,20 +95,21 @@ testsuite_setup(void)

 	evdev = rte_event_dev_get_dev_id(eventdev_name);
 	if (evdev < 0) {
-		printf("%d: Eventdev %s not found - creating.\n",
+		ssovf_log_dbg("%d: Eventdev %s not found - creating.",
 				__LINE__, eventdev_name);
 		if (rte_vdev_init(eventdev_name, NULL) < 0) {
-			printf("Error creating eventdev %s\n", eventdev_name);
-			return TEST_FAILED;
+			ssovf_log_dbg("Error creating eventdev %s",
+					eventdev_name);
+			return -1;
 		}
 		evdev = rte_event_dev_get_dev_id(eventdev_name);
 		if (evdev < 0) {
-			printf("Error finding newly created eventdev\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Error finding newly created eventdev");
+			return -1;
 		}
 	}

-	return TEST_SUCCESS;
+	return 0;
 }

 static void
@@ -177,31 +159,32 @@ _eventdev_setup(int mode)
 			512, /* Use very small mbufs */
 			rte_socket_id());
 	if (!eventdev_test_mempool) {
-		printf("ERROR creating mempool\n");
-		return TEST_FAILED;
+		ssovf_log_dbg("ERROR creating mempool");
+		return -1;
 	}

 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-	TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
-			"max_num_events=%d < max_events=%d",
-			info.max_num_events, MAX_EVENTS);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	RTE_TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
+			"ERROR max_num_events=%d < max_events=%d",
+			info.max_num_events, MAX_EVENTS);

 	devconf_set_default_sane_values(&dev_conf, &info);
 	if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
 		dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;

 	ret = rte_event_dev_configure(evdev, &dev_conf);
-	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");

 	if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
 		if (queue_count > 8) {
-			printf("test expects the unique priority per queue\n");
+			ssovf_log_dbg(
+				"test expects the unique priority per queue");
 			return -ENOTSUP;
 		}

@@ -216,35 +199,39 @@ _eventdev_setup(int mode)
 			ret = rte_event_queue_default_conf_get(evdev, i,
 					&queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+					i);
 			queue_conf.priority = i * step;
 			ret = rte_event_queue_setup(evdev, i, &queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
 		}

 	} else {
 		/* Configure event queues with default priority */
 		for (i = 0; i < (int)queue_count; i++) {
 			ret = rte_event_queue_setup(evdev, i, NULL);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
 		}
 	}
 	/* Configure event ports */
 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &port_count), "Port count get failed");
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_setup(evdev, i, NULL);
-		TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
 		ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", i);
+		RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+				i);
 	}

 	ret = rte_event_dev_start(evdev);
-	TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");

-	return TEST_SUCCESS;
+	return 0;
 }

 static inline int
@@ -311,7 +298,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
 		struct rte_event ev = {.event = 0, .u64 = 0};

 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");

 		m->seqn = i;
 		update_event_and_validation_attr(m, &ev, flow_id, event_type,
@@ -332,8 +319,8 @@ check_excess_events(uint8_t port)
 	for (i = 0; i < 32; i++) {
 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);

-		TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d",
-				ev.mbuf->seqn);
+		RTE_TEST_ASSERT_SUCCESS(valid_event,
+				"Unexpected valid event=%d", ev.mbuf->seqn);
 	}
 	return 0;
 }
@@ -346,12 +333,12 @@ generate_random_events(const unsigned int total_events)
 	int ret;

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
 	for (i = 0; i < total_events; i++) {
 		ret = inject_events(
 			rte_rand() % info.max_event_queue_flows /*flow_id */,
@@ -362,7 +349,7 @@ generate_random_events(const unsigned int total_events)
 			0 /* port */,
 			1 /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}
 	return ret;
 }
@@ -374,19 +361,19 @@ validate_event(struct rte_event *ev)
 	struct event_attr *attr;

 	attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
-	TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+	RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
			"flow_id mismatch enq=%d deq =%d",
			attr->flow_id, ev->flow_id);
-	TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+	RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
			"event_type mismatch enq=%d deq =%d",
			attr->event_type, ev->event_type);
-	TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+	RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
			"sub_event_type mismatch enq=%d deq =%d",
			attr->sub_event_type, ev->sub_event_type);
-	TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+	RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
			"sched_type mismatch enq=%d deq =%d",
			attr->sched_type, ev->sched_type);
-	TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+	RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
			"queue mismatch enq=%d deq =%d",
			attr->queue, ev->queue_id);

 	return 0;
@@ -405,8 +392,8 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
 	while (1) {
 		if (++forward_progress_cnt > UINT16_MAX) {
-			printf("Detected deadlock\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Detected deadlock");
+			return -1;
 		}

 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
@@ -416,11 +403,11 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
 		forward_progress_cnt = 0;
 		ret = validate_event(&ev);
 		if (ret)
-			return TEST_FAILED;
+			return -1;

 		if (fn != NULL) {
 			ret = fn(index, port, &ev);
-			TEST_ASSERT_SUCCESS(ret,
+			RTE_TEST_ASSERT_SUCCESS(ret,
 				"Failed to validate test specific event");
 		}
@@ -438,8 +425,8 @@ static int
 validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index,
-			ev->mbuf->seqn);
+	RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d",
+			index, ev->mbuf->seqn);
 	return 0;
 }
@@ -456,7 +443,7 @@ test_simple_enqdeq(uint8_t sched_type)
 			0 /* port */,
 			MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
 }
@@ -491,7 +478,7 @@ test_multi_queue_enq_single_port_deq(void)
 	ret = generate_random_events(MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return consume_events(0 /* port */, MAX_EVENTS, NULL);
 }
@@ -514,7 +501,7 @@ static int
 validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	uint32_t range = MAX_EVENTS / queue_count;
@@ -522,7 +509,7 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 	expected_val += ev->queue_id;

 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
+	RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
		"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
		ev->mbuf->seqn, index, expected_val, range,
		queue_count, MAX_EVENTS);
@@ -538,7 +525,7 @@ test_multi_queue_priority(void)
 	/* See validate_queue_priority() comments for priority validate logic */
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	max_evts_roundoff = MAX_EVENTS / queue_count;
@@ -548,7 +535,7 @@ test_multi_queue_priority(void)
 		struct rte_event ev = {.event = 0, .u64 = 0};

 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");

 		m->seqn = i;
 		queue = i % queue_count;
@@ -576,7 +563,7 @@ worker_multi_port_fn(void *arg)
 			continue;

 		ret = validate_event(&ev);
-		TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
 		rte_pktmbuf_free(ev.mbuf);
 		rte_atomic32_sub(total_events, 1);
 	}
@@ -587,27 +574,29 @@ static inline int
 wait_workers_to_join(int lcore, const rte_atomic32_t *count)
 {
 	uint64_t cycles, print_cycles;
+	RTE_SET_USED(count);

 	print_cycles = cycles = rte_get_timer_cycles();
 	while (rte_eal_get_lcore_state(lcore) != FINISHED) {
 		uint64_t new_cycles = rte_get_timer_cycles();

 		if (new_cycles - print_cycles > rte_get_timer_hz()) {
-			printf("\r%s: events %d\n", __func__,
+			ssovf_log_dbg("\r%s: events %d", __func__,
 				rte_atomic32_read(count));
 			print_cycles = new_cycles;
 		}
 		if (new_cycles - cycles > rte_get_timer_hz() * 10) {
-			printf("%s: No schedules for seconds, deadlock (%d)\n",
+			ssovf_log_dbg(
+				"%s: No schedules for seconds, deadlock (%d)",
 				__func__,
 				rte_atomic32_read(count));
 			rte_event_dev_dump(evdev, stdout);
 			cycles = new_cycles;
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	rte_eal_mp_wait_lcore();

-	return TEST_SUCCESS;
+	return 0;
 }

@@ -631,12 +620,12 @@ launch_workers_and_wait(int (*master_worker)(void *),

 	param = malloc(sizeof(struct test_core_param) * nb_workers);
 	if (!param)
-		return TEST_FAILED;
+		return -1;

 	ret = rte_event_dequeue_timeout_ticks(evdev,
		rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	param[0].total_events = &atomic_total_events;
 	param[0].sched_type = sched_type;
@@ -679,17 +668,17 @@ test_multi_queue_enq_multi_port_deq(void)

 	ret = generate_random_events(total_events);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
				nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	return launch_workers_and_wait(worker_multi_port_fn,
@@ -702,7 +691,7 @@ validate_queue_to_port_single_link(uint32_t index, uint8_t port,
		struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, ev->queue_id,
+	RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
				"queue mismatch enq=%d deq =%d",
				port, ev->queue_id);
 	return 0;
@@ -718,18 +707,19 @@ test_queue_to_port_single_link(void)
 	int i, nr_links, ret;

 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &port_count), "Port count get failed");

 	/* Unlink all connections that created in eventdev_setup */
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_unlink(evdev, i, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i);
+		RTE_TEST_ASSERT(ret >= 0,
+				"Failed to unlink all queues port=%d", i);
 	}

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");

@@ -741,7 +731,7 @@ test_queue_to_port_single_link(void)
 		uint8_t queue = (uint8_t)i;

 		ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+		RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);

 		ret = inject_events(
			0x100 /*flow_id */,
@@ -752,7 +742,7 @@ test_queue_to_port_single_link(void)
			i /* port */,
			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}

 	/* Verify the events generated from correct queue */
@@ -760,10 +750,10 @@ test_queue_to_port_single_link(void)
 		ret = consume_events(i /* port */, total_events,
				validate_queue_to_port_single_link);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}

-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -771,7 +761,7 @@ validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
		struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+	RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
				"queue mismatch enq=%d deq =%d",
				port, ev->queue_id);
 	return 0;
@@ -789,27 +779,27 @@ test_queue_to_port_multi_link(void)
 	uint32_t nr_queues = 0;
 	uint32_t nr_ports = 0;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &nr_queues), "Queue count get failed");
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &nr_queues), "Queue count get failed");
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");

 	if (nr_ports < 2) {
-		printf("%s: Not enough ports to test ports=%d\n",
+		ssovf_log_dbg("%s: Not enough ports to test ports=%d",
				__func__, nr_ports);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Unlink all connections that created in eventdev_setup */
 	for (port = 0; port < nr_ports; port++) {
 		ret = rte_event_port_unlink(evdev, port, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+		RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
					port);
 	}

@@ -819,7 +809,7 @@ test_queue_to_port_multi_link(void)
 	for (queue = 0; queue < nr_queues; queue++) {
 		port = queue & 0x1;
 		ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+		RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
					queue, port);

 		ret = inject_events(
@@ -831,7 +821,7 @@ test_queue_to_port_multi_link(void)
			port /* port */,
			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;

 		if (port == 0)
			port0_events += total_events;
@@ -842,13 +832,13 @@ test_queue_to_port_multi_link(void)
 	ret = consume_events(0 /* port */, port0_events,
			validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 	ret = consume_events(1 /* port */, port1_events,
			validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -878,17 +868,17 @@ worker_flow_based_pipeline(void *arg)
			ev.op = RTE_EVENT_OP_FORWARD;
			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.sub_event_type == 1) { /* Events from stage 1*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
				rte_pktmbuf_free(ev.mbuf);
				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.sub_event_type = %d\n",
+			ssovf_log_dbg("Invalid ev.sub_event_type = %d",
					ev.sub_event_type);
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	return 0;
@@ -902,15 +892,15 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
				nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -923,20 +913,20 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
			0 /* port */,
			total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	ret = launch_workers_and_wait(worker_flow_based_pipeline,
					worker_flow_based_pipeline,
					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
		/* Check the events order maintained or not */
		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }

@@ -1033,16 +1023,16 @@ worker_group_based_pipeline(void *arg)
			ev.op = RTE_EVENT_OP_FORWARD;
			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
				rte_pktmbuf_free(ev.mbuf);
				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.queue_id = %d\n", ev.queue_id);
-			return TEST_FAILED;
+			ssovf_log_dbg("Invalid ev.queue_id = %d", ev.queue_id);
+			return -1;
 		}
 	}

@@ -1058,21 +1048,21 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	if (queue_count < 2 || !nr_ports) {
-		printf("%s: Not enough queues=%d ports=%d or workers=%d\n",
+		ssovf_log_dbg("%s: Not enough queues=%d ports=%d or workers=%d",
			 __func__, queue_count, nr_ports,
			 rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -1085,20 +1075,20 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
			0 /* port */,
			total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	ret = launch_workers_and_wait(worker_group_based_pipeline,
					worker_group_based_pipeline,
					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
		/* Check the events order maintained or not */
		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -1201,15 +1191,15 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
				nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -1222,7 +1212,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
			0 /* port */,
			MAX_EVENTS /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
					0xff /* invalid */);
@@ -1244,7 +1234,7 @@ worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1286,7 +1276,7 @@ worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1357,14 +1347,14 @@ test_producer_consumer_ingress_order_test(int (*fn)(void *))
 {
 	uint32_t nr_ports;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
			    RTE_EVENT_DEV_ATTR_PORT_COUNT,
			    &nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		printf("### Not enough cores for %s test.\n", __func__);
-		return TEST_SUCCESS;
+		ssovf_log_dbg("### Not enough cores for %s test.", __func__);
+		return 0;
 	}

 	launch_workers_and_wait(worker_ordered_flow_producer, fn,
@@ -1389,86 +1379,107 @@ test_queue_producer_consumer_ingress_order_test(void)
			worker_group_based_pipeline);
 }

-static struct unit_test_suite eventdev_octeontx_testsuite = {
-	.suite_name = "eventdev octeontx unit test suite",
-	.setup = testsuite_setup,
-	.teardown = testsuite_teardown,
-	.unit_test_cases = {
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_single_port_deq),
-		TEST_CASE_ST(eventdev_setup_priority, eventdev_teardown,
-			test_multi_queue_priority),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_multi_port_deq),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_single_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_multi_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_mixed_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_flow_producer_consumer_ingress_order_test),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_producer_consumer_ingress_order_test),
-		/* Tests with dequeue timeout */
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASES_END() /**< NULL terminate unit test array */
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+		int (*test)(void), const char *name)
+{
+	if (setup() < 0) {
+		ssovf_log_selftest("Error setting up test %s", name);
+		unsupported++;
+	} else {
+		if (test() < 0) {
+			failed++;
+			ssovf_log_selftest("%s Failed", name);
+		} else {
+			passed++;
+			ssovf_log_selftest("%s Passed", name);
+		}
 	}
-};

-static int
+	total++;
+	tdown();
+}
+
+int
 test_eventdev_octeontx(void)
 {
-	return unit_test_suite_runner(&eventdev_octeontx_testsuite);
-}
+	testsuite_setup();
+
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_single_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_multi_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_single_link);
OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_to_port_multi_link); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_ordered_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_max_stages_random_sched_type); + 
OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_mixed_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_flow_producer_consumer_ingress_order_test); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_producer_consumer_ingress_order_test); + OCTEONTX_TEST_RUN(eventdev_setup_priority, eventdev_teardown, + test_multi_queue_priority); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_flow_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + + ssovf_log_selftest("Total tests : %d", total); + ssovf_log_selftest("Passed : %d", passed); + ssovf_log_selftest("Failed : %d", failed); + ssovf_log_selftest("Not supported : %d", unsupported); + + testsuite_teardown(); + + if (failed) + return -1; -REGISTER_TEST_COMMAND(eventdev_octeontx_autotest, test_eventdev_octeontx); + return 0; +} -- 2.15.1