From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com,
	harry.van.haaren@intel.com, gage.eads@intel.com, hemant.agrawal@nxp.com,
	nipun.gupta@nxp.com, liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Wed, 13 Dec 2017 00:57:08 +0530
Message-Id: <20171212192713.17620-2-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com>
References: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 2/7] event/octeontx: modify octeontx eventdev test
List-Id: DPDK patches and discussions
X-List-Received-Date: Tue, 12 Dec 2017 19:28:16 -0000

Modify test_eventdev_octeontx to be standalone selftest independent of
test framework.

Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx/selftest_octeontx.c | 449 +++++++++++++++++------------
 1 file changed, 258 insertions(+), 191 deletions(-)

diff --git a/drivers/event/octeontx/selftest_octeontx.c b/drivers/event/octeontx/selftest_octeontx.c
index 8fddb4fd2..2b666007d 100644
--- a/drivers/event/octeontx/selftest_octeontx.c
+++ b/drivers/event/octeontx/selftest_octeontx.c
@@ -47,11 +47,56 @@
 #include
 #include
-#include "test.h"
+#include "ssovf_evdev.h"
 
 #define NUM_PACKETS (1 << 18)
 #define MAX_EVENTS (16 * 1024)
 
+#define ASSERT(cond, msg, ...) do { \
+	if (!(cond)) { \
+		ssovf_log_dbg("Assert failed %s() line %d failed : " \
+				msg, __func__, __LINE__, \
+				##__VA_ARGS__); \
+		return -1; \
+	} \
+} while (0)
+
+#define ASSERT_EQUAL(a, b, msg, ...) do { \
+	if (!(a == b)) { \
+		ssovf_log_dbg("Assert failed %s() line %d failed : " \
+				msg, __func__, __LINE__, \
+				##__VA_ARGS__); \
+		return -1; \
+	} \
+} while (0)
+
+#define ASSERT_SUCCESS(val, msg, ...) do { \
+	typeof(val) _val = (val); \
+	if (!(_val == 0)) { \
+		ssovf_log_dbg("Assert failed %s() line %d failed (err %d): " \
+				msg, __func__, __LINE__, _val, \
+				##__VA_ARGS__); \
+		return -1; \
+	} \
+} while (0)
+
+#define ASSERT_NOT_NULL(val, msg, ...) do { \
+	if (!(val != NULL)) { \
+		ssovf_log_dbg("Assert failed %s() line %d failed : " \
+				msg, __func__, __LINE__, \
+				##__VA_ARGS__); \
+		return -1; \
+	} \
+} while (0)
+
+#define OCTEONTX_TEST_RUN(setup, teardown, test) \
+	octeontx_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
 static int evdev;
 static struct rte_mempool *eventdev_test_mempool;
 
@@ -79,11 +124,11 @@ static inline int
 seqn_list_update(int val)
 {
 	if (seqn_list_index >= NUM_PACKETS)
-		return TEST_FAILED;
+		return -1;
 
 	seqn_list[seqn_list_index++] = val;
 	rte_smp_wmb();
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static inline int
@@ -93,11 +138,11 @@ seqn_list_check(int limit)
 	for (i = 0; i < limit; i++) {
 		if (seqn_list[i] != i) {
-			printf("Seqn mismatch %d %d\n", seqn_list[i], i);
-			return TEST_FAILED;
+			ssovf_log_dbg("Seqn mismatch %d %d", seqn_list[i], i);
+			return -1;
 		}
 	}
-	return TEST_SUCCESS;
+	return 0;
 }
 
 struct test_core_param {
@@ -114,20 +159,21 @@ testsuite_setup(void)
 	evdev = rte_event_dev_get_dev_id(eventdev_name);
 	if (evdev < 0) {
-		printf("%d: Eventdev %s not found - creating.\n",
+		ssovf_log_dbg("%d: Eventdev %s not found - creating.",
 				__LINE__, eventdev_name);
 		if (rte_vdev_init(eventdev_name, NULL) < 0) {
-			printf("Error creating eventdev %s\n", eventdev_name);
-			return TEST_FAILED;
+			ssovf_log_dbg("Error creating eventdev %s",
+					eventdev_name);
+			return -1;
 		}
 		evdev = rte_event_dev_get_dev_id(eventdev_name);
 		if (evdev < 0) {
-			printf("Error finding newly created eventdev\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Error finding newly created eventdev");
+			return -1;
 		}
 	}
 
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static void
@@ -177,31 +223,34 @@ _eventdev_setup(int mode)
 			512, /* Use very small mbufs */
 			rte_socket_id());
 	if (!eventdev_test_mempool) {
-		printf("ERROR creating mempool\n");
-		return TEST_FAILED;
+		ssovf_log_dbg("ERROR creating mempool");
+		return -1;
 	}
 
 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-	TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
-			"max_num_events=%d < max_events=%d",
-			info.max_num_events, MAX_EVENTS);
+	ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	if (!(info.max_num_events >= (int32_t)MAX_EVENTS)) {
+		ssovf_log_dbg("ERROR max_num_events=%d < max_events=%d",
+				info.max_num_events, MAX_EVENTS);
+		return -1;
+	}
 
 	devconf_set_default_sane_values(&dev_conf, &info);
 	if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
 		dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
 
 	ret = rte_event_dev_configure(evdev, &dev_conf);
-	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+	ASSERT_SUCCESS(ret, "Failed to configure eventdev");
 
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 
 	if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
 		if (queue_count > 8) {
-			printf("test expects the unique priority per queue\n");
+			ssovf_log_dbg(
+				"test expects the unique priority per queue");
 			return -ENOTSUP;
 		}
@@ -216,35 +265,35 @@ _eventdev_setup(int mode)
 			ret = rte_event_queue_default_conf_get(evdev, i,
 					&queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i);
+			ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i);
 			queue_conf.priority = i * step;
 			ret = rte_event_queue_setup(evdev, i, &queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
 		}
 
 	} else {
 		/* Configure event queues with default priority */
 		for (i = 0; i < (int)queue_count; i++) {
 			ret = rte_event_queue_setup(evdev, i, NULL);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
 		}
 	}
 	/* Configure event ports */
 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&port_count), "Port count get failed");
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_setup(evdev, i, NULL);
-		TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+		ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
 		ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", i);
+		ASSERT(ret >= 0, "Failed to link all queues port=%d", i);
 	}
 
 	ret = rte_event_dev_start(evdev);
-	TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+	ASSERT_SUCCESS(ret, "Failed to start device");
 
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static inline int
@@ -311,7 +360,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
 		struct rte_event ev = {.event = 0, .u64 = 0};
 
 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		ASSERT_NOT_NULL(m, "mempool alloc failed");
 
 		m->seqn = i;
 		update_event_and_validation_attr(m, &ev, flow_id, event_type,
@@ -332,7 +381,7 @@ check_excess_events(uint8_t port)
 	for (i = 0; i < 32; i++) {
 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
 
-		TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d",
+		ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d",
 				ev.mbuf->seqn);
 	}
 	return 0;
@@ -346,12 +395,12 @@ generate_random_events(const unsigned int total_events)
 	int ret;
 
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 
 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	ASSERT_SUCCESS(ret, "Failed to get event dev info");
 	for (i = 0; i < total_events; i++) {
 		ret = inject_events(
 			rte_rand() % info.max_event_queue_flows /*flow_id */,
@@ -362,7 +411,7 @@ generate_random_events(const unsigned int total_events)
 			0 /* port */,
 			1 /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}
 	return ret;
 }
@@ -374,19 +423,19 @@ validate_event(struct rte_event *ev)
 	struct event_attr *attr;
 
 	attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
-	TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+	ASSERT_EQUAL(attr->flow_id, ev->flow_id,
 			"flow_id mismatch enq=%d deq =%d",
 			attr->flow_id, ev->flow_id);
-	TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+	ASSERT_EQUAL(attr->event_type, ev->event_type,
 			"event_type mismatch enq=%d deq =%d",
 			attr->event_type, ev->event_type);
-	TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+	ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
 			"sub_event_type mismatch enq=%d deq =%d",
 			attr->sub_event_type, ev->sub_event_type);
-	TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+	ASSERT_EQUAL(attr->sched_type, ev->sched_type,
 			"sched_type mismatch enq=%d deq =%d",
 			attr->sched_type, ev->sched_type);
-	TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+	ASSERT_EQUAL(attr->queue, ev->queue_id,
 			"queue mismatch enq=%d deq =%d",
 			attr->queue, ev->queue_id);
 	return 0;
@@ -405,8 +454,8 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
 	while (1) {
 		if (++forward_progress_cnt > UINT16_MAX) {
-			printf("Detected deadlock\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Detected deadlock");
+			return -1;
 		}
 
 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
@@ -416,11 +465,11 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
 		forward_progress_cnt = 0;
 		ret = validate_event(&ev);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 
 		if (fn != NULL) {
 			ret = fn(index, port, &ev);
-			TEST_ASSERT_SUCCESS(ret,
+			ASSERT_SUCCESS(ret,
 				"Failed to validate test specific event");
 		}
@@ -438,7 +487,7 @@ static int
 validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index,
+	ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index,
 			ev->mbuf->seqn);
 	return 0;
 }
@@ -456,7 +505,7 @@ test_simple_enqdeq(uint8_t sched_type)
 			0 /* port */,
 			MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
 }
@@ -491,7 +540,7 @@ test_multi_queue_enq_single_port_deq(void)
 
 	ret = generate_random_events(MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	return consume_events(0 /* port */, MAX_EVENTS, NULL);
 }
@@ -514,7 +563,7 @@ static int
 validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint32_t range = MAX_EVENTS / queue_count;
@@ -522,7 +571,7 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 	expected_val += ev->queue_id;
 
 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
+	ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
 		"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
 		ev->mbuf->seqn, index, expected_val, range,
 		queue_count, MAX_EVENTS);
@@ -538,7 +587,7 @@ test_multi_queue_priority(void)
 	/* See validate_queue_priority() comments for priority validate logic */
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	max_evts_roundoff = MAX_EVENTS / queue_count;
@@ -548,7 +597,7 @@ test_multi_queue_priority(void)
 		struct rte_event ev = {.event = 0, .u64 = 0};
 
 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		ASSERT_NOT_NULL(m, "mempool alloc failed");
 
 		m->seqn = i;
 		queue = i % queue_count;
@@ -576,7 +625,7 @@ worker_multi_port_fn(void *arg)
 			continue;
 
 		ret = validate_event(&ev);
-		TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+		ASSERT_SUCCESS(ret, "Failed to validate event");
 		rte_pktmbuf_free(ev.mbuf);
 		rte_atomic32_sub(total_events, 1);
 	}
@@ -587,27 +636,29 @@ static inline int
 wait_workers_to_join(int lcore, const rte_atomic32_t *count)
 {
 	uint64_t cycles, print_cycles;
+	RTE_SET_USED(count);
 
 	print_cycles = cycles = rte_get_timer_cycles();
 	while (rte_eal_get_lcore_state(lcore) != FINISHED) {
 		uint64_t new_cycles = rte_get_timer_cycles();
 
 		if (new_cycles - print_cycles > rte_get_timer_hz()) {
-			printf("\r%s: events %d\n", __func__,
+			ssovf_log_dbg("\r%s: events %d", __func__,
 				rte_atomic32_read(count));
 			print_cycles = new_cycles;
 		}
 		if (new_cycles - cycles > rte_get_timer_hz() * 10) {
-			printf("%s: No schedules for seconds, deadlock (%d)\n",
+			ssovf_log_dbg(
+				"%s: No schedules for seconds, deadlock (%d)",
 				__func__,
 				rte_atomic32_read(count));
 			rte_event_dev_dump(evdev, stdout);
 			cycles = new_cycles;
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	rte_eal_mp_wait_lcore();
-	return TEST_SUCCESS;
+	return 0;
 }
@@ -631,12 +682,12 @@ launch_workers_and_wait(int (*master_worker)(void *),
 
 	param = malloc(sizeof(struct test_core_param) * nb_workers);
 	if (!param)
-		return TEST_FAILED;
+		return -1;
 
 	ret = rte_event_dequeue_timeout_ticks(evdev,
 		rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	param[0].total_events = &atomic_total_events;
 	param[0].sched_type = sched_type;
@@ -679,17 +730,17 @@ test_multi_queue_enq_multi_port_deq(void)
 
 	ret = generate_random_events(total_events);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	return launch_workers_and_wait(worker_multi_port_fn,
@@ -702,7 +753,7 @@ validate_queue_to_port_single_link(uint32_t index, uint8_t port,
 		struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, ev->queue_id,
+	ASSERT_EQUAL(port, ev->queue_id,
 			"queue mismatch enq=%d deq =%d",
 			port, ev->queue_id);
 	return 0;
@@ -718,18 +769,18 @@ test_queue_to_port_single_link(void)
 	int i, nr_links, ret;
 
 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&port_count), "Port count get failed");
 
 	/* Unlink all connections that created in eventdev_setup */
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_unlink(evdev, i, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i);
+		ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i);
 	}
 
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
@@ -741,7 +792,7 @@ test_queue_to_port_single_link(void)
 		uint8_t queue = (uint8_t)i;
 
 		ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+		ASSERT(ret == 1, "Failed to link queue to port %d", i);
 
 		ret = inject_events(
 			0x100 /*flow_id */,
@@ -752,7 +803,7 @@ test_queue_to_port_single_link(void)
 			i /* port */,
 			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}
 
 	/* Verify the events generated from correct queue */
@@ -760,10 +811,10 @@ test_queue_to_port_single_link(void)
 		ret = consume_events(i /* port */, total_events,
 				validate_queue_to_port_single_link);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}
 
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static int
@@ -771,7 +822,7 @@ validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
 		struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+	ASSERT_EQUAL(port, (ev->queue_id & 0x1),
 			"queue mismatch enq=%d deq =%d",
 			port, ev->queue_id);
 	return 0;
@@ -789,27 +840,27 @@ test_queue_to_port_multi_link(void)
 	uint32_t nr_queues = 0;
 	uint32_t nr_ports = 0;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &nr_queues), "Queue count get failed");
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &nr_queues), "Queue count get failed");
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 
 	if (nr_ports < 2) {
-		printf("%s: Not enough ports to test ports=%d\n",
+		ssovf_log_dbg("%s: Not enough ports to test ports=%d",
 				__func__, nr_ports);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Unlink all connections that created in eventdev_setup */
 	for (port = 0; port < nr_ports; port++) {
 		ret = rte_event_port_unlink(evdev, port, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+		ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
 			port);
 	}
@@ -819,7 +870,7 @@ test_queue_to_port_multi_link(void)
 	for (queue = 0; queue < nr_queues; queue++) {
 		port = queue & 0x1;
 		ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+		ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
 			queue, port);
 
 		ret = inject_events(
@@ -831,7 +882,7 @@ test_queue_to_port_multi_link(void)
 			port /* port */,
 			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 
 		if (port == 0)
 			port0_events += total_events;
@@ -842,13 +893,13 @@ test_queue_to_port_multi_link(void)
 	ret = consume_events(0 /* port */, port0_events,
 			validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 	ret = consume_events(1 /* port */, port1_events,
 			validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static int
@@ -878,17 +929,17 @@ worker_flow_based_pipeline(void *arg)
 			ev.op = RTE_EVENT_OP_FORWARD;
 			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.sub_event_type == 1) { /* Events from stage 1*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
 				rte_pktmbuf_free(ev.mbuf);
 				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.sub_event_type = %d\n",
+			ssovf_log_dbg("Invalid ev.sub_event_type = %d",
 				ev.sub_event_type);
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	return 0;
@@ -902,15 +953,15 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -923,20 +974,20 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
 		0 /* port */,
 		total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	ret = launch_workers_and_wait(worker_flow_based_pipeline,
 					worker_flow_based_pipeline,
 					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }
@@ -1033,16 +1084,16 @@ worker_group_based_pipeline(void *arg)
 			ev.op = RTE_EVENT_OP_FORWARD;
 			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
 				rte_pktmbuf_free(ev.mbuf);
 				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.queue_id = %d\n", ev.queue_id);
-			return TEST_FAILED;
+			ssovf_log_dbg("Invalid ev.queue_id = %d", ev.queue_id);
+			return -1;
 		}
 	}
@@ -1058,21 +1109,21 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	if (queue_count < 2 || !nr_ports) {
-		printf("%s: Not enough queues=%d ports=%d or workers=%d\n",
+		ssovf_log_dbg("%s: Not enough queues=%d ports=%d or workers=%d",
 			__func__, queue_count, nr_ports,
 			rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1085,20 +1136,20 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 		0 /* port */,
 		total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	ret = launch_workers_and_wait(worker_group_based_pipeline,
 					worker_group_based_pipeline,
 					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static int
@@ -1201,15 +1252,15 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 	uint32_t nr_ports;
 	int ret;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1222,7 +1273,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 		0 /* port */,
 		MAX_EVENTS /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
 					0xff /* invalid */);
@@ -1244,7 +1295,7 @@ worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1286,7 +1337,7 @@ worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1357,14 +1408,14 @@ test_producer_consumer_ingress_order_test(int (*fn)(void *))
 {
 	uint32_t nr_ports;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		printf("### Not enough cores for %s test.\n", __func__);
-		return TEST_SUCCESS;
+		ssovf_log_dbg("### Not enough cores for %s test.", __func__);
+		return 0;
 	}
 
 	launch_workers_and_wait(worker_ordered_flow_producer, fn,
@@ -1389,86 +1440,102 @@ test_queue_producer_consumer_ingress_order_test(void)
 			worker_group_based_pipeline);
 }
 
-static struct unit_test_suite eventdev_octeontx_testsuite = {
-	.suite_name = "eventdev octeontx unit test suite",
-	.setup = testsuite_setup,
-	.teardown = testsuite_teardown,
-	.unit_test_cases = {
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_single_port_deq),
-		TEST_CASE_ST(eventdev_setup_priority, eventdev_teardown,
-			test_multi_queue_priority),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_multi_port_deq),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_single_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_multi_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_mixed_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_flow_producer_consumer_ingress_order_test),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_producer_consumer_ingress_order_test),
-		/* Tests with dequeue timeout */
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASES_END() /**< NULL terminate unit test array */
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+		int (*test)(void), const char *name)
+{
+	if (setup() < 0) {
+		ssovf_log_selftest("Error setting up test %s", name);
+		unsupported++;
+	} else {
+		if (test() < 0) {
+			failed++;
+			ssovf_log_selftest("%s Failed", name);
+		} else {
+			passed++;
+			ssovf_log_selftest("%s Passed", name);
+		}
 	}
-};
 
-static int
+	total++;
+	tdown();
+}
+
+void
 test_eventdev_octeontx(void)
 {
-	return unit_test_suite_runner(&eventdev_octeontx_testsuite);
+	testsuite_setup();
+
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_single_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_multi_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_single_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_multi_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_parallel);
OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_mixed_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_flow_producer_consumer_ingress_order_test); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_producer_consumer_ingress_order_test); + OCTEONTX_TEST_RUN(eventdev_setup_priority, eventdev_teardown, + test_multi_queue_priority); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_flow_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + + ssovf_log_selftest("Total tests : %d", total); + ssovf_log_selftest("Passed : %d", passed); + ssovf_log_selftest("Failed : %d", failed); + ssovf_log_selftest("Not supported : %d", unsupported); + + testsuite_teardown(); } - -REGISTER_TEST_COMMAND(eventdev_octeontx_autotest, test_eventdev_octeontx); -- 2.14.1