From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com, gage.eads@intel.com, liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Thu, 11 Jan 2018 15:51:49 +0530
Message-Id: <20180111102156.12726-4-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180111102156.12726-1-pbhagavatula@caviumnetworks.com>
References: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com> <20180111102156.12726-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v5 04/11] event/octeontx: modify octeontx eventdev test
List-Id: DPDK patches and discussions

Modify test_eventdev_octeontx to be a standalone selftest, independent of the
test framework.
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/octeontx/ssovf_evdev_selftest.c | 425 ++++++++++++++------------
 1 file changed, 232 insertions(+), 193 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_evdev_selftest.c b/drivers/event/octeontx/ssovf_evdev_selftest.c
index 8fddb4fd2..325c110c8 100644
--- a/drivers/event/octeontx/ssovf_evdev_selftest.c
+++ b/drivers/event/octeontx/ssovf_evdev_selftest.c
@@ -46,12 +46,21 @@
 #include
 #include
 #include
+#include

-#include "test.h"
+#include "ssovf_evdev.h"

 #define NUM_PACKETS (1 << 18)
 #define MAX_EVENTS  (16 * 1024)

+#define OCTEONTX_TEST_RUN(setup, teardown, test) \
+	octeontx_test_run(setup, teardown, test, #test)
+
+static int total;
+static int passed;
+static int failed;
+static int unsupported;
+
 static int evdev;
 static struct rte_mempool *eventdev_test_mempool;

@@ -79,11 +88,11 @@ static inline int
 seqn_list_update(int val)
 {
 	if (seqn_list_index >= NUM_PACKETS)
-		return TEST_FAILED;
+		return -1;

 	seqn_list[seqn_list_index++] = val;
 	rte_smp_wmb();
-	return TEST_SUCCESS;
+	return 0;
 }

 static inline int
@@ -93,11 +102,11 @@ seqn_list_check(int limit)

 	for (i = 0; i < limit; i++) {
 		if (seqn_list[i] != i) {
-			printf("Seqn mismatch %d %d\n", seqn_list[i], i);
-			return TEST_FAILED;
+			ssovf_log_dbg("Seqn mismatch %d %d", seqn_list[i], i);
+			return -1;
 		}
 	}
-	return TEST_SUCCESS;
+	return 0;
 }

 struct test_core_param {
@@ -114,20 +123,21 @@ testsuite_setup(void)

 	evdev = rte_event_dev_get_dev_id(eventdev_name);
 	if (evdev < 0) {
-		printf("%d: Eventdev %s not found - creating.\n",
+		ssovf_log_dbg("%d: Eventdev %s not found - creating.",
 				__LINE__, eventdev_name);
 		if (rte_vdev_init(eventdev_name, NULL) < 0) {
-			printf("Error creating eventdev %s\n", eventdev_name);
-			return TEST_FAILED;
+			ssovf_log_dbg("Error creating eventdev %s",
+					eventdev_name);
+			return -1;
 		}
 		evdev = rte_event_dev_get_dev_id(eventdev_name);
 		if (evdev < 0) {
-			printf("Error finding newly created eventdev\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Error finding newly created eventdev");
+			return -1;
 		}
 	}

-	return TEST_SUCCESS;
+	return 0;
 }

 static void
@@ -177,31 +187,32 @@ _eventdev_setup(int mode)
 			512, /* Use very small mbufs */
 			rte_socket_id());
 	if (!eventdev_test_mempool) {
-		printf("ERROR creating mempool\n");
-		return TEST_FAILED;
+		ssovf_log_dbg("ERROR creating mempool");
+		return -1;
 	}

 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-	TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
-			"max_num_events=%d < max_events=%d",
-			info.max_num_events, MAX_EVENTS);
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	RTE_TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS,
+			"ERROR max_num_events=%d < max_events=%d",
+			info.max_num_events, MAX_EVENTS);

 	devconf_set_default_sane_values(&dev_conf, &info);
 	if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
 		dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;

 	ret = rte_event_dev_configure(evdev, &dev_conf);
-	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");

 	if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
 		if (queue_count > 8) {
-			printf("test expects the unique priority per queue\n");
+			ssovf_log_dbg(
+				"test expects the unique priority per queue");
 			return -ENOTSUP;
 		}

@@ -216,35 +227,39 @@ _eventdev_setup(int mode)
 			ret = rte_event_queue_default_conf_get(evdev, i,
 					&queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
+					i);
 			queue_conf.priority = i * step;
 			ret = rte_event_queue_setup(evdev, i, &queue_conf);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
 		}

 	} else {
 		/* Configure event queues with default priority */
 		for (i = 0; i < (int)queue_count; i++) {
 			ret = rte_event_queue_setup(evdev, i, NULL);
-			TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i);
+			RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
+					i);
 		}
 	}
 	/* Configure event ports */
 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&port_count), "Port count get failed");
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_setup(evdev, i, NULL);
-		TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
+		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
 		ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", i);
+		RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
+				i);
 	}

 	ret = rte_event_dev_start(evdev);
-	TEST_ASSERT_SUCCESS(ret, "Failed to start device");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");

-	return TEST_SUCCESS;
+	return 0;
 }

 static inline int
@@ -311,7 +326,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
 		struct rte_event ev = {.event = 0, .u64 = 0};

 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");

 		m->seqn = i;
 		update_event_and_validation_attr(m, &ev, flow_id, event_type,
@@ -332,8 +347,8 @@ check_excess_events(uint8_t port)
 	for (i = 0; i < 32; i++) {
 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);

-		TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d",
-					ev.mbuf->seqn);
+		RTE_TEST_ASSERT_SUCCESS(valid_event,
+				"Unexpected valid event=%d", ev.mbuf->seqn);
 	}
 	return 0;
 }
@@ -346,12 +361,12 @@ generate_random_events(const unsigned int total_events)
 	int ret;

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	ret = rte_event_dev_info_get(evdev, &info);
-	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
 	for (i = 0; i < total_events; i++) {
 		ret = inject_events(
 			rte_rand() % info.max_event_queue_flows /*flow_id */,
@@ -362,7 +377,7 @@ generate_random_events(const unsigned int total_events)
 			0 /* port */,
 			1 /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}
 	return ret;
 }
@@ -374,19 +389,19 @@ validate_event(struct rte_event *ev)
 	struct event_attr *attr;

 	attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
-	TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
+	RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
 			"flow_id mismatch enq=%d deq =%d",
 			attr->flow_id, ev->flow_id);
-	TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
+	RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
 			"event_type mismatch enq=%d deq =%d",
 			attr->event_type, ev->event_type);
-	TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
+	RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
 			"sub_event_type mismatch enq=%d deq =%d",
 			attr->sub_event_type, ev->sub_event_type);
-	TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
+	RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
 			"sched_type mismatch enq=%d deq =%d",
 			attr->sched_type, ev->sched_type);
-	TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
+	RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
 			"queue mismatch enq=%d deq =%d",
 			attr->queue, ev->queue_id);
 	return 0;
@@ -405,8 +420,8 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)

 	while (1) {
 		if (++forward_progress_cnt > UINT16_MAX) {
-			printf("Detected deadlock\n");
-			return TEST_FAILED;
+			ssovf_log_dbg("Detected deadlock");
+			return -1;
 		}

 		valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
@@ -416,11 +431,11 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
 		forward_progress_cnt = 0;
 		ret = validate_event(&ev);
 		if (ret)
-			return TEST_FAILED;
+			return -1;

 		if (fn != NULL) {
 			ret = fn(index, port, &ev);
-			TEST_ASSERT_SUCCESS(ret,
+			RTE_TEST_ASSERT_SUCCESS(ret,
 				"Failed to validate test specific event");
 		}

@@ -438,8 +453,8 @@ static int
 validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index,
-					ev->mbuf->seqn);
+	RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d",
+			index, ev->mbuf->seqn);
 	return 0;
 }

@@ -456,7 +471,7 @@ test_simple_enqdeq(uint8_t sched_type)
 			0 /* port */,
 			MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
 }
@@ -491,7 +506,7 @@ test_multi_queue_enq_single_port_deq(void)

 	ret = generate_random_events(MAX_EVENTS);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return consume_events(0 /* port */, MAX_EVENTS, NULL);
 }
@@ -514,7 +529,7 @@ static int
 validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 {
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint32_t range = MAX_EVENTS / queue_count;
@@ -522,7 +537,7 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
 	expected_val += ev->queue_id;

 	RTE_SET_USED(port);
-	TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
+	RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val,
 		"seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
 		ev->mbuf->seqn, index, expected_val, range,
 		queue_count, MAX_EVENTS);
@@ -538,7 +553,7 @@ test_multi_queue_priority(void)

 	/* See validate_queue_priority() comments for priority validate logic */
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	max_evts_roundoff = MAX_EVENTS / queue_count;
@@ -548,7 +563,7 @@ test_multi_queue_priority(void)
 		struct rte_event ev = {.event = 0, .u64 = 0};

 		m = rte_pktmbuf_alloc(eventdev_test_mempool);
-		TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
+		RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");

 		m->seqn = i;
 		queue = i % queue_count;
@@ -576,7 +591,7 @@ worker_multi_port_fn(void *arg)
 			continue;

 		ret = validate_event(&ev);
-		TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
+		RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
 		rte_pktmbuf_free(ev.mbuf);
 		rte_atomic32_sub(total_events, 1);
 	}
@@ -587,27 +602,29 @@ static inline int
 wait_workers_to_join(int lcore, const rte_atomic32_t *count)
 {
 	uint64_t cycles, print_cycles;
+	RTE_SET_USED(count);

 	print_cycles = cycles = rte_get_timer_cycles();
 	while (rte_eal_get_lcore_state(lcore) != FINISHED) {
 		uint64_t new_cycles = rte_get_timer_cycles();

 		if (new_cycles - print_cycles > rte_get_timer_hz()) {
-			printf("\r%s: events %d\n", __func__,
+			ssovf_log_dbg("\r%s: events %d", __func__,
 				rte_atomic32_read(count));
 			print_cycles = new_cycles;
 		}
 		if (new_cycles - cycles > rte_get_timer_hz() * 10) {
-			printf("%s: No schedules for seconds, deadlock (%d)\n",
+			ssovf_log_dbg(
+				"%s: No schedules for seconds, deadlock (%d)",
 				__func__,
 				rte_atomic32_read(count));
 			rte_event_dev_dump(evdev, stdout);
 			cycles = new_cycles;
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	rte_eal_mp_wait_lcore();
-	return TEST_SUCCESS;
+	return 0;
 }


@@ -631,12 +648,12 @@ launch_workers_and_wait(int (*master_worker)(void *),

 	param = malloc(sizeof(struct test_core_param) * nb_workers);
 	if (!param)
-		return TEST_FAILED;
+		return -1;

 	ret = rte_event_dequeue_timeout_ticks(evdev,
 		rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	param[0].total_events = &atomic_total_events;
 	param[0].sched_type = sched_type;
@@ -679,17 +696,17 @@ test_multi_queue_enq_multi_port_deq(void)

 	ret = generate_random_events(total_events);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	return launch_workers_and_wait(worker_multi_port_fn,
@@ -702,7 +719,7 @@ validate_queue_to_port_single_link(uint32_t index, uint8_t port,
 			struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, ev->queue_id,
+	RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
 				"queue mismatch enq=%d deq =%d",
 				port, ev->queue_id);
 	return 0;
@@ -718,18 +735,19 @@ test_queue_to_port_single_link(void)
 	int i, nr_links, ret;

 	uint32_t port_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&port_count), "Port count get failed");

 	/* Unlink all connections that created in eventdev_setup */
 	for (i = 0; i < (int)port_count; i++) {
 		ret = rte_event_port_unlink(evdev, i, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i);
+		RTE_TEST_ASSERT(ret >= 0,
+				"Failed to unlink all queues port=%d", i);
 	}

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");

@@ -741,7 +759,7 @@ test_queue_to_port_single_link(void)
 		uint8_t queue = (uint8_t)i;

 		ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
+		RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);

 		ret = inject_events(
 			0x100 /*flow_id */,
@@ -752,7 +770,7 @@ test_queue_to_port_single_link(void)
 			i /* port */,
 			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}

 	/* Verify the events generated from correct queue */
@@ -760,10 +778,10 @@ test_queue_to_port_single_link(void)
 		ret = consume_events(i /* port */, total_events,
 				validate_queue_to_port_single_link);
 		if (ret)
-			return TEST_FAILED;
+			return -1;
 	}

-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -771,7 +789,7 @@ validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
 			struct rte_event *ev)
 {
 	RTE_SET_USED(index);
-	TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
+	RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
 				"queue mismatch enq=%d deq =%d",
 				port, ev->queue_id);
 	return 0;
@@ -789,27 +807,27 @@ test_queue_to_port_multi_link(void)
 	uint32_t nr_queues = 0;
 	uint32_t nr_ports = 0;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &nr_queues), "Queue count get failed");
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &nr_queues), "Queue count get failed");
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");

 	if (nr_ports < 2) {
-		printf("%s: Not enough ports to test ports=%d\n",
+		ssovf_log_dbg("%s: Not enough ports to test ports=%d",
 				__func__, nr_ports);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Unlink all connections that created in eventdev_setup */
 	for (port = 0; port < nr_ports; port++) {
 		ret = rte_event_port_unlink(evdev, port, NULL, 0);
-		TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
+		RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
 					port);
 	}

@@ -819,7 +837,7 @@ test_queue_to_port_multi_link(void)
 	for (queue = 0; queue < nr_queues; queue++) {
 		port = queue & 0x1;
 		ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
-		TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
+		RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
 					queue, port);

 		ret = inject_events(
@@ -831,7 +849,7 @@ test_queue_to_port_multi_link(void)
 			port /* port */,
 			total_events /* events */);
 		if (ret)
-			return TEST_FAILED;
+			return -1;

 		if (port == 0)
 			port0_events += total_events;
@@ -842,13 +860,13 @@ test_queue_to_port_multi_link(void)
 	ret = consume_events(0 /* port */, port0_events,
 				validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 	ret = consume_events(1 /* port */, port1_events,
 				validate_queue_to_port_multi_link);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -878,17 +896,17 @@ worker_flow_based_pipeline(void *arg)
 			ev.op = RTE_EVENT_OP_FORWARD;
 			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.sub_event_type == 1) { /* Events from stage 1*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
 				rte_pktmbuf_free(ev.mbuf);
 				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.sub_event_type = %d\n",
+			ssovf_log_dbg("Invalid ev.sub_event_type = %d",
 					ev.sub_event_type);
-			return TEST_FAILED;
+			return -1;
 		}
 	}
 	return 0;
@@ -902,15 +920,15 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -923,20 +941,20 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type,
 		0 /* port */,
 		total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	ret = launch_workers_and_wait(worker_flow_based_pipeline,
 					worker_flow_based_pipeline,
 					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }


@@ -1033,16 +1051,16 @@ worker_group_based_pipeline(void *arg)
 			ev.op = RTE_EVENT_OP_FORWARD;
 			rte_event_enqueue_burst(evdev, port, &ev, 1);
 		} else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
-			if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) {
+			if (seqn_list_update(ev.mbuf->seqn) == 0) {
 				rte_pktmbuf_free(ev.mbuf);
 				rte_atomic32_sub(total_events, 1);
 			} else {
-				printf("Failed to update seqn_list\n");
-				return TEST_FAILED;
+				ssovf_log_dbg("Failed to update seqn_list");
+				return -1;
 			}
 		} else {
-			printf("Invalid ev.queue_id = %d\n", ev.queue_id);
-			return TEST_FAILED;
+			ssovf_log_dbg("Invalid ev.queue_id = %d", ev.queue_id);
+			return -1;
 		}
 	}

@@ -1058,21 +1076,21 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	if (queue_count < 2 || !nr_ports) {
-		printf("%s: Not enough queues=%d ports=%d or workers=%d\n",
+		ssovf_log_dbg("%s: Not enough queues=%d ports=%d or workers=%d",
 			 __func__, queue_count, nr_ports,
 			 rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -1085,20 +1103,20 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 		0 /* port */,
 		total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	ret = launch_workers_and_wait(worker_group_based_pipeline,
 					worker_group_based_pipeline,
 					total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }

 static int
@@ -1201,15 +1219,15 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 	uint32_t nr_ports;
 	int ret;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}

 	/* Injects events with m->seqn=0 to total_events */
@@ -1222,7 +1240,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 		0 /* port */,
 		MAX_EVENTS /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;

 	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
 					 0xff /* invalid */);
@@ -1244,7 +1262,7 @@ worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1286,7 +1304,7 @@ worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 			    RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 			    &queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1357,14 +1375,14 @@ test_producer_consumer_ingress_order_test(int (*fn)(void *))
 {
 	uint32_t nr_ports;

-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);

 	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		printf("### Not enough cores for %s test.\n", __func__);
-		return TEST_SUCCESS;
+		ssovf_log_dbg("### Not enough cores for %s test.", __func__);
+		return 0;
 	}

 	launch_workers_and_wait(worker_ordered_flow_producer, fn,
@@ -1389,86 +1407,107 @@ test_queue_producer_consumer_ingress_order_test(void)
 			worker_group_based_pipeline);
 }

-static struct unit_test_suite eventdev_octeontx_testsuite = {
-	.suite_name = "eventdev octeontx unit test suite",
-	.setup = testsuite_setup,
-	.teardown = testsuite_teardown,
-	.unit_test_cases = {
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_single_port_deq),
-		TEST_CASE_ST(eventdev_setup_priority, eventdev_teardown,
-			test_multi_queue_priority),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_multi_port_deq),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_single_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_multi_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_mixed_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_flow_producer_consumer_ingress_order_test),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_producer_consumer_ingress_order_test),
-		/* Tests with dequeue timeout */
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASES_END() /**< NULL terminate unit test array */
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+		int (*test)(void), const char *name)
+{
+	if (setup() < 0) {
+		ssovf_log_selftest("Error setting up test %s", name);
+		unsupported++;
+	} else {
+		if (test() < 0) {
+			failed++;
+			ssovf_log_selftest("%s Failed", name);
+		} else {
+			passed++;
+			ssovf_log_selftest("%s Passed", name);
+		}
 	}
-};

-static int
+	total++;
+	tdown();
+}
+
+int
 test_eventdev_octeontx(void)
 {
-	return unit_test_suite_runner(&eventdev_octeontx_testsuite);
-}
+	testsuite_setup();
+
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_single_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_multi_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_single_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_multi_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
test_multi_port_flow_atomic_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_atomic_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_parallel_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_ordered_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_atomic_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_ordered); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_parallel_to_parallel); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_flow_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_queue_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_multi_port_mixed_max_stages_random_sched_type); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_flow_producer_consumer_ingress_order_test); + OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown, + test_queue_producer_consumer_ingress_order_test); + 
OCTEONTX_TEST_RUN(eventdev_setup_priority, eventdev_teardown, + test_multi_queue_priority); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_flow_ordered_to_atomic); + OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown, + test_multi_port_queue_ordered_to_atomic); + + ssovf_log_selftest("Total tests : %d", total); + ssovf_log_selftest("Passed : %d", passed); + ssovf_log_selftest("Failed : %d", failed); + ssovf_log_selftest("Not supported : %d", unsupported); + + testsuite_teardown(); + + if (failed) + return -1; -REGISTER_TEST_COMMAND(eventdev_octeontx_autotest, test_eventdev_octeontx); + return 0; +} -- 2.15.1