From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, harry.van.haaren@intel.com, gage.eads@intel.com, liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Tue, 26 Dec 2017 00:47:31 +0530
Message-Id: <20171225191738.17151-4-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171225191738.17151-1-pbhagavatula@caviumnetworks.com>
References: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com> <20171225191738.17151-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v3 04/11] event/octeontx: modify octeontx eventdev test
List-Id: DPDK patches and discussions

Modify test_eventdev_octeontx to be a standalone selftest, independent of the test framework.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx/octeontx_evdev_selftest.c | 427 +++++++++++++---------- 1 file changed, 234 insertions(+), 193 deletions(-) diff --git a/drivers/event/octeontx/octeontx_evdev_selftest.c b/drivers/event/octeontx/octeontx_evdev_selftest.c index 8fddb4fd2..3877bca4a 100644 --- a/drivers/event/octeontx/octeontx_evdev_selftest.c +++ b/drivers/event/octeontx/octeontx_evdev_selftest.c @@ -46,12 +46,21 @@ #include #include #include +#include -#include "test.h" +#include "ssovf_evdev.h" #define NUM_PACKETS (1 << 18) #define MAX_EVENTS (16 * 1024) +#define OCTEONTX_TEST_RUN(setup, teardown, test) \ + octeontx_test_run(setup, teardown, test, #test) + +static int total; +static int passed; +static int failed; +static int unsupported; + static int evdev; static struct rte_mempool *eventdev_test_mempool; @@ -79,11 +88,11 @@ static inline int seqn_list_update(int val) { if (seqn_list_index >= NUM_PACKETS) - return TEST_FAILED; + return -1; seqn_list[seqn_list_index++] = val; rte_smp_wmb(); - return TEST_SUCCESS; + return 0; } static inline int @@ -93,11 +102,11 @@ seqn_list_check(int limit) for (i = 0; i < limit; i++) { if (seqn_list[i] != i) { - printf("Seqn mismatch %d %d\n", seqn_list[i], i); - return TEST_FAILED; + ssovf_log_dbg("Seqn mismatch %d %d", seqn_list[i], i); + return -1; } } - return TEST_SUCCESS; + return 0; } struct test_core_param { @@ -114,20 +123,21 @@ testsuite_setup(void) evdev = rte_event_dev_get_dev_id(eventdev_name); if (evdev < 0) { - printf("%d: Eventdev %s not found - creating.\n", + ssovf_log_dbg("%d: Eventdev %s not found - creating.", __LINE__, eventdev_name); if (rte_vdev_init(eventdev_name, NULL) < 0) { - printf("Error creating eventdev %s\n", eventdev_name); - return TEST_FAILED; + ssovf_log_dbg("Error creating eventdev %s", + eventdev_name); + return -1; } evdev = rte_event_dev_get_dev_id(eventdev_name); if (evdev < 0) { - printf("Error finding newly created eventdev\n"); - return TEST_FAILED; + 
ssovf_log_dbg("Error finding newly created eventdev"); + return -1; } } - return TEST_SUCCESS; + return 0; } static void @@ -177,31 +187,34 @@ _eventdev_setup(int mode) 512, /* Use very small mbufs */ rte_socket_id()); if (!eventdev_test_mempool) { - printf("ERROR creating mempool\n"); - return TEST_FAILED; + ssovf_log_dbg("ERROR creating mempool"); + return -1; } ret = rte_event_dev_info_get(evdev, &info); - TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); - TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS, - "max_num_events=%d < max_events=%d", - info.max_num_events, MAX_EVENTS); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + if (!(info.max_num_events >= (int32_t)MAX_EVENTS)) { + ssovf_log_dbg("ERROR max_num_events=%d < max_events=%d", + info.max_num_events, MAX_EVENTS); + return -1; + } devconf_set_default_sane_values(&dev_conf, &info); if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT) dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT; ret = rte_event_dev_configure(evdev, &dev_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); if (mode == TEST_EVENTDEV_SETUP_PRIORITY) { if (queue_count > 8) { - printf("test expects the unique priority per queue\n"); + ssovf_log_dbg( + "test expects the unique priority per queue"); return -ENOTSUP; } @@ -216,35 +229,39 @@ _eventdev_setup(int mode) ret = rte_event_queue_default_conf_get(evdev, i, &queue_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", + i); queue_conf.priority = i * step; ret = rte_event_queue_setup(evdev, i, &queue_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, 
"Failed to setup queue=%d", + i); } } else { /* Configure event queues with default priority */ for (i = 0; i < (int)queue_count; i++) { ret = rte_event_queue_setup(evdev, i, NULL); - TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", + i); } } /* Configure event ports */ uint32_t port_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), "Port count get failed"); for (i = 0; i < (int)port_count; i++) { ret = rte_event_port_setup(evdev, i, NULL); - TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i); ret = rte_event_port_link(evdev, i, NULL, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", i); + RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", + i); } ret = rte_event_dev_start(evdev); - TEST_ASSERT_SUCCESS(ret, "Failed to start device"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device"); - return TEST_SUCCESS; + return 0; } static inline int @@ -311,7 +328,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, struct rte_event ev = {.event = 0, .u64 = 0}; m = rte_pktmbuf_alloc(eventdev_test_mempool); - TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); m->seqn = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, @@ -332,8 +349,8 @@ check_excess_events(uint8_t port) for (i = 0; i < 32; i++) { valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); - TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d", - ev.mbuf->seqn); + RTE_TEST_ASSERT_SUCCESS(valid_event, + "Unexpected valid event=%d", ev.mbuf->seqn); } return 0; } @@ -346,12 +363,12 @@ generate_random_events(const unsigned int total_events) int ret; uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + 
RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); ret = rte_event_dev_info_get(evdev, &info); - TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); for (i = 0; i < total_events; i++) { ret = inject_events( rte_rand() % info.max_event_queue_flows /*flow_id */, @@ -362,7 +379,7 @@ generate_random_events(const unsigned int total_events) 0 /* port */, 1 /* events */); if (ret) - return TEST_FAILED; + return -1; } return ret; } @@ -374,19 +391,19 @@ validate_event(struct rte_event *ev) struct event_attr *attr; attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *); - TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, + RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, "flow_id mismatch enq=%d deq =%d", attr->flow_id, ev->flow_id); - TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, + RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, "event_type mismatch enq=%d deq =%d", attr->event_type, ev->event_type); - TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, + RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, "sub_event_type mismatch enq=%d deq =%d", attr->sub_event_type, ev->sub_event_type); - TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, + RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, "sched_type mismatch enq=%d deq =%d", attr->sched_type, ev->sched_type); - TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, + RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, "queue mismatch enq=%d deq =%d", attr->queue, ev->queue_id); return 0; @@ -405,8 +422,8 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) while (1) { if (++forward_progress_cnt > UINT16_MAX) { - printf("Detected deadlock\n"); - return TEST_FAILED; + ssovf_log_dbg("Detected deadlock"); + return -1; } valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); @@ -416,11 +433,11 @@ 
consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) forward_progress_cnt = 0; ret = validate_event(&ev); if (ret) - return TEST_FAILED; + return -1; if (fn != NULL) { ret = fn(index, port, &ev); - TEST_ASSERT_SUCCESS(ret, + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate test specific event"); } @@ -438,8 +455,8 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index, - ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", + index, ev->mbuf->seqn); return 0; } @@ -456,7 +473,7 @@ test_simple_enqdeq(uint8_t sched_type) 0 /* port */, MAX_EVENTS); if (ret) - return TEST_FAILED; + return -1; return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq); } @@ -491,7 +508,7 @@ test_multi_queue_enq_single_port_deq(void) ret = generate_random_events(MAX_EVENTS); if (ret) - return TEST_FAILED; + return -1; return consume_events(0 /* port */, MAX_EVENTS, NULL); } @@ -514,7 +531,7 @@ static int validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) { uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); uint32_t range = MAX_EVENTS / queue_count; @@ -522,7 +539,7 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, + RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", ev->mbuf->seqn, index, expected_val, range, queue_count, MAX_EVENTS); @@ -538,7 +555,7 @@ test_multi_queue_priority(void) /* See validate_queue_priority() comments for priority validate logic */ uint32_t queue_count; - 
TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); max_evts_roundoff = MAX_EVENTS / queue_count; @@ -548,7 +565,7 @@ test_multi_queue_priority(void) struct rte_event ev = {.event = 0, .u64 = 0}; m = rte_pktmbuf_alloc(eventdev_test_mempool); - TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); m->seqn = i; queue = i % queue_count; @@ -576,7 +593,7 @@ worker_multi_port_fn(void *arg) continue; ret = validate_event(&ev); - TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } @@ -587,27 +604,29 @@ static inline int wait_workers_to_join(int lcore, const rte_atomic32_t *count) { uint64_t cycles, print_cycles; + RTE_SET_USED(count); print_cycles = cycles = rte_get_timer_cycles(); while (rte_eal_get_lcore_state(lcore) != FINISHED) { uint64_t new_cycles = rte_get_timer_cycles(); if (new_cycles - print_cycles > rte_get_timer_hz()) { - printf("\r%s: events %d\n", __func__, + ssovf_log_dbg("\r%s: events %d", __func__, rte_atomic32_read(count)); print_cycles = new_cycles; } if (new_cycles - cycles > rte_get_timer_hz() * 10) { - printf("%s: No schedules for seconds, deadlock (%d)\n", + ssovf_log_dbg( + "%s: No schedules for seconds, deadlock (%d)", __func__, rte_atomic32_read(count)); rte_event_dev_dump(evdev, stdout); cycles = new_cycles; - return TEST_FAILED; + return -1; } } rte_eal_mp_wait_lcore(); - return TEST_SUCCESS; + return 0; } @@ -631,12 +650,12 @@ launch_workers_and_wait(int (*master_worker)(void *), param = malloc(sizeof(struct test_core_param) * nb_workers); if (!param) - return TEST_FAILED; + return -1; ret = rte_event_dequeue_timeout_ticks(evdev, rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks); if (ret) - return TEST_FAILED; + return -1; 
param[0].total_events = &atomic_total_events; param[0].sched_type = sched_type; @@ -679,17 +698,17 @@ test_multi_queue_enq_multi_port_deq(void) ret = generate_random_events(total_events); if (ret) - return TEST_FAILED; + return -1; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); if (!nr_ports) { - printf("%s: Not enough ports=%d or workers=%d\n", __func__, + ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__, nr_ports, rte_lcore_count() - 1); - return TEST_SUCCESS; + return 0; } return launch_workers_and_wait(worker_multi_port_fn, @@ -702,7 +721,7 @@ validate_queue_to_port_single_link(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(index); - TEST_ASSERT_EQUAL(port, ev->queue_id, + RTE_TEST_ASSERT_EQUAL(port, ev->queue_id, "queue mismatch enq=%d deq =%d", port, ev->queue_id); return 0; @@ -718,18 +737,19 @@ test_queue_to_port_single_link(void) int i, nr_links, ret; uint32_t port_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), "Port count get failed"); /* Unlink all connections that created in eventdev_setup */ for (i = 0; i < (int)port_count; i++) { ret = rte_event_port_unlink(evdev, i, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i); + RTE_TEST_ASSERT(ret >= 0, + "Failed to unlink all queues port=%d", i); } uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); @@ -741,7 +761,7 @@ test_queue_to_port_single_link(void) uint8_t queue = (uint8_t)i; ret = rte_event_port_link(evdev, i, &queue, NULL, 1); - TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); + 
RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); ret = inject_events( 0x100 /*flow_id */, @@ -752,7 +772,7 @@ test_queue_to_port_single_link(void) i /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; } /* Verify the events generated from correct queue */ @@ -760,10 +780,10 @@ test_queue_to_port_single_link(void) ret = consume_events(i /* port */, total_events, validate_queue_to_port_single_link); if (ret) - return TEST_FAILED; + return -1; } - return TEST_SUCCESS; + return 0; } static int @@ -771,7 +791,7 @@ validate_queue_to_port_multi_link(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(index); - TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), + RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), "queue mismatch enq=%d deq =%d", port, ev->queue_id); return 0; @@ -789,27 +809,27 @@ test_queue_to_port_multi_link(void) uint32_t nr_queues = 0; uint32_t nr_ports = 0; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues), "Queue count get failed"); - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues), "Queue count get failed"); - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); if (nr_ports < 2) { - printf("%s: Not enough ports to test ports=%d\n", + ssovf_log_dbg("%s: Not enough ports to test ports=%d", __func__, nr_ports); - return TEST_SUCCESS; + return 0; } /* Unlink all connections that created in eventdev_setup */ for (port = 0; port < nr_ports; port++) { ret = rte_event_port_unlink(evdev, port, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", + RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", port); } @@ -819,7 +839,7 @@ 
test_queue_to_port_multi_link(void) for (queue = 0; queue < nr_queues; queue++) { port = queue & 0x1; ret = rte_event_port_link(evdev, port, &queue, NULL, 1); - TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", + RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", queue, port); ret = inject_events( @@ -831,7 +851,7 @@ test_queue_to_port_multi_link(void) port /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; if (port == 0) port0_events += total_events; @@ -842,13 +862,13 @@ test_queue_to_port_multi_link(void) ret = consume_events(0 /* port */, port0_events, validate_queue_to_port_multi_link); if (ret) - return TEST_FAILED; + return -1; ret = consume_events(1 /* port */, port1_events, validate_queue_to_port_multi_link); if (ret) - return TEST_FAILED; + return -1; - return TEST_SUCCESS; + return 0; } static int @@ -878,17 +898,17 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) { + if (seqn_list_update(ev.mbuf->seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { - printf("Failed to update seqn_list\n"); - return TEST_FAILED; + ssovf_log_dbg("Failed to update seqn_list"); + return -1; } } else { - printf("Invalid ev.sub_event_type = %d\n", + ssovf_log_dbg("Invalid ev.sub_event_type = %d", ev.sub_event_type); - return TEST_FAILED; + return -1; } } return 0; @@ -902,15 +922,15 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, uint32_t nr_ports; int ret; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); if (!nr_ports) { - printf("%s: Not enough ports=%d or workers=%d\n", __func__, + ssovf_log_dbg("%s: Not enough 
ports=%d or workers=%d", __func__, nr_ports, rte_lcore_count() - 1); - return TEST_SUCCESS; + return 0; } /* Injects events with m->seqn=0 to total_events */ @@ -923,20 +943,20 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, 0 /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; ret = launch_workers_and_wait(worker_flow_based_pipeline, worker_flow_based_pipeline, total_events, nr_ports, out_sched_type); if (ret) - return TEST_FAILED; + return -1; if (in_sched_type != RTE_SCHED_TYPE_PARALLEL && out_sched_type == RTE_SCHED_TYPE_ATOMIC) { /* Check the events order maintained or not */ return seqn_list_check(total_events); } - return TEST_SUCCESS; + return 0; } @@ -1033,16 +1053,16 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) { + if (seqn_list_update(ev.mbuf->seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { - printf("Failed to update seqn_list\n"); - return TEST_FAILED; + ssovf_log_dbg("Failed to update seqn_list"); + return -1; } } else { - printf("Invalid ev.queue_id = %d\n", ev.queue_id); - return TEST_FAILED; + ssovf_log_dbg("Invalid ev.queue_id = %d", ev.queue_id); + return -1; } } @@ -1058,21 +1078,21 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, uint32_t nr_ports; int ret; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); if (queue_count < 2 || !nr_ports) { - printf("%s: Not enough queues=%d ports=%d 
or workers=%d\n",
+		ssovf_log_dbg("%s: Not enough queues=%d ports=%d or workers=%d",
 			__func__, queue_count, nr_ports,
 			rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1085,20 +1105,20 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 			0 /* port */,
 			total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	ret = launch_workers_and_wait(worker_group_based_pipeline,
 			worker_group_based_pipeline,
 			total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static int
@@ -1201,15 +1221,15 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 	uint32_t nr_ports;
 	int ret;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 			nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1222,7 +1242,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 			0 /* port */,
 			MAX_EVENTS /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
 					0xff /* invalid */);
@@ -1244,7 +1264,7 @@ worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 				&queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1286,7 +1306,7 @@ worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_QUEUE_COUNT,
 				&queue_count), "Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1357,14 +1377,14 @@ test_producer_consumer_ingress_order_test(int (*fn)(void *))
 {
 	uint32_t nr_ports;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT,
 				&nr_ports), "Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		printf("### Not enough cores for %s test.\n", __func__);
-		return TEST_SUCCESS;
+		ssovf_log_dbg("### Not enough cores for %s test.", __func__);
+		return 0;
 	}
 
 	launch_workers_and_wait(worker_ordered_flow_producer, fn,
@@ -1389,86 +1409,107 @@ test_queue_producer_consumer_ingress_order_test(void)
 			worker_group_based_pipeline);
 }
 
-static struct unit_test_suite eventdev_octeontx_testsuite = {
-	.suite_name = "eventdev octeontx unit test suite",
-	.setup = testsuite_setup,
-	.teardown = testsuite_teardown,
-	.unit_test_cases = {
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_single_port_deq),
-		TEST_CASE_ST(eventdev_setup_priority, eventdev_teardown,
-			test_multi_queue_priority),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_multi_port_deq),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_single_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_multi_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_mixed_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_flow_producer_consumer_ingress_order_test),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_producer_consumer_ingress_order_test),
-		/* Tests with dequeue timeout */
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASES_END() /**< NULL terminate unit test array */
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+		int (*test)(void), const char *name)
+{
+	if (setup() < 0) {
+		ssovf_log_selftest("Error setting up test %s", name);
+		unsupported++;
+	} else {
+		if (test() < 0) {
+			failed++;
+			ssovf_log_selftest("%s Failed", name);
+		} else {
+			passed++;
+			ssovf_log_selftest("%s Passed", name);
+		}
 	}
-};
 
-static int
+	total++;
+	tdown();
+}
+
+int
 test_eventdev_octeontx(void)
 {
-	return unit_test_suite_runner(&eventdev_octeontx_testsuite);
-}
+	testsuite_setup();
+
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_single_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_multi_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_single_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_multi_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_mixed_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_flow_producer_consumer_ingress_order_test);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_producer_consumer_ingress_order_test);
+	OCTEONTX_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+			test_multi_queue_priority);
+	OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+
+	ssovf_log_selftest("Total tests : %d", total);
+	ssovf_log_selftest("Passed : %d", passed);
+	ssovf_log_selftest("Failed : %d", failed);
+	ssovf_log_selftest("Not supported : %d", unsupported);
+
+	testsuite_teardown();
+
+	if (failed)
+		return -1;
 
-REGISTER_TEST_COMMAND(eventdev_octeontx_autotest, test_eventdev_octeontx);
+	return 0;
+}
-- 
2.14.1