From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh
To: jerin.jacob@caviumnetworks.com, santosh.shukla@caviumnetworks.com, bruce.richardson@intel.com, harry.van.haaren@intel.com, gage.eads@intel.com, hemant.agrawal@nxp.com, nipun.gupta@nxp.com, liang.j.ma@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh
Date: Thu, 14 Dec 2017 20:31:31 +0530
Message-Id: <20171214150138.25667-5-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171214150138.25667-1-pbhagavatula@caviumnetworks.com>
References: <20171212192713.17620-1-pbhagavatula@caviumnetworks.com> <20171214150138.25667-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2 04/11] event/octeontx: modify octeontx eventdev test
List-Id: DPDK patches and discussions

Modify test_eventdev_octeontx to be a standalone selftest, independent of the test framework.
Signed-off-by: Pavan Nikhilesh --- drivers/event/octeontx/octeontx_evdev_selftest.c | 427 +++++++++++++---------- 1 file changed, 234 insertions(+), 193 deletions(-) diff --git a/drivers/event/octeontx/octeontx_evdev_selftest.c b/drivers/event/octeontx/octeontx_evdev_selftest.c index 8fddb4fd2..3877bca4a 100644 --- a/drivers/event/octeontx/octeontx_evdev_selftest.c +++ b/drivers/event/octeontx/octeontx_evdev_selftest.c @@ -46,12 +46,21 @@ #include #include #include +#include -#include "test.h" +#include "ssovf_evdev.h" #define NUM_PACKETS (1 << 18) #define MAX_EVENTS (16 * 1024) +#define OCTEONTX_TEST_RUN(setup, teardown, test) \ + octeontx_test_run(setup, teardown, test, #test) + +static int total; +static int passed; +static int failed; +static int unsupported; + static int evdev; static struct rte_mempool *eventdev_test_mempool; @@ -79,11 +88,11 @@ static inline int seqn_list_update(int val) { if (seqn_list_index >= NUM_PACKETS) - return TEST_FAILED; + return -1; seqn_list[seqn_list_index++] = val; rte_smp_wmb(); - return TEST_SUCCESS; + return 0; } static inline int @@ -93,11 +102,11 @@ seqn_list_check(int limit) for (i = 0; i < limit; i++) { if (seqn_list[i] != i) { - printf("Seqn mismatch %d %d\n", seqn_list[i], i); - return TEST_FAILED; + ssovf_log_dbg("Seqn mismatch %d %d", seqn_list[i], i); + return -1; } } - return TEST_SUCCESS; + return 0; } struct test_core_param { @@ -114,20 +123,21 @@ testsuite_setup(void) evdev = rte_event_dev_get_dev_id(eventdev_name); if (evdev < 0) { - printf("%d: Eventdev %s not found - creating.\n", + ssovf_log_dbg("%d: Eventdev %s not found - creating.", __LINE__, eventdev_name); if (rte_vdev_init(eventdev_name, NULL) < 0) { - printf("Error creating eventdev %s\n", eventdev_name); - return TEST_FAILED; + ssovf_log_dbg("Error creating eventdev %s", + eventdev_name); + return -1; } evdev = rte_event_dev_get_dev_id(eventdev_name); if (evdev < 0) { - printf("Error finding newly created eventdev\n"); - return TEST_FAILED; + 
ssovf_log_dbg("Error finding newly created eventdev"); + return -1; } } - return TEST_SUCCESS; + return 0; } static void @@ -177,31 +187,34 @@ _eventdev_setup(int mode) 512, /* Use very small mbufs */ rte_socket_id()); if (!eventdev_test_mempool) { - printf("ERROR creating mempool\n"); - return TEST_FAILED; + ssovf_log_dbg("ERROR creating mempool"); + return -1; } ret = rte_event_dev_info_get(evdev, &info); - TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); - TEST_ASSERT(info.max_num_events >= (int32_t)MAX_EVENTS, - "max_num_events=%d < max_events=%d", - info.max_num_events, MAX_EVENTS); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + if (!(info.max_num_events >= (int32_t)MAX_EVENTS)) { + ssovf_log_dbg("ERROR max_num_events=%d < max_events=%d", + info.max_num_events, MAX_EVENTS); + return -1; + } devconf_set_default_sane_values(&dev_conf, &info); if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT) dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT; ret = rte_event_dev_configure(evdev, &dev_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev"); uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); if (mode == TEST_EVENTDEV_SETUP_PRIORITY) { if (queue_count > 8) { - printf("test expects the unique priority per queue\n"); + ssovf_log_dbg( + "test expects the unique priority per queue"); return -ENOTSUP; } @@ -216,35 +229,39 @@ _eventdev_setup(int mode) ret = rte_event_queue_default_conf_get(evdev, i, &queue_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d", + i); queue_conf.priority = i * step; ret = rte_event_queue_setup(evdev, i, &queue_conf); - TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, 
"Failed to setup queue=%d", + i); } } else { /* Configure event queues with default priority */ for (i = 0; i < (int)queue_count; i++) { ret = rte_event_queue_setup(evdev, i, NULL); - TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d", + i); } } /* Configure event ports */ uint32_t port_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), "Port count get failed"); for (i = 0; i < (int)port_count; i++) { ret = rte_event_port_setup(evdev, i, NULL); - TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i); ret = rte_event_port_link(evdev, i, NULL, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", i); + RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d", + i); } ret = rte_event_dev_start(evdev); - TEST_ASSERT_SUCCESS(ret, "Failed to start device"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device"); - return TEST_SUCCESS; + return 0; } static inline int @@ -311,7 +328,7 @@ inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type, struct rte_event ev = {.event = 0, .u64 = 0}; m = rte_pktmbuf_alloc(eventdev_test_mempool); - TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); m->seqn = i; update_event_and_validation_attr(m, &ev, flow_id, event_type, @@ -332,8 +349,8 @@ check_excess_events(uint8_t port) for (i = 0; i < 32; i++) { valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); - TEST_ASSERT_SUCCESS(valid_event, "Unexpected valid event=%d", - ev.mbuf->seqn); + RTE_TEST_ASSERT_SUCCESS(valid_event, + "Unexpected valid event=%d", ev.mbuf->seqn); } return 0; } @@ -346,12 +363,12 @@ generate_random_events(const unsigned int total_events) int ret; uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + 
RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); ret = rte_event_dev_info_get(evdev, &info); - TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info"); for (i = 0; i < total_events; i++) { ret = inject_events( rte_rand() % info.max_event_queue_flows /*flow_id */, @@ -362,7 +379,7 @@ generate_random_events(const unsigned int total_events) 0 /* port */, 1 /* events */); if (ret) - return TEST_FAILED; + return -1; } return ret; } @@ -374,19 +391,19 @@ validate_event(struct rte_event *ev) struct event_attr *attr; attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *); - TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, + RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id, "flow_id mismatch enq=%d deq =%d", attr->flow_id, ev->flow_id); - TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, + RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type, "event_type mismatch enq=%d deq =%d", attr->event_type, ev->event_type); - TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, + RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type, "sub_event_type mismatch enq=%d deq =%d", attr->sub_event_type, ev->sub_event_type); - TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, + RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type, "sched_type mismatch enq=%d deq =%d", attr->sched_type, ev->sched_type); - TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, + RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id, "queue mismatch enq=%d deq =%d", attr->queue, ev->queue_id); return 0; @@ -405,8 +422,8 @@ consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) while (1) { if (++forward_progress_cnt > UINT16_MAX) { - printf("Detected deadlock\n"); - return TEST_FAILED; + ssovf_log_dbg("Detected deadlock"); + return -1; } valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0); @@ -416,11 +433,11 @@ 
consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn) forward_progress_cnt = 0; ret = validate_event(&ev); if (ret) - return TEST_FAILED; + return -1; if (fn != NULL) { ret = fn(index, port, &ev); - TEST_ASSERT_SUCCESS(ret, + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate test specific event"); } @@ -438,8 +455,8 @@ static int validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(port); - TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", index, - ev->mbuf->seqn); + RTE_TEST_ASSERT_EQUAL(index, ev->mbuf->seqn, "index=%d != seqn=%d", + index, ev->mbuf->seqn); return 0; } @@ -456,7 +473,7 @@ test_simple_enqdeq(uint8_t sched_type) 0 /* port */, MAX_EVENTS); if (ret) - return TEST_FAILED; + return -1; return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq); } @@ -491,7 +508,7 @@ test_multi_queue_enq_single_port_deq(void) ret = generate_random_events(MAX_EVENTS); if (ret) - return TEST_FAILED; + return -1; return consume_events(0 /* port */, MAX_EVENTS, NULL); } @@ -514,7 +531,7 @@ static int validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) { uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); uint32_t range = MAX_EVENTS / queue_count; @@ -522,7 +539,7 @@ validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev) expected_val += ev->queue_id; RTE_SET_USED(port); - TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, + RTE_TEST_ASSERT_EQUAL(ev->mbuf->seqn, expected_val, "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d", ev->mbuf->seqn, index, expected_val, range, queue_count, MAX_EVENTS); @@ -538,7 +555,7 @@ test_multi_queue_priority(void) /* See validate_queue_priority() comments for priority validate logic */ uint32_t queue_count; - 
TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); max_evts_roundoff = MAX_EVENTS / queue_count; @@ -548,7 +565,7 @@ test_multi_queue_priority(void) struct rte_event ev = {.event = 0, .u64 = 0}; m = rte_pktmbuf_alloc(eventdev_test_mempool); - TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); + RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed"); m->seqn = i; queue = i % queue_count; @@ -576,7 +593,7 @@ worker_multi_port_fn(void *arg) continue; ret = validate_event(&ev); - TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); + RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event"); rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } @@ -587,27 +604,29 @@ static inline int wait_workers_to_join(int lcore, const rte_atomic32_t *count) { uint64_t cycles, print_cycles; + RTE_SET_USED(count); print_cycles = cycles = rte_get_timer_cycles(); while (rte_eal_get_lcore_state(lcore) != FINISHED) { uint64_t new_cycles = rte_get_timer_cycles(); if (new_cycles - print_cycles > rte_get_timer_hz()) { - printf("\r%s: events %d\n", __func__, + ssovf_log_dbg("\r%s: events %d", __func__, rte_atomic32_read(count)); print_cycles = new_cycles; } if (new_cycles - cycles > rte_get_timer_hz() * 10) { - printf("%s: No schedules for seconds, deadlock (%d)\n", + ssovf_log_dbg( + "%s: No schedules for seconds, deadlock (%d)", __func__, rte_atomic32_read(count)); rte_event_dev_dump(evdev, stdout); cycles = new_cycles; - return TEST_FAILED; + return -1; } } rte_eal_mp_wait_lcore(); - return TEST_SUCCESS; + return 0; } @@ -631,12 +650,12 @@ launch_workers_and_wait(int (*master_worker)(void *), param = malloc(sizeof(struct test_core_param) * nb_workers); if (!param) - return TEST_FAILED; + return -1; ret = rte_event_dequeue_timeout_ticks(evdev, rte_rand() % 10000000/* 10ms */, &dequeue_tmo_ticks); if (ret) - return TEST_FAILED; + return -1; 
param[0].total_events = &atomic_total_events; param[0].sched_type = sched_type; @@ -679,17 +698,17 @@ test_multi_queue_enq_multi_port_deq(void) ret = generate_random_events(total_events); if (ret) - return TEST_FAILED; + return -1; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); if (!nr_ports) { - printf("%s: Not enough ports=%d or workers=%d\n", __func__, + ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__, nr_ports, rte_lcore_count() - 1); - return TEST_SUCCESS; + return 0; } return launch_workers_and_wait(worker_multi_port_fn, @@ -702,7 +721,7 @@ validate_queue_to_port_single_link(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(index); - TEST_ASSERT_EQUAL(port, ev->queue_id, + RTE_TEST_ASSERT_EQUAL(port, ev->queue_id, "queue mismatch enq=%d deq =%d", port, ev->queue_id); return 0; @@ -718,18 +737,19 @@ test_queue_to_port_single_link(void) int i, nr_links, ret; uint32_t port_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count), "Port count get failed"); /* Unlink all connections that created in eventdev_setup */ for (i = 0; i < (int)port_count; i++) { ret = rte_event_port_unlink(evdev, i, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", i); + RTE_TEST_ASSERT(ret >= 0, + "Failed to unlink all queues port=%d", i); } uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); @@ -741,7 +761,7 @@ test_queue_to_port_single_link(void) uint8_t queue = (uint8_t)i; ret = rte_event_port_link(evdev, i, &queue, NULL, 1); - TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); + 
RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i); ret = inject_events( 0x100 /*flow_id */, @@ -752,7 +772,7 @@ test_queue_to_port_single_link(void) i /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; } /* Verify the events generated from correct queue */ @@ -760,10 +780,10 @@ test_queue_to_port_single_link(void) ret = consume_events(i /* port */, total_events, validate_queue_to_port_single_link); if (ret) - return TEST_FAILED; + return -1; } - return TEST_SUCCESS; + return 0; } static int @@ -771,7 +791,7 @@ validate_queue_to_port_multi_link(uint32_t index, uint8_t port, struct rte_event *ev) { RTE_SET_USED(index); - TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), + RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1), "queue mismatch enq=%d deq =%d", port, ev->queue_id); return 0; @@ -789,27 +809,27 @@ test_queue_to_port_multi_link(void) uint32_t nr_queues = 0; uint32_t nr_ports = 0; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues), "Queue count get failed"); - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues), "Queue count get failed"); - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); if (nr_ports < 2) { - printf("%s: Not enough ports to test ports=%d\n", + ssovf_log_dbg("%s: Not enough ports to test ports=%d", __func__, nr_ports); - return TEST_SUCCESS; + return 0; } /* Unlink all connections that created in eventdev_setup */ for (port = 0; port < nr_ports; port++) { ret = rte_event_port_unlink(evdev, port, NULL, 0); - TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", + RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d", port); } @@ -819,7 +839,7 @@ 
test_queue_to_port_multi_link(void) for (queue = 0; queue < nr_queues; queue++) { port = queue & 0x1; ret = rte_event_port_link(evdev, port, &queue, NULL, 1); - TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", + RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d", queue, port); ret = inject_events( @@ -831,7 +851,7 @@ test_queue_to_port_multi_link(void) port /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; if (port == 0) port0_events += total_events; @@ -842,13 +862,13 @@ test_queue_to_port_multi_link(void) ret = consume_events(0 /* port */, port0_events, validate_queue_to_port_multi_link); if (ret) - return TEST_FAILED; + return -1; ret = consume_events(1 /* port */, port1_events, validate_queue_to_port_multi_link); if (ret) - return TEST_FAILED; + return -1; - return TEST_SUCCESS; + return 0; } static int @@ -878,17 +898,17 @@ worker_flow_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.sub_event_type == 1) { /* Events from stage 1*/ - if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) { + if (seqn_list_update(ev.mbuf->seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { - printf("Failed to update seqn_list\n"); - return TEST_FAILED; + ssovf_log_dbg("Failed to update seqn_list"); + return -1; } } else { - printf("Invalid ev.sub_event_type = %d\n", + ssovf_log_dbg("Invalid ev.sub_event_type = %d", ev.sub_event_type); - return TEST_FAILED; + return -1; } } return 0; @@ -902,15 +922,15 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, uint32_t nr_ports; int ret; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); if (!nr_ports) { - printf("%s: Not enough ports=%d or workers=%d\n", __func__, + ssovf_log_dbg("%s: Not enough 
ports=%d or workers=%d", __func__, nr_ports, rte_lcore_count() - 1); - return TEST_SUCCESS; + return 0; } /* Injects events with m->seqn=0 to total_events */ @@ -923,20 +943,20 @@ test_multiport_flow_sched_type_test(uint8_t in_sched_type, 0 /* port */, total_events /* events */); if (ret) - return TEST_FAILED; + return -1; ret = launch_workers_and_wait(worker_flow_based_pipeline, worker_flow_based_pipeline, total_events, nr_ports, out_sched_type); if (ret) - return TEST_FAILED; + return -1; if (in_sched_type != RTE_SCHED_TYPE_PARALLEL && out_sched_type == RTE_SCHED_TYPE_ATOMIC) { /* Check the events order maintained or not */ return seqn_list_check(total_events); } - return TEST_SUCCESS; + return 0; } @@ -1033,16 +1053,16 @@ worker_group_based_pipeline(void *arg) ev.op = RTE_EVENT_OP_FORWARD; rte_event_enqueue_burst(evdev, port, &ev, 1); } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/ - if (seqn_list_update(ev.mbuf->seqn) == TEST_SUCCESS) { + if (seqn_list_update(ev.mbuf->seqn) == 0) { rte_pktmbuf_free(ev.mbuf); rte_atomic32_sub(total_events, 1); } else { - printf("Failed to update seqn_list\n"); - return TEST_FAILED; + ssovf_log_dbg("Failed to update seqn_list"); + return -1; } } else { - printf("Invalid ev.queue_id = %d\n", ev.queue_id); - return TEST_FAILED; + ssovf_log_dbg("Invalid ev.queue_id = %d", ev.queue_id); + return -1; } } @@ -1058,21 +1078,21 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type, uint32_t nr_ports; int ret; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports), "Port count get failed"); nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1); uint32_t queue_count; - TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, + RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count), "Queue count get failed"); if (queue_count < 2 || !nr_ports) { - printf("%s: Not enough queues=%d ports=%d 
or workers=%d\n",
+		ssovf_log_dbg("%s: Not enough queues=%d ports=%d or workers=%d",
 			__func__, queue_count, nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1085,20 +1105,20 @@ test_multiport_queue_sched_type_test(uint8_t in_sched_type,
 				0 /* port */,
 				total_events /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	ret = launch_workers_and_wait(worker_group_based_pipeline,
 			worker_group_based_pipeline,
 			total_events, nr_ports, out_sched_type);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
 			out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
 		/* Check the events order maintained or not */
 		return seqn_list_check(total_events);
 	}
-	return TEST_SUCCESS;
+	return 0;
 }
 
 static int
@@ -1201,15 +1221,15 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 	uint32_t nr_ports;
 	int ret;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
 				"Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (!nr_ports) {
-		printf("%s: Not enough ports=%d or workers=%d\n", __func__,
+		ssovf_log_dbg("%s: Not enough ports=%d or workers=%d", __func__,
 				nr_ports, rte_lcore_count() - 1);
-		return TEST_SUCCESS;
+		return 0;
 	}
 
 	/* Injects events with m->seqn=0 to total_events */
@@ -1222,7 +1242,7 @@ launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
 				0 /* port */,
 				MAX_EVENTS /* events */);
 	if (ret)
-		return TEST_FAILED;
+		return -1;
 
 	return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
 					0xff /* invalid */);
@@ -1244,7 +1264,7 @@ worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
 				"Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1286,7 +1306,7 @@ worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
 	uint16_t valid_event;
 	uint8_t port = param->port;
 	uint32_t queue_count;
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
 				"Queue count get failed");
 	uint8_t nr_queues = queue_count;
@@ -1357,14 +1377,14 @@ test_producer_consumer_ingress_order_test(int (*fn)(void *))
 {
 	uint32_t nr_ports;
 
-	TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
+	RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
 				RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
 				"Port count get failed");
 	nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
 
 	if (rte_lcore_count() < 3 || nr_ports < 2) {
-		printf("### Not enough cores for %s test.\n", __func__);
-		return TEST_SUCCESS;
+		ssovf_log_dbg("### Not enough cores for %s test.", __func__);
+		return 0;
 	}
 
 	launch_workers_and_wait(worker_ordered_flow_producer, fn,
@@ -1389,86 +1409,107 @@ test_queue_producer_consumer_ingress_order_test(void)
 			worker_group_based_pipeline);
 }
 
-static struct unit_test_suite eventdev_octeontx_testsuite = {
-	.suite_name = "eventdev octeontx unit test suite",
-	.setup = testsuite_setup,
-	.teardown = testsuite_teardown,
-	.unit_test_cases = {
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_simple_enqdeq_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_single_port_deq),
-		TEST_CASE_ST(eventdev_setup_priority, eventdev_teardown,
-			test_multi_queue_priority),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_queue_enq_multi_port_deq),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_single_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_to_port_multi_link),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_ordered_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_atomic_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_atomic),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_ordered),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_parallel_to_parallel),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_flow_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_queue_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_multi_port_mixed_max_stages_random_sched_type),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_flow_producer_consumer_ingress_order_test),
-		TEST_CASE_ST(eventdev_setup, eventdev_teardown,
-			test_queue_producer_consumer_ingress_order_test),
-		/* Tests with dequeue timeout */
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_flow_ordered_to_atomic),
-		TEST_CASE_ST(eventdev_setup_dequeue_timeout, eventdev_teardown,
-			test_multi_port_queue_ordered_to_atomic),
-		TEST_CASES_END() /**< NULL terminate unit test array */
+static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
+		int (*test)(void), const char *name)
+{
+	if (setup() < 0) {
+		ssovf_log_selftest("Error setting up test %s", name);
+		unsupported++;
+	} else {
+		if (test() < 0) {
+			failed++;
+			ssovf_log_selftest("%s Failed", name);
+		} else {
+			passed++;
+			ssovf_log_selftest("%s Passed", name);
+		}
 	}
-};
 
-static int
+	total++;
+	tdown();
+}
+
+int
 test_eventdev_octeontx(void)
 {
-	return unit_test_suite_runner(&eventdev_octeontx_testsuite);
-}
+	testsuite_setup();
+
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_simple_enqdeq_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_single_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_queue_enq_multi_port_deq);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_single_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_to_port_multi_link);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_atomic_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_parallel_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_ordered_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_atomic_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_ordered);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_parallel_to_parallel);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_flow_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_queue_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_multi_port_mixed_max_stages_random_sched_type);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_flow_producer_consumer_ingress_order_test);
+	OCTEONTX_TEST_RUN(eventdev_setup, eventdev_teardown,
+			test_queue_producer_consumer_ingress_order_test);
+	OCTEONTX_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
+			test_multi_queue_priority);
+	OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_flow_ordered_to_atomic);
+	OCTEONTX_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
+			test_multi_port_queue_ordered_to_atomic);
+
+	ssovf_log_selftest("Total tests : %d", total);
+	ssovf_log_selftest("Passed : %d", passed);
+	ssovf_log_selftest("Failed : %d", failed);
+	ssovf_log_selftest("Not supported : %d", unsupported);
+
+	testsuite_teardown();
+
+	if (failed)
+		return -1;
 
-REGISTER_TEST_COMMAND(eventdev_octeontx_autotest, test_eventdev_octeontx);
+	return 0;
+}
-- 
2.14.1