From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
To: jerin.jacobkollanukkaran@cavium.com, gage.eads@intel.com,
	harry.van.haaren@intel.com, bruce.richardson@intel.com,
	hemant.agrawal@nxp.com, nipun.gupta@nxp.com, nikhil.rao@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Date: Thu, 30 Nov 2017 12:54:05 +0530
Message-Id: <20171130072406.15605-3-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171130072406.15605-1-pbhagavatula@caviumnetworks.com>
References: <20171130072406.15605-1-pbhagavatula@caviumnetworks.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 3/4] app/eventdev: add perf pipeline test

This is a performance test case that aims to test the following:
1. Measure the end-to-end performance of an event device paired with an
   ethernet device.
2. Maintain packet ordering from Rx to Tx.

The perf pipeline test configures the eventdev with Q queues and P ports,
where Q is nb_ethdev * nb_stages and P is nb_workers. The number of
workers and the number of stages can be selected with the --wlcores and
--stlist application command line arguments, respectively.

The probed ethernet devices act as the producer(s) for this application.
The ethdevs are configured as event Rx adapters, which lets them inject
events into the eventdev; the schedule type of the first stage is taken
from the schedule type list passed through the --stlist command line
argument.

Based on the number of stages to process (selected through --stlist), the
application forwards each event to the next queue in the pipeline. When an
event reaches the last stage, it is enqueued onto an ethdev Tx queue if
its schedule type is ATOMIC; otherwise, to maintain ordering, its schedule
type is set to ATOMIC and it is enqueued back onto the last stage queue.

On packet Tx, the application increments the count of processed events and
prints it once per second.
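For example (illustrative numbers, not from the patch): with 2 ethernet
devices and --stlist=ao (2 stages) on an eventdev that lacks all-types
queues, the test sets up Q = 2 * 2 = 4 stage queues plus 2 per-ethdev
atomic Tx queues, i.e. 6 event queues in total; with 4 worker lcores in
--wlcores, it sets up P = 4 event ports. On a device with all-types event
queues, the 2 extra Tx queues are not needed and Q = 4.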
Note: The --prod_type_ethdev option is mandatory for running the
application.

Example command to run the perf pipeline test:

sudo build/app/dpdk-test-eventdev -c 0xf -s 0x8 --vdev=event_sw0 -- \
	--test=perf_pipeline --wlcore=1 --prod_type_ethdev --stlist=ao

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 app/test-eventdev/Makefile              |   1 +
 app/test-eventdev/test_perf_pipeline.c | 549 ++++++++++++++++++++++++++++++++
 2 files changed, 550 insertions(+)
 create mode 100644 app/test-eventdev/test_perf_pipeline.c

diff --git a/app/test-eventdev/Makefile b/app/test-eventdev/Makefile
index dcb2ac4..9bd8ecd 100644
--- a/app/test-eventdev/Makefile
+++ b/app/test-eventdev/Makefile
@@ -50,5 +50,6 @@ SRCS-y += test_order_atq.c
 SRCS-y += test_perf_common.c
 SRCS-y += test_perf_queue.c
 SRCS-y += test_perf_atq.c
+SRCS-y += test_perf_pipeline.c
 
 include $(RTE_SDK)/mk/rte.app.mk

diff --git a/app/test-eventdev/test_perf_pipeline.c b/app/test-eventdev/test_perf_pipeline.c
new file mode 100644
index 0000000..a4a13f8
--- /dev/null
+++ b/app/test-eventdev/test_perf_pipeline.c
@@ -0,0 +1,549 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium, Inc 2017.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium, Inc nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "test_perf_common.h"
+
+/* See http://dpdk.org/doc/guides/tools/testeventdev.html for test details */
+
+static inline int
+perf_pipeline_nb_event_queues(struct evt_options *opt)
+{
+	uint16_t eth_count = rte_eth_dev_count();
+
+	return (eth_count * opt->nb_stages) +
+		(evt_has_all_types_queue(opt->dev_id) ?
+			0 : eth_count);
+}
+
+static __rte_always_inline void
+perf_pipeline_tx_pkt_safe(struct rte_mbuf *mbuf)
+{
+	while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1)
+		rte_pause();
+}
+
+static __rte_always_inline void
+perf_pipeline_tx_pkt_unsafe(struct rte_mbuf *mbuf, struct test_perf *t)
+{
+	rte_spinlock_t *lk = &t->tx_lk[mbuf->port];
+
+	rte_spinlock_lock(lk);
+	perf_pipeline_tx_pkt_safe(mbuf);
+	rte_spinlock_unlock(lk);
+}
+
+static __rte_always_inline void
+perf_pipeline_tx_unsafe_burst(struct rte_mbuf *mbuf, struct test_perf *t)
+{
+	uint16_t port = mbuf->port;
+	rte_spinlock_t *lk = &t->tx_lk[port];
+
+	rte_spinlock_lock(lk);
+	rte_eth_tx_buffer(port, 0, t->tx_buf[port], mbuf);
+	rte_spinlock_unlock(lk);
+}
+
+static __rte_always_inline void
+perf_pipeline_tx_flush(struct test_perf *t, const uint8_t nb_ports)
+{
+	int i;
+	rte_spinlock_t *lk;
+
+	for (i = 0; i < nb_ports; i++) {
+		lk = &t->tx_lk[i];
+
+		rte_spinlock_lock(lk);
+		rte_eth_tx_buffer_flush(i, 0, t->tx_buf[i]);
+		rte_spinlock_unlock(lk);
+	}
+}
+
+static int
+perf_pipeline_worker_single_stage(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_perf *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	const bool mt_safe = !t->mt_unsafe;
+	const bool atq = evt_has_all_types_queue(dev);
+	struct rte_event ev;
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+			if (mt_safe)
+				perf_pipeline_tx_pkt_safe(ev.mbuf);
+			else
+				perf_pipeline_tx_pkt_unsafe(ev.mbuf, t);
+			w->processed_pkts++;
+		} else {
+			ev.event_type = RTE_EVENT_TYPE_CPU;
+			ev.op = RTE_EVENT_OP_FORWARD;
+			ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+			ev.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+			atq ? 0 : ev.queue_id++; /* !atq: move to Tx queue */
+			while (rte_event_enqueue_burst(dev, port, &ev, 1) != 1)
+				rte_pause();
+		}
+	}
+
+	return 0;
+}
+
+static int
+perf_pipeline_worker_single_stage_burst(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_perf *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	const bool mt_safe = !t->mt_unsafe;
+	const bool atq = evt_has_all_types_queue(dev);
+	struct rte_event ev[BURST_SIZE + 1]; /* +1 keeps prefetch in bounds */
+	const uint16_t nb_ports = rte_eth_dev_count();
+
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			if (!mt_safe)
+				perf_pipeline_tx_flush(t, nb_ports);
+
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+
+				if (mt_safe)
+					perf_pipeline_tx_pkt_safe(ev[i].mbuf);
+				else
+					perf_pipeline_tx_unsafe_burst(
+							ev[i].mbuf, t);
+				ev[i].op = RTE_EVENT_OP_RELEASE;
+				w->processed_pkts++;
+			} else {
+				ev[i].event_type = RTE_EVENT_TYPE_CPU;
+				ev[i].op = RTE_EVENT_OP_FORWARD;
+				ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+				ev[i].priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+				atq ?
+					0 : ev[i].queue_id++;
+			}
+		}
+
+		uint16_t enq;
+
+		enq = rte_event_enqueue_burst(dev, port, ev, nb_rx);
+		while (enq < nb_rx) {
+			enq += rte_event_enqueue_burst(dev, port,
+					ev + enq, nb_rx - enq);
+		}
+	}
+
+	return 0;
+}
+
+static int
+perf_pipeline_worker_multi_stage(void *arg)
+{
+	struct worker_data *w = arg;
+	struct test_perf *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	const bool mt_safe = !t->mt_unsafe;
+	const bool atq = evt_has_all_types_queue(dev);
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = atq ? t->opt->nb_stages :
+		t->opt->nb_stages + 1; /* extra atomic Tx stage when !atq */
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	uint8_t cq_id;
+	struct rte_event ev;
+
+
+	while (t->done == false) {
+		uint16_t event = rte_event_dequeue_burst(dev, port, &ev, 1, 0);
+
+		if (!event) {
+			rte_pause();
+			continue;
+		}
+
+		cq_id = ev.queue_id % nb_stages;
+
+		if (cq_id >= last_queue) {
+			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
+
+				if (mt_safe)
+					perf_pipeline_tx_pkt_safe(ev.mbuf);
+				else
+					perf_pipeline_tx_pkt_unsafe(ev.mbuf, t);
+				w->processed_pkts++;
+				continue;
+			}
+			ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+			ev.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+			atq || !(cq_id == last_queue) ? 0 : ev.queue_id++;
+		} else {
+			ev.queue_id++;
+			ev.sched_type = sched_type_list[cq_id];
+		}
+
+		ev.event_type = RTE_EVENT_TYPE_CPU;
+		ev.op = RTE_EVENT_OP_FORWARD;
+		while (rte_event_enqueue_burst(dev, port, &ev, 1) != 1)
+			rte_pause();
+	}
+	return 0;
+}
+
+static int
+perf_pipeline_worker_multi_stage_burst(void *arg)
+{
+	int i;
+	struct worker_data *w = arg;
+	struct test_perf *t = w->t;
+	const uint8_t dev = w->dev_id;
+	const uint8_t port = w->port_id;
+	uint8_t *const sched_type_list = &t->sched_type_list[0];
+	const bool mt_safe = !t->mt_unsafe;
+	const bool atq = evt_has_all_types_queue(dev);
+	const uint8_t last_queue = t->opt->nb_stages - 1;
+	const uint8_t nb_stages = atq ? t->opt->nb_stages :
+		t->opt->nb_stages + 1;
+	uint8_t cq_id;
+	struct rte_event ev[BURST_SIZE + 1];
+	const uint16_t nb_ports = rte_eth_dev_count();
+
+	RTE_SET_USED(atq);
+	while (t->done == false) {
+		uint16_t nb_rx = rte_event_dequeue_burst(dev, port, ev,
+				BURST_SIZE, 0);
+
+		if (!nb_rx) {
+			if (!mt_safe)
+				perf_pipeline_tx_flush(t, nb_ports);
+			rte_pause();
+			continue;
+		}
+
+		for (i = 0; i < nb_rx; i++) {
+			rte_prefetch0(ev[i + 1].mbuf);
+			cq_id = ev[i].queue_id % nb_stages;
+
+			if (cq_id >= last_queue) {
+				if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
+
+					if (mt_safe)
+						perf_pipeline_tx_pkt_safe(
+								ev[i].mbuf);
+					else
+						perf_pipeline_tx_unsafe_burst(
+								ev[i].mbuf, t);
+					ev[i].op = RTE_EVENT_OP_RELEASE;
+					w->processed_pkts++;
+					continue;
+				}
+
+				ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
+				ev[i].priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+				atq || !(cq_id == last_queue) ? 0 :
+					ev[i].queue_id++;
+			} else {
+				ev[i].queue_id++;
+				ev[i].sched_type = sched_type_list[cq_id];
+			}
+
+			ev[i].event_type = RTE_EVENT_TYPE_CPU;
+			ev[i].op = RTE_EVENT_OP_FORWARD;
+		}
+
+		uint16_t enq;
+
+		enq = rte_event_enqueue_burst(dev, port, ev, nb_rx);
+		while (enq < nb_rx) {
+			enq += rte_event_enqueue_burst(dev, port,
+					ev + enq, nb_rx - enq);
+		}
+	}
+	return 0;
+}
+
+static int
+worker_wrapper(void *arg)
+{
+	struct worker_data *w = arg;
+	struct evt_options *opt = w->t->opt;
+	const bool burst = evt_has_burst_mode(w->dev_id);
+	const uint8_t nb_stages = opt->nb_stages;
+	RTE_SET_USED(opt);
+
+	/* allow compiler to optimize */
+	if (nb_stages == 1) {
+		if (!burst)
+			return perf_pipeline_worker_single_stage(arg);
+		else
+			return perf_pipeline_worker_single_stage_burst(arg);
+	} else {
+		if (!burst)
+			return perf_pipeline_worker_multi_stage(arg);
+		else
+			return perf_pipeline_worker_multi_stage_burst(arg);
+	}
+	rte_panic("invalid worker\n");
+}
+
+static int
+perf_pipeline_launch_lcores(struct evt_test *test, struct evt_options *opt)
+{
+	return perf_launch_lcores(test, opt, worker_wrapper);
+}
+
+static int
+perf_pipeline_eventdev_setup(struct evt_test *test, struct evt_options *opt)
+{
+	int ret;
+	int nb_ports;
+	int nb_queues;
+	int nb_stages = opt->nb_stages;
+	uint8_t queue;
+	uint8_t port;
+	uint8_t atq = evt_has_all_types_queue(opt->dev_id);
+	struct test_perf *t = evt_test_priv(test);
+
+	nb_ports = evt_nr_active_lcores(opt->wlcores);
+	nb_queues = rte_eth_dev_count() * (nb_stages);
+	nb_queues += atq ? 0 : rte_eth_dev_count();
+
+	const struct rte_event_dev_config config = {
+			.nb_event_queues = nb_queues,
+			.nb_event_ports = nb_ports,
+			.nb_events_limit = 4096,
+			.nb_event_queue_flows = opt->nb_flows,
+			.nb_event_port_dequeue_depth = 128,
+			.nb_event_port_enqueue_depth = 128,
+	};
+
+	ret = rte_event_dev_configure(opt->dev_id, &config);
+	if (ret) {
+		evt_err("failed to configure eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	struct rte_event_queue_conf q_conf = {
+			.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
+			.nb_atomic_flows = opt->nb_flows,
+			.nb_atomic_order_sequences = opt->nb_flows,
+	};
+	/* queue configurations */
+	for (queue = 0; queue < nb_queues; queue++) {
+		if (atq) {
+			q_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
+		} else {
+			uint8_t slot;
+
+			slot = queue % (nb_stages + 1);
+			q_conf.schedule_type = slot == nb_stages ?
+				RTE_SCHED_TYPE_ATOMIC :
+				opt->sched_type_list[slot];
+		}
+
+		ret = rte_event_queue_setup(opt->dev_id, queue, &q_conf);
+		if (ret) {
+			evt_err("failed to setup queue=%d", queue);
+			return ret;
+		}
+	}
+
+	/* port configuration */
+	const struct rte_event_port_conf p_conf = {
+			.dequeue_depth = opt->wkr_deq_dep,
+			.enqueue_depth = 64,
+			.new_event_threshold = 4096,
+	};
+
+	/* setup one port per worker, linking to all queues */
+	for (port = 0; port < evt_nr_active_lcores(opt->wlcores); port++) {
+		struct worker_data *w = &t->worker[port];
+
+		w->dev_id = opt->dev_id;
+		w->port_id = port;
+		w->t = t;
+		w->processed_pkts = 0;
+		w->latency = 0;
+
+		ret = rte_event_port_setup(opt->dev_id, port, &p_conf);
+		if (ret) {
+			evt_err("failed to setup port %d", port);
+			return ret;
+		}
+
+		ret = rte_event_port_link(opt->dev_id, port, NULL, NULL, 0);
+		if (ret != nb_queues) {
+			evt_err("failed to link all queues to port %d", port);
+			return -EINVAL;
+		}
+	}
+
+	ret = perf_event_rx_adapter_setup(opt, atq ? nb_stages : nb_stages + 1,
+			p_conf);
+	if (ret)
+		return ret;
+
+	if (!evt_has_distributed_sched(opt->dev_id)) {
+		uint32_t service_id;
+		rte_event_dev_service_id_get(opt->dev_id, &service_id);
+		ret = evt_service_setup(service_id);
+		if (ret) {
+			evt_err("No service lcore found to run event dev.");
+			return ret;
+		}
+	}
+
+	ret = rte_event_dev_start(opt->dev_id);
+	if (ret) {
+		evt_err("failed to start eventdev %d", opt->dev_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void
+perf_pipeline_opt_dump(struct evt_options *opt)
+{
+	evt_dump_fwd_latency(opt);
+	perf_opt_dump(opt, perf_pipeline_nb_event_queues(opt));
+}
+
+static int
+perf_pipeline_opt_check(struct evt_options *opt)
+{
+	unsigned int lcores;
+	/*
+	 * N worker + 1 master
+	 */
+	lcores = 2;
+
+	if (opt->prod_type == EVT_PROD_TYPE_SYNT) {
+		evt_err("test doesn't support synthetic producers");
+		return -1;
+	}
+
+	if (!rte_eth_dev_count()) {
+		evt_err("test needs minimum 1 ethernet dev");
+		return -1;
+	}
+
+	if (rte_lcore_count() < lcores) {
+		evt_err("test needs minimum %d lcores", lcores);
+		return -1;
+	}
+
+	/* Validate worker lcores */
+	if (evt_lcores_has_overlap(opt->wlcores, rte_get_master_lcore())) {
+		evt_err("worker lcores overlap with master lcore");
+		return -1;
+	}
+	if (evt_has_disabled_lcore(opt->wlcores)) {
+		evt_err("one or more worker lcores are not enabled");
+		return -1;
+	}
+	if (!evt_has_active_lcore(opt->wlcores)) {
+		evt_err("minimum one worker is required");
+		return -1;
+	}
+
+	if (perf_pipeline_nb_event_queues(opt) > EVT_MAX_QUEUES) {
+		evt_err("number of queues exceeds %d", EVT_MAX_QUEUES);
+		return -1;
+	}
+	if (perf_nb_event_ports(opt) > EVT_MAX_PORTS) {
+		evt_err("number of ports exceeds %d", EVT_MAX_PORTS);
+		return -1;
+	}
+
+	if (evt_has_invalid_stage(opt))
+		return -1;
+
+	if (evt_has_invalid_sched_type(opt))
+		return -1;
+
+	return 0;
+}
+
+static bool
+perf_pipeline_capability_check(struct evt_options *opt)
+{
+	struct rte_event_dev_info dev_info;
+
+	rte_event_dev_info_get(opt->dev_id, &dev_info);
+	if (dev_info.max_event_queues < perf_pipeline_nb_event_queues(opt) ||
+			dev_info.max_event_ports <
+				evt_nr_active_lcores(opt->wlcores)) {
+		evt_err("not enough eventdev queues=%d/%d or ports=%d/%d",
+			perf_pipeline_nb_event_queues(opt),
+			dev_info.max_event_queues,
+			evt_nr_active_lcores(opt->wlcores),
+			dev_info.max_event_ports);
+		return false;
+	}
+
+	return true;
+}
+
+static const struct evt_test_ops perf_pipeline = {
+	.cap_check          = perf_pipeline_capability_check,
+	.opt_check          = perf_pipeline_opt_check,
+	.opt_dump           = perf_pipeline_opt_dump,
+	.test_setup         = perf_test_setup,
+	.mempool_setup      = perf_mempool_setup,
+	.ethdev_setup       = perf_ethdev_setup,
+	.eventdev_setup     = perf_pipeline_eventdev_setup,
+	.launch_lcores      = perf_pipeline_launch_lcores,
+	.eventdev_destroy   = perf_eventdev_destroy,
+	.mempool_destroy    = perf_mempool_destroy,
+	.ethdev_destroy     = perf_ethdev_destroy,
+	.test_result        = perf_test_result,
+	.test_destroy       = perf_test_destroy,
+};
+
+EVT_TEST_REGISTER(perf_pipeline);
-- 
2.7.4