From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
To: jerin.jacob@caviumnetworks.com, nikhil.rao@intel.com,
	harry.van.haaren@intel.com, anoob.joseph@caviumnetworks.com
Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Date: Wed, 5 Sep 2018 19:15:54 +0530
Message-Id: <20180905134554.25243-1-pbhagavatula@caviumnetworks.com>
X-Mailer: git-send-email 2.18.0
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH] examples/eventdev_pipeline: add Tx adapter support
List-Id: DPDK patches and discussions <dev.dpdk.org>

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 This patch depends on the following series:
 http://patches.dpdk.org/project/dpdk/list/?series=1121

 examples/eventdev_pipeline/main.c             |  62 ++--
 examples/eventdev_pipeline/pipeline_common.h  |  31 +-
 .../pipeline_worker_generic.c                 | 273 +++++-------------
 .../eventdev_pipeline/pipeline_worker_tx.c    | 130 +++++----
 4 files changed, 186 insertions(+), 310 deletions(-)
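 Note for reviewers (this sits between the '---' marker and the diff, so
 git-am ignores it): the example drops the dedicated consumer core that
 called rte_eth_tx_burst() and moves transmission to the eventdev Tx
 adapter. A minimal sketch of the API flow under that assumption follows;
 the helper names, ids, and the fixed Tx queue 0 are illustrative only,
 and error handling is trimmed:

    #include <rte_eventdev.h>
    #include <rte_event_eth_tx_adapter.h>
    #include <rte_pause.h>

    /* Control path: create the adapter, register every ethdev's Tx
     * queues with it (queue id -1 means "all queues"), fetch the
     * adapter's event port and, for the generic pipeline, link that
     * port to the SINGLE_LINK Tx queue, then start the adapter.
     */
    static void
    tx_adapter_setup_sketch(uint8_t evdev_id, uint8_t adptr_id,
                uint8_t tx_queue_id, uint16_t nb_eth_ports,
                struct rte_event_port_conf *p_conf)
    {
        uint8_t tx_port_id;
        uint16_t i;

        rte_event_eth_tx_adapter_create(adptr_id, evdev_id, p_conf);
        for (i = 0; i < nb_eth_ports; i++)
            rte_event_eth_tx_adapter_queue_add(adptr_id, i, -1);
        rte_event_eth_tx_adapter_event_port_get(adptr_id, &tx_port_id);
        rte_event_port_link(evdev_id, tx_port_id, &tx_queue_id, NULL, 1);
        rte_event_eth_tx_adapter_start(adptr_id);
    }

    /* Data path (eventdevs with the INTERNAL_PORT capability): stamp
     * the target ethdev Tx queue on the mbuf and hand the event
     * straight to the adapter, as worker_tx_pkt() does in the patch.
     */
    static inline void
    tx_enqueue_sketch(uint8_t evdev_id, uint8_t port_id, struct rte_event *ev)
    {
        rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
        while (!rte_event_eth_tx_adapter_enqueue(evdev_id, port_id, ev, 1))
            rte_pause();
    }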
diff --git a/examples/eventdev_pipeline/main.c b/examples/eventdev_pipeline/main.c
index 700bc696f..95531150b 100644
--- a/examples/eventdev_pipeline/main.c
+++ b/examples/eventdev_pipeline/main.c
@@ -26,20 +26,6 @@ core_in_use(unsigned int lcore_id) {
 		fdata->tx_core[lcore_id] || fdata->worker_core[lcore_id]);
 }
 
-static void
-eth_tx_buffer_retry(struct rte_mbuf **pkts, uint16_t unsent,
-			void *userdata)
-{
-	int port_id = (uintptr_t) userdata;
-	unsigned int _sent = 0;
-
-	do {
-		/* Note: hard-coded TX queue */
-		_sent += rte_eth_tx_burst(port_id, 0, &pkts[_sent],
-					  unsent - _sent);
-	} while (_sent != unsent);
-}
-
 /*
  * Parse the coremask given as argument (hexadecimal string) and fill
  * the global configuration (core role and core count) with the parsed
@@ -263,6 +249,7 @@ parse_app_args(int argc, char **argv)
 static inline int
 port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 {
+	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
 			.mq_mode = ETH_MQ_RX_RSS,
@@ -291,6 +278,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	rx_conf = dev_info.default_rxconf;
+	rx_conf.offloads = port_conf.rxmode.offloads;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
@@ -311,7 +300,8 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	/* Allocate and set up 1 RX queue per Ethernet port. */
 	for (q = 0; q < rx_rings; q++) {
 		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
-				rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+				rte_eth_dev_socket_id(port), &rx_conf,
+				mbuf_pool);
 		if (retval < 0)
 			return retval;
 	}
@@ -350,7 +340,7 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 static int
 init_ports(uint16_t num_ports)
 {
-	uint16_t portid, i;
+	uint16_t portid;
 
 	if (!cdata.num_mbuf)
 		cdata.num_mbuf = 16384 * num_ports;
@@ -367,36 +357,26 @@ init_ports(uint16_t num_ports)
 			rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu16 "\n",
 					portid);
 
-	RTE_ETH_FOREACH_DEV(i) {
-		void *userdata = (void *)(uintptr_t) i;
-		fdata->tx_buf[i] =
-			rte_malloc(NULL, RTE_ETH_TX_BUFFER_SIZE(32), 0);
-		if (fdata->tx_buf[i] == NULL)
-			rte_panic("Out of memory\n");
-		rte_eth_tx_buffer_init(fdata->tx_buf[i], 32);
-		rte_eth_tx_buffer_set_err_callback(fdata->tx_buf[i],
-				eth_tx_buffer_retry,
-				userdata);
-	}
-
 	return 0;
 }
 
 static void
 do_capability_setup(uint8_t eventdev_id)
 {
+	int ret;
 	uint16_t i;
-	uint8_t mt_unsafe = 0;
+	uint8_t generic_pipeline = 0;
 	uint8_t burst = 0;
 
 	RTE_ETH_FOREACH_DEV(i) {
-		struct rte_eth_dev_info dev_info;
-		memset(&dev_info, 0, sizeof(struct rte_eth_dev_info));
-
-		rte_eth_dev_info_get(i, &dev_info);
-		/* Check if it is safe ask worker to tx. */
-		mt_unsafe |= !(dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_MT_LOCKFREE);
+		uint32_t caps = 0;
+
+		ret = rte_event_eth_tx_adapter_caps_get(eventdev_id, i, &caps);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				"Invalid capability for Tx adptr port %d\n", i);
+		generic_pipeline |= !(caps &
+				RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
 	}
 
 	struct rte_event_dev_info eventdev_info;
@@ -406,10 +386,10 @@ do_capability_setup(uint8_t eventdev_id)
 	burst = eventdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE ?
 		1 : 0;
 
-	if (mt_unsafe)
+	if (generic_pipeline)
 		set_worker_generic_setup_data(&fdata->cap, burst);
 	else
-		set_worker_tx_setup_data(&fdata->cap, burst);
+		set_worker_tx_enq_setup_data(&fdata->cap, burst);
 }
 
 static void
@@ -499,7 +479,7 @@ main(int argc, char **argv)
 	if (worker_data == NULL)
 		rte_panic("rte_calloc failed\n");
 
-	int dev_id = fdata->cap.evdev_setup(&cons_data, worker_data);
+	int dev_id = fdata->cap.evdev_setup(worker_data);
 	if (dev_id < 0)
 		rte_exit(EXIT_FAILURE, "Error setting up eventdev\n");
@@ -524,8 +504,8 @@ main(int argc, char **argv)
 
 		if (fdata->tx_core[lcore_id])
 			printf(
-				"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
-				__func__, lcore_id, cons_data.port_id);
+				"[%s()] lcore %d executing NIC Tx\n",
+				__func__, lcore_id);
 
 		if (fdata->sched_core[lcore_id])
 			printf("[%s()] lcore %d executing scheduler\n",
diff --git a/examples/eventdev_pipeline/pipeline_common.h b/examples/eventdev_pipeline/pipeline_common.h
index 9703396f8..a6cc912fb 100644
--- a/examples/eventdev_pipeline/pipeline_common.h
+++ b/examples/eventdev_pipeline/pipeline_common.h
@@ -16,6 +16,7 @@
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
 #include <rte_event_eth_rx_adapter.h>
+#include <rte_event_eth_tx_adapter.h>
 #include <rte_service.h>
 #include <rte_service_component.h>
@@ -23,38 +24,30 @@
 #define BATCH_SIZE 16
 #define MAX_NUM_CORE 64
 
-struct cons_data {
-	uint8_t dev_id;
-	uint8_t port_id;
-	uint8_t release;
-} __rte_cache_aligned;
-
 struct worker_data {
 	uint8_t dev_id;
 	uint8_t port_id;
 } __rte_cache_aligned;
 
 typedef int (*worker_loop)(void *);
-typedef int (*consumer_loop)(void);
 typedef void (*schedule_loop)(unsigned int);
-typedef int (*eventdev_setup)(struct cons_data *, struct worker_data *);
-typedef void (*rx_adapter_setup)(uint16_t nb_ports);
+typedef int (*eventdev_setup)(struct worker_data *);
+typedef void (*adapter_setup)(uint16_t nb_ports);
 typedef void (*opt_check)(void);
 
 struct setup_data {
 	worker_loop worker;
-	consumer_loop consumer;
 	schedule_loop scheduler;
 	eventdev_setup evdev_setup;
-	rx_adapter_setup adptr_setup;
+	adapter_setup adptr_setup;
 	opt_check check_opt;
 };
 
 struct fastpath_data {
 	volatile int done;
-	uint32_t tx_lock;
 	uint32_t evdev_service_id;
 	uint32_t rxadptr_service_id;
+	uint32_t txadptr_service_id;
 	bool rx_single;
 	bool tx_single;
 	bool sched_single;
@@ -62,7 +55,6 @@ struct fastpath_data {
 	unsigned int tx_core[MAX_NUM_CORE];
 	unsigned int sched_core[MAX_NUM_CORE];
 	unsigned int worker_core[MAX_NUM_CORE];
-	struct rte_eth_dev_tx_buffer *tx_buf[RTE_MAX_ETHPORTS];
 	struct setup_data cap;
 } __rte_cache_aligned;
@@ -88,6 +80,8 @@ struct config_data {
 	int16_t next_qid[MAX_NUM_STAGES+2];
 	int16_t qid[MAX_NUM_STAGES];
 	uint8_t rx_adapter_id;
+	uint8_t tx_adapter_id;
+	uint8_t tx_queue_id;
 	uint64_t worker_lcore_mask;
 	uint64_t rx_lcore_mask;
 	uint64_t tx_lcore_mask;
@@ -99,8 +93,6 @@ struct port_link {
 	uint8_t priority;
 };
 
-struct cons_data cons_data;
-
 struct fastpath_data *fdata;
 struct config_data cdata;
@@ -142,12 +134,11 @@ schedule_devices(unsigned int lcore_id)
 		}
 	}
 
-	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
-	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
-		fdata->cap.consumer();
-		rte_atomic32_clear((rte_atomic32_t *)&(fdata->tx_lock));
+	if (fdata->tx_core[lcore_id]) {
+		rte_service_run_iter_on_app_lcore(fdata->txadptr_service_id,
+				!fdata->tx_single);
 	}
 }
 
 void set_worker_generic_setup_data(struct setup_data *caps, bool burst);
-void set_worker_tx_setup_data(struct setup_data *caps, bool burst);
+void set_worker_tx_enq_setup_data(struct setup_data *caps, bool burst);
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 2215e9ebe..a355c23a1 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -119,153 +119,13 @@ worker_generic_burst(void *arg)
 	return 0;
 }
 
-static __rte_always_inline int
-consumer(void)
-{
-	const uint64_t freq_khz = rte_get_timer_hz() / 1000;
-	struct rte_event packet;
-
-	static uint64_t received;
-	static uint64_t last_pkts;
-	static uint64_t last_time;
-	static uint64_t start_time;
-	int i;
-	uint8_t dev_id = cons_data.dev_id;
-	uint8_t port_id = cons_data.port_id;
-
-	do {
-		uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
-				&packet, 1, 0);
-
-		if (n == 0) {
-			RTE_ETH_FOREACH_DEV(i)
-				rte_eth_tx_buffer_flush(i, 0, fdata->tx_buf[i]);
-			return 0;
-		}
-		if (start_time == 0)
-			last_time = start_time = rte_get_timer_cycles();
-
-		received++;
-		uint8_t outport = packet.mbuf->port;
-
-		exchange_mac(packet.mbuf);
-		rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
-				packet.mbuf);
-
-		if (cons_data.release)
-			rte_event_enqueue_burst(dev_id, port_id,
-					&packet, n);
-
-		/* Print out mpps every 1<22 packets */
-		if (!cdata.quiet && received >= last_pkts + (1<<22)) {
-			const uint64_t now = rte_get_timer_cycles();
-			const uint64_t total_ms = (now - start_time) / freq_khz;
-			const uint64_t delta_ms = (now - last_time) / freq_khz;
-			uint64_t delta_pkts = received - last_pkts;
-
-			printf("# %s RX=%"PRIu64", time %"PRIu64 "ms, "
-					"avg %.3f mpps [current %.3f mpps]\n",
-					__func__,
-					received,
-					total_ms,
-					received / (total_ms * 1000.0),
-					delta_pkts / (delta_ms * 1000.0));
-			last_pkts = received;
-			last_time = now;
-		}
-
-		cdata.num_packets--;
-		if (cdata.num_packets <= 0)
-			fdata->done = 1;
-	/* Be stuck in this loop if single. */
-	} while (!fdata->done && fdata->tx_single);
-
-	return 0;
-}
-
-static __rte_always_inline int
-consumer_burst(void)
-{
-	const uint64_t freq_khz = rte_get_timer_hz() / 1000;
-	struct rte_event packets[BATCH_SIZE];
-
-	static uint64_t received;
-	static uint64_t last_pkts;
-	static uint64_t last_time;
-	static uint64_t start_time;
-	unsigned int i, j;
-	uint8_t dev_id = cons_data.dev_id;
-	uint8_t port_id = cons_data.port_id;
-
-	do {
-		uint16_t n = rte_event_dequeue_burst(dev_id, port_id,
-				packets, RTE_DIM(packets), 0);
-
-		if (n == 0) {
-			RTE_ETH_FOREACH_DEV(j)
-				rte_eth_tx_buffer_flush(j, 0, fdata->tx_buf[j]);
-			return 0;
-		}
-		if (start_time == 0)
-			last_time = start_time = rte_get_timer_cycles();
-
-		received += n;
-		for (i = 0; i < n; i++) {
-			uint8_t outport = packets[i].mbuf->port;
-
-			exchange_mac(packets[i].mbuf);
-			rte_eth_tx_buffer(outport, 0, fdata->tx_buf[outport],
-					packets[i].mbuf);
-
-			packets[i].op = RTE_EVENT_OP_RELEASE;
-		}
-
-		if (cons_data.release) {
-			uint16_t nb_tx;
-
-			nb_tx = rte_event_enqueue_burst(dev_id, port_id,
-					packets, n);
-			while (nb_tx < n)
-				nb_tx += rte_event_enqueue_burst(dev_id,
-						port_id, packets + nb_tx,
-						n - nb_tx);
-		}
-
-		/* Print out mpps every 1<22 packets */
-		if (!cdata.quiet && received >= last_pkts + (1<<22)) {
-			const uint64_t now = rte_get_timer_cycles();
-			const uint64_t total_ms = (now - start_time) / freq_khz;
-			const uint64_t delta_ms = (now - last_time) / freq_khz;
-			uint64_t delta_pkts = received - last_pkts;
-
-			printf("# consumer RX=%"PRIu64", time %"PRIu64 "ms, "
-					"avg %.3f mpps [current %.3f mpps]\n",
-					received,
-					total_ms,
-					received / (total_ms * 1000.0),
-					delta_pkts / (delta_ms * 1000.0));
-			last_pkts = received;
-			last_time = now;
-		}
-
-		cdata.num_packets -= n;
-		if (cdata.num_packets <= 0)
-			fdata->done = 1;
-	/* Be stuck in this loop if single. */
-	} while (!fdata->done && fdata->tx_single);
-
-	return 0;
-}
-
 static int
-setup_eventdev_generic(struct cons_data *cons_data,
-		struct worker_data *worker_data)
+setup_eventdev_generic(struct worker_data *worker_data)
 {
 	const uint8_t dev_id = 0;
 	/* +1 stages is for a SINGLE_LINK TX stage */
 	const uint8_t nb_queues = cdata.num_stages + 1;
-	/* + 1 is one port for consumer */
-	const uint8_t nb_ports = cdata.num_workers + 1;
+	const uint8_t nb_ports = cdata.num_workers;
 	struct rte_event_dev_config config = {
 		.nb_event_queues = nb_queues,
 		.nb_event_ports = nb_ports,
@@ -285,11 +145,6 @@ setup_eventdev_generic(struct cons_data *cons_data,
 		.nb_atomic_flows = 1024,
 		.nb_atomic_order_sequences = 1024,
 	};
-	struct rte_event_port_conf tx_p_conf = {
-		.dequeue_depth = 128,
-		.enqueue_depth = 128,
-		.new_event_threshold = 4096,
-	};
 	struct rte_event_queue_conf tx_q_conf = {
 		.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
 		.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
@@ -297,7 +152,6 @@ setup_eventdev_generic(struct cons_data *cons_data,
 	struct port_link worker_queues[MAX_NUM_STAGES];
 	uint8_t disable_implicit_release;
-	struct port_link tx_queue;
 	unsigned int i;
 
 	int ret, ndev = rte_event_dev_count();
@@ -314,7 +168,6 @@ setup_eventdev_generic(struct cons_data *cons_data,
 			RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE);
 	wkr_p_conf.disable_implicit_release = disable_implicit_release;
-	tx_p_conf.disable_implicit_release = disable_implicit_release;
 
 	if (dev_info.max_event_port_dequeue_depth <
 			config.nb_event_port_dequeue_depth)
@@ -372,8 +225,7 @@ setup_eventdev_generic(struct cons_data *cons_data,
 		printf("%d: error creating qid %d\n", __LINE__, i);
 		return -1;
 	}
-	tx_queue.queue_id = i;
-	tx_queue.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+	cdata.tx_queue_id = i;
 
 	if (wkr_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
 		wkr_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
@@ -403,26 +255,6 @@ setup_eventdev_generic(struct cons_data *cons_data,
 		w->port_id = i;
 	}
 
-	if (tx_p_conf.dequeue_depth > config.nb_event_port_dequeue_depth)
-		tx_p_conf.dequeue_depth = config.nb_event_port_dequeue_depth;
-	if (tx_p_conf.enqueue_depth > config.nb_event_port_enqueue_depth)
-		tx_p_conf.enqueue_depth = config.nb_event_port_enqueue_depth;
-
-	/* port for consumer, linked to TX queue */
-	if (rte_event_port_setup(dev_id, i, &tx_p_conf) < 0) {
-		printf("Error setting up port %d\n", i);
-		return -1;
-	}
-	if (rte_event_port_link(dev_id, i, &tx_queue.queue_id,
-				&tx_queue.priority, 1) != 1) {
-		printf("%d: error creating link for port %d\n",
-				__LINE__, i);
-		return -1;
-	}
-	*cons_data = (struct cons_data){.dev_id = dev_id,
-					.port_id = i,
-					.release = disable_implicit_release };
-
 	ret = rte_event_dev_service_id_get(dev_id,
 				&fdata->evdev_service_id);
 	if (ret != -ESRCH && ret != 0) {
@@ -431,76 +263,107 @@ setup_eventdev_generic(struct cons_data *cons_data,
 	}
 	rte_service_runstate_set(fdata->evdev_service_id, 1);
 	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
-	if (rte_event_dev_start(dev_id) < 0) {
-		printf("Error starting eventdev\n");
-		return -1;
-	}
 
 	return dev_id;
 }
 
 static void
-init_rx_adapter(uint16_t nb_ports)
+init_adapters(uint16_t nb_ports)
 {
 	int i;
 	int ret;
+	uint8_t tx_port_id = 0;
 	uint8_t evdev_id = 0;
 	struct rte_event_dev_info dev_info;
 
 	ret = rte_event_dev_info_get(evdev_id, &dev_info);
 
-	struct rte_event_port_conf rx_p_conf = {
+	struct rte_event_port_conf adptr_p_conf = {
 		.dequeue_depth = 8,
 		.enqueue_depth = 8,
 		.new_event_threshold = 1200,
 	};
-	if (rx_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
-		rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
-	if (rx_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
-		rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	if (adptr_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
+		adptr_p_conf.dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	if (adptr_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
+		adptr_p_conf.enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
 
 	/* Create one adapter for all the ethernet ports. */
 	ret = rte_event_eth_rx_adapter_create(cdata.rx_adapter_id, evdev_id,
-			&rx_p_conf);
+			&adptr_p_conf);
 	if (ret)
 		rte_exit(EXIT_FAILURE, "failed to create rx adapter[%d]",
 				cdata.rx_adapter_id);
 
+	ret = rte_event_eth_tx_adapter_create(cdata.tx_adapter_id, evdev_id,
+			&adptr_p_conf);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "failed to create tx adapter[%d]",
+				cdata.tx_adapter_id);
+
 	struct rte_event_eth_rx_adapter_queue_conf queue_conf;
 	memset(&queue_conf, 0, sizeof(queue_conf));
 	queue_conf.ev.sched_type = cdata.queue_type;
 	queue_conf.ev.queue_id = cdata.qid[0];
 
 	for (i = 0; i < nb_ports; i++) {
-		uint32_t cap;
-
-		ret = rte_event_eth_rx_adapter_caps_get(evdev_id, i, &cap);
-		if (ret)
-			rte_exit(EXIT_FAILURE,
-					"failed to get event rx adapter "
-					"capabilities");
-
 		ret = rte_event_eth_rx_adapter_queue_add(cdata.rx_adapter_id, i,
 				-1, &queue_conf);
 		if (ret)
 			rte_exit(EXIT_FAILURE,
 					"Failed to add queues to Rx adapter");
+
+		ret = rte_event_eth_tx_adapter_queue_add(cdata.tx_adapter_id, i,
+				-1);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"Failed to add queues to Tx adapter");
 	}
 
+	ret = rte_event_eth_tx_adapter_event_port_get(cdata.tx_adapter_id,
+			&tx_port_id);
+	if (ret)
+		rte_exit(EXIT_FAILURE,
+				"Failed to get Tx adapter port id");
+	ret = rte_event_port_link(evdev_id, tx_port_id, &cdata.tx_queue_id,
+			NULL, 1);
+	if (ret != 1)
+		rte_exit(EXIT_FAILURE,
+				"Unable to link Tx adapter port to Tx queue");
+
 	ret = rte_event_eth_rx_adapter_service_id_get(cdata.rx_adapter_id,
 				&fdata->rxadptr_service_id);
 	if (ret != -ESRCH && ret != 0) {
 		rte_exit(EXIT_FAILURE,
-			"Error getting the service ID for sw eventdev\n");
+			"Error getting the service ID for Rx adapter\n");
 	}
 	rte_service_runstate_set(fdata->rxadptr_service_id, 1);
 	rte_service_set_runstate_mapped_check(fdata->rxadptr_service_id, 0);
 
+	ret = rte_event_eth_tx_adapter_service_id_get(cdata.tx_adapter_id,
+			&fdata->txadptr_service_id);
+	if (ret != -ESRCH && ret != 0) {
+		rte_exit(EXIT_FAILURE,
+			"Error getting the service ID for Tx adapter\n");
+	}
+	rte_service_runstate_set(fdata->txadptr_service_id, 1);
+	rte_service_set_runstate_mapped_check(fdata->txadptr_service_id, 0);
+
 	ret = rte_event_eth_rx_adapter_start(cdata.rx_adapter_id);
 	if (ret)
 		rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
 				cdata.rx_adapter_id);
+
+	ret = rte_event_eth_tx_adapter_start(cdata.tx_adapter_id);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+				cdata.tx_adapter_id);
+
+	if (rte_event_dev_start(evdev_id) < 0)
+		rte_exit(EXIT_FAILURE, "Error starting eventdev");
 }
 
 static void
@@ -510,6 +373,8 @@ generic_opt_check(void)
 	int ret;
 	uint32_t cap = 0;
 	uint8_t rx_needed = 0;
+	uint8_t tx_needed = 0;
+	uint8_t sched_needed = 0;
 	struct rte_event_dev_info eventdev_info;
 
 	memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));
@@ -519,6 +384,8 @@ generic_opt_check(void)
 				RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES))
 		rte_exit(EXIT_FAILURE,
 				"Event dev doesn't support all type queues\n");
queues\n"); + sched_needed = !(eventdev_info.event_dev_cap & + RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED); RTE_ETH_FOREACH_DEV(i) { ret = rte_event_eth_rx_adapter_caps_get(0, i, &cap); @@ -527,13 +394,19 @@ generic_opt_check(void) "failed to get event rx adapter capabilities"); rx_needed |= !(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT); + + ret = rte_event_eth_tx_adapter_caps_get(0, i, &cap); + if (ret) + rte_exit(EXIT_FAILURE, + "failed to get event tx adapter capabilities"); + tx_needed |= + !(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT); } if (cdata.worker_lcore_mask == 0 || (rx_needed && cdata.rx_lcore_mask == 0) || - cdata.tx_lcore_mask == 0 || (cdata.sched_lcore_mask == 0 - && !(eventdev_info.event_dev_cap & - RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))) { + (tx_needed && cdata.tx_lcore_mask == 0) || + (sched_needed && cdata.sched_lcore_mask == 0)) { printf("Core part of pipeline was not assigned any cores. " "This will stall the pipeline, please check core masks " "(use -h for details on setting core masks):\n" @@ -545,23 +418,27 @@ generic_opt_check(void) rte_exit(-1, "Fix core masks\n"); } - if (eventdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) + if (!sched_needed) memset(fdata->sched_core, 0, sizeof(unsigned int) * MAX_NUM_CORE); + if (!rx_needed) + memset(fdata->rx_core, 0, + sizeof(unsigned int) * MAX_NUM_CORE); + if (!tx_needed) + memset(fdata->tx_core, 0, + sizeof(unsigned int) * MAX_NUM_CORE); } void set_worker_generic_setup_data(struct setup_data *caps, bool burst) { if (burst) { - caps->consumer = consumer_burst; caps->worker = worker_generic_burst; } else { - caps->consumer = consumer; caps->worker = worker_generic; } - caps->adptr_setup = init_rx_adapter; + caps->adptr_setup = init_adapters; caps->scheduler = schedule_devices; caps->evdev_setup = setup_eventdev_generic; caps->check_opt = generic_opt_check; diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c index 3dbde92df..7cd516cd7 100644 --- a/examples/eventdev_pipeline/pipeline_worker_tx.c +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c @@ -36,10 +36,11 @@ worker_event_enqueue_burst(const uint8_t dev, const uint8_t port, } static __rte_always_inline void -worker_tx_pkt(struct rte_mbuf *mbuf) +worker_tx_pkt(const uint8_t dev, const uint8_t port, struct rte_event *ev) { - exchange_mac(mbuf); - while (rte_eth_tx_burst(mbuf->port, 0, &mbuf, 1) != 1) + exchange_mac(ev->mbuf); + rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0); + while (!rte_event_eth_tx_adapter_enqueue(dev, port, ev, 1)) rte_pause(); } @@ -64,7 +65,7 @@ worker_do_tx_single(void *arg) received++; if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) { - worker_tx_pkt(ev.mbuf); + worker_tx_pkt(dev, port, &ev); tx++; continue; } @@ -100,7 +101,7 @@ worker_do_tx_single_atq(void *arg) received++; if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) { - worker_tx_pkt(ev.mbuf); + worker_tx_pkt(dev, port, &ev); tx++; continue; } @@ -141,7 +142,7 @@ worker_do_tx_single_burst(void *arg) rte_prefetch0(ev[i + 1].mbuf); if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) { - worker_tx_pkt(ev[i].mbuf); + worker_tx_pkt(dev, port, &ev[i]); ev[i].op = RTE_EVENT_OP_RELEASE; tx++; @@ -188,7 +189,7 @@ worker_do_tx_single_burst_atq(void *arg) rte_prefetch0(ev[i + 1].mbuf); if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) { - worker_tx_pkt(ev[i].mbuf); + worker_tx_pkt(dev, port, &ev[i]); ev[i].op = RTE_EVENT_OP_RELEASE; tx++; } else @@ -232,7 +233,7 @@ worker_do_tx(void *arg) if (cq_id >= lst_qid) { if (ev.sched_type == 
 			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
-				worker_tx_pkt(ev.mbuf);
+				worker_tx_pkt(dev, port, &ev);
 				tx++;
 				continue;
 			}
@@ -280,7 +281,7 @@ worker_do_tx_atq(void *arg)
 
 		if (cq_id == lst_qid) {
 			if (ev.sched_type == RTE_SCHED_TYPE_ATOMIC) {
-				worker_tx_pkt(ev.mbuf);
+				worker_tx_pkt(dev, port, &ev);
 				tx++;
 				continue;
 			}
@@ -330,7 +331,7 @@ worker_do_tx_burst(void *arg)
 
 			if (cq_id >= lst_qid) {
 				if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
-					worker_tx_pkt(ev[i].mbuf);
+					worker_tx_pkt(dev, port, &ev[i]);
 					tx++;
 					ev[i].op = RTE_EVENT_OP_RELEASE;
 					continue;
@@ -387,7 +388,7 @@ worker_do_tx_burst_atq(void *arg)
 
 			if (cq_id == lst_qid) {
 				if (ev[i].sched_type == RTE_SCHED_TYPE_ATOMIC) {
-					worker_tx_pkt(ev[i].mbuf);
+					worker_tx_pkt(dev, port, &ev[i]);
 					tx++;
 					ev[i].op = RTE_EVENT_OP_RELEASE;
 					continue;
@@ -413,10 +414,8 @@ worker_do_tx_burst_atq(void *arg)
 }
 
 static int
-setup_eventdev_worker_tx(struct cons_data *cons_data,
-		struct worker_data *worker_data)
+setup_eventdev_worker_tx_enq(struct worker_data *worker_data)
 {
-	RTE_SET_USED(cons_data);
 	uint8_t i;
 	const uint8_t atq = cdata.all_type_queues ? 1 : 0;
 	const uint8_t dev_id = 0;
@@ -575,10 +574,9 @@ setup_eventdev_worker_tx(struct cons_data *cons_data,
 	}
 	rte_service_runstate_set(fdata->evdev_service_id, 1);
 	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
-	if (rte_event_dev_start(dev_id) < 0) {
-		printf("Error starting eventdev\n");
-		return -1;
-	}
+
+	if (rte_event_dev_start(dev_id) < 0)
+		rte_exit(EXIT_FAILURE, "Error starting eventdev");
 
 	return dev_id;
 }
@@ -602,7 +600,7 @@ service_rx_adapter(void *arg)
 }
 
 static void
-init_rx_adapter(uint16_t nb_ports)
+init_adapters(uint16_t nb_ports)
 {
 	int i;
 	int ret;
@@ -613,17 +611,18 @@ init_rx_adapter(uint16_t nb_ports)
 	ret = rte_event_dev_info_get(evdev_id, &dev_info);
 	adptr_services = rte_zmalloc(NULL, sizeof(struct rx_adptr_services), 0);
 
-	struct rte_event_port_conf rx_p_conf = {
+	struct rte_event_port_conf adptr_p_conf = {
 		.dequeue_depth = 8,
 		.enqueue_depth = 8,
 		.new_event_threshold = 1200,
 	};
 
-	if (rx_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
-		rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
-	if (rx_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
-		rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
-
+	if (adptr_p_conf.dequeue_depth > dev_info.max_event_port_dequeue_depth)
+		adptr_p_conf.dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	if (adptr_p_conf.enqueue_depth > dev_info.max_event_port_enqueue_depth)
+		adptr_p_conf.enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
 
 	struct rte_event_eth_rx_adapter_queue_conf queue_conf;
 	memset(&queue_conf, 0, sizeof(queue_conf));
@@ -633,11 +632,11 @@ init_rx_adapter(uint16_t nb_ports)
 		uint32_t cap;
 		uint32_t service_id;
 
-		ret = rte_event_eth_rx_adapter_create(i, evdev_id, &rx_p_conf);
+		ret = rte_event_eth_rx_adapter_create(i, evdev_id,
+				&adptr_p_conf);
 		if (ret)
 			rte_exit(EXIT_FAILURE,
-					"failed to create rx adapter[%d]",
-					cdata.rx_adapter_id);
+					"failed to create rx adapter[%d]", i);
 
 		ret = rte_event_eth_rx_adapter_caps_get(evdev_id, i, &cap);
 		if (ret)
@@ -654,7 +653,6 @@ init_rx_adapter(uint16_t nb_ports)
 			rte_exit(EXIT_FAILURE,
 					"Failed to add queues to Rx adapter");
 
-		/* Producer needs to be scheduled. */
 		if (!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT)) {
 			ret = rte_event_eth_rx_adapter_service_id_get(i,
@@ -680,9 +678,29 @@ init_rx_adapter(uint16_t nb_ports)
 		ret = rte_event_eth_rx_adapter_start(i);
 		if (ret)
 			rte_exit(EXIT_FAILURE, "Rx adapter[%d] start failed",
-					cdata.rx_adapter_id);
+					i);
+	}
+
+	/* We already know that the Tx adapter has the INTERNAL port cap. */
+	ret = rte_event_eth_tx_adapter_create(cdata.tx_adapter_id, evdev_id,
+			&adptr_p_conf);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "failed to create tx adapter[%d]",
+				cdata.tx_adapter_id);
+
+	for (i = 0; i < nb_ports; i++) {
+		ret = rte_event_eth_tx_adapter_queue_add(cdata.tx_adapter_id, i,
+				-1);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+					"Failed to add queues to Tx adapter");
 	}
 
+	ret = rte_event_eth_tx_adapter_start(cdata.tx_adapter_id);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "Tx adapter[%d] start failed",
+				cdata.tx_adapter_id);
+
 	if (adptr_services->nb_rx_adptrs) {
 		struct rte_service_spec service;
 
@@ -695,8 +713,7 @@ init_rx_adapter(uint16_t nb_ports)
 				&fdata->rxadptr_service_id);
 		if (ret)
 			rte_exit(EXIT_FAILURE,
-				"Rx adapter[%d] service register failed",
-				cdata.rx_adapter_id);
+				"Rx adapter service register failed");
 
 		rte_service_runstate_set(fdata->rxadptr_service_id, 1);
 		rte_service_component_runstate_set(fdata->rxadptr_service_id,
@@ -708,23 +725,20 @@ init_rx_adapter(uint16_t nb_ports)
 		rte_free(adptr_services);
 	}
 
-	if (!adptr_services->nb_rx_adptrs && fdata->cap.consumer == NULL &&
-			(dev_info.event_dev_cap &
+	if (!adptr_services->nb_rx_adptrs && (dev_info.event_dev_cap &
 			RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
 		fdata->cap.scheduler = NULL;
-
-	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
-		memset(fdata->sched_core, 0,
-				sizeof(unsigned int) * MAX_NUM_CORE);
 }
 
 static void
-worker_tx_opt_check(void)
+worker_tx_enq_opt_check(void)
 {
 	int i;
 	int ret;
 	uint32_t cap = 0;
 	uint8_t rx_needed = 0;
+	uint8_t tx_needed = 0;
+	uint8_t sched_needed = 0;
 	struct rte_event_dev_info eventdev_info;
 
 	memset(&eventdev_info, 0, sizeof(struct rte_event_dev_info));
@@ -734,22 +748,29 @@ worker_tx_opt_check(void)
 				RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES))
 		rte_exit(EXIT_FAILURE,
 				"Event dev doesn't support all type queues\n");
+	sched_needed = !(eventdev_info.event_dev_cap &
+		RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED);
 
 	RTE_ETH_FOREACH_DEV(i) {
 		ret = rte_event_eth_rx_adapter_caps_get(0, i, &cap);
 		if (ret)
 			rte_exit(EXIT_FAILURE,
-					"failed to get event rx adapter "
-					"capabilities");
+				"failed to get event rx adapter capabilities");
 		rx_needed |=
 			!(cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT);
+
+		ret = rte_event_eth_tx_adapter_caps_get(0, i, &cap);
+		if (ret)
+			rte_exit(EXIT_FAILURE,
+				"failed to get event tx adapter capabilities");
+		tx_needed |=
+			!(cap & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT);
 	}
 
 	if (cdata.worker_lcore_mask == 0 ||
 			(rx_needed && cdata.rx_lcore_mask == 0) ||
-			(cdata.sched_lcore_mask == 0 &&
-			 !(eventdev_info.event_dev_cap &
-			   RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))) {
+			(tx_needed && cdata.tx_lcore_mask == 0) ||
+			(sched_needed && cdata.sched_lcore_mask == 0)) {
 		printf("Core part of pipeline was not assigned any cores. "
" "This will stall the pipeline, please check core masks " "(use -h for details on setting core masks):\n" @@ -760,6 +781,16 @@ worker_tx_opt_check(void) cdata.worker_lcore_mask); rte_exit(-1, "Fix core masks\n"); } + + if (!sched_needed) + memset(fdata->sched_core, 0, + sizeof(unsigned int) * MAX_NUM_CORE); + if (!rx_needed) + memset(fdata->rx_core, 0, + sizeof(unsigned int) * MAX_NUM_CORE); + if (!tx_needed) + memset(fdata->tx_core, 0, + sizeof(unsigned int) * MAX_NUM_CORE); } static worker_loop @@ -821,18 +852,15 @@ get_worker_multi_stage(bool burst) } void -set_worker_tx_setup_data(struct setup_data *caps, bool burst) +set_worker_tx_enq_setup_data(struct setup_data *caps, bool burst) { if (cdata.num_stages == 1) caps->worker = get_worker_single_stage(burst); else caps->worker = get_worker_multi_stage(burst); - memset(fdata->tx_core, 0, sizeof(unsigned int) * MAX_NUM_CORE); - - caps->check_opt = worker_tx_opt_check; - caps->consumer = NULL; + caps->check_opt = worker_tx_enq_opt_check; caps->scheduler = schedule_devices; - caps->evdev_setup = setup_eventdev_worker_tx; - caps->adptr_setup = init_rx_adapter; + caps->evdev_setup = setup_eventdev_worker_tx_enq; + caps->adptr_setup = init_adapters; } -- 2.18.0