From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Subject: [PATCH v7 03/11] ethdev: bring in async queue-based flow rules operations
Date: Sat, 19 Feb 2022 06:11:36 +0200
Message-ID: <20220219041144.2145380-4-akozyrev@nvidia.com>
In-Reply-To: <20220219041144.2145380-1-akozyrev@nvidia.com>
References: <20220212041930.1516767-1-akozyrev@nvidia.com>
 <20220219041144.2145380-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

A new, faster, queue-based flow rules management mechanism is needed for
applications offloading rules inside the datapath. This asynchronous and
lockless mechanism frees the CPU for further packet processing and reduces
the performance impact of flow rule creation/destruction on the datapath.

Note that queues are not thread-safe and a queue must be accessed from the
same thread for all of its operations. It is the responsibility of the
application to synchronize access in case of multi-threaded use of the
same queue.

The rte_flow_async_create() function enqueues a flow creation operation on
the requested queue. It benefits from already configured resources and sets
unique values on top of the item and action templates. A flow rule is
enqueued on the specified flow queue and offloaded asynchronously to the
hardware. The function returns immediately to spare the CPU for further
packet processing. The application must invoke the rte_flow_pull() function
to complete the flow rule operation offloading, to clear the queue, and to
receive the operation status. The rte_flow_async_destroy() function
enqueues a flow destruction operation on the requested queue.
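The enqueue/pull contract described above can be sketched as a
self-contained toy model (this is NOT the DPDK implementation: `op_queue`,
`opq_enqueue_create()` and `opq_pull()` are illustrative stand-ins for the
flow queue, rte_flow_async_create() and rte_flow_pull(); completion here is
immediate, whereas real hardware completes asynchronously):

```c
/* Toy model of the enqueue/pull contract: operations are queued with a
 * caller-supplied user_data tag and completed by a later pull() call that
 * reports per-operation status.  Illustrative stand-in only. */
#include <string.h>

enum op_status { OP_SUCCESS, OP_ERROR };

struct op_result {
	enum op_status status;
	void *user_data;	/* returned verbatim, identifies the operation */
};

#define OPQ_SIZE 8

struct op_queue {
	struct op_result pending[OPQ_SIZE];
	unsigned int n_pending;
};

/* Enqueue a "rule creation"; completion is deferred until a later pull.
 * Returns 0 on enqueue success, -1 when the queue overflows (i.e. the
 * application pulled results too late). */
static int
opq_enqueue_create(struct op_queue *q, int rule_ok, void *user_data)
{
	if (q->n_pending == OPQ_SIZE)
		return -1;
	q->pending[q->n_pending].status = rule_ok ? OP_SUCCESS : OP_ERROR;
	q->pending[q->n_pending].user_data = user_data;
	q->n_pending++;
	return 0;
}

/* Pull up to n_res completed operations, mirroring rte_flow_pull()
 * semantics: returns the number of results written to res[]. */
static int
opq_pull(struct op_queue *q, struct op_result res[], unsigned int n_res)
{
	unsigned int n = q->n_pending < n_res ? q->n_pending : n_res;

	memcpy(res, q->pending, n * sizeof(*res));
	q->n_pending -= n;
	memmove(q->pending, q->pending + n, q->n_pending * sizeof(*res));
	return (int)n;
}
```

Note how user_data is the only way for the caller to tell which enqueued
operation a given result belongs to, which is exactly why the real API
threads a user_data pointer through both create/destroy and the pull results.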
Signed-off-by: Alexander Kozyrev
Acked-by: Ori Kam
---
 .../prog_guide/img/rte_flow_async_init.svg  | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst      |   7 +
 lib/ethdev/rte_flow.c                       | 110 +++++-
 lib/ethdev/rte_flow.h                       | 251 +++++++++++++
 lib/ethdev/rte_flow_driver.h                |  35 ++
 lib/ethdev/version.map                      |   4 +
 8 files changed, 1087 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
[SVG markup not reproduced here. The figure shows the initialization
sequence: rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure()
-> rte_flow_pattern_template_create() / rte_flow_actions_template_create()
-> rte_flow_template_table_create() -> rte_eth_dev_start()]

diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
[SVG markup not reproduced here. The figure shows the datapath main loop:
rte_eth_rx_burst() -> analyze packet -> "add new rule?" (yes:
rte_flow_async_create()) -> "destroy the rule?" (yes:
rte_flow_async_destroy()) -> rte_flow_push() -> rte_flow_pull() ->
"more packets?" -> back to rte_eth_rx_burst()]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..436845717f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rules operations via
+queue-based API, see `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
                &actions_templates, nb_actions_templ,
                &error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the app's datapath,
+  packet processing can continue while queue operations are processed by NIC.
+
+- Number of flow queues is configured at initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to NIC in batches.
+
+- Results pulling must be done on time to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- Flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- Application must wait for the creation operation result before enqueueing
+  the deletion operation to make sure the creation is processed by NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_async_create(uint16_t port_id,
+                         uint32_t queue_id,
+                         const struct rte_flow_q_ops_attr *q_ops_attr,
+                         struct rte_flow_template_table *template_table,
+                         const struct rte_flow_item pattern[],
+                         uint8_t pattern_template_index,
+                         const struct rte_flow_action actions[],
+                         uint8_t actions_template_index,
+                         void *user_data,
+                         struct rte_flow_error *error);
+
+A valid handle in case of success is returned. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+   int
+   rte_flow_async_destroy(uint16_t port_id,
+                          uint32_t queue_id,
+                          const struct rte_flow_q_ops_attr *q_ops_attr,
+                          struct rte_flow *flow,
+                          void *user_data,
+                          struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+   int
+   rte_flow_push(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_error *error);
+
+There is the postpone attribute in the queue operation attributes.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+The application must invoke this function to actually push all outstanding
+operations to HW in this case.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operations results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operations statuses.
+
+.. code-block:: c
+
+   int
+   rte_flow_pull(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_q_op_res res[],
+                 uint16_t n_res,
+                 struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during a flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 3a1c2d2d4d..e0549a2da3 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -114,6 +114,13 @@ New Features
     ``rte_flow_pattern_template_destroy``
     and ``rte_flow_actions_template_destroy``.
+* **Added functions for asynchronous flow rules creation/destruction.**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e9f684eedb..4e7b202522 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
 		return -rte_errno;
 	if (likely(!!ops->info_get)) {
 		return flow_err(port_id,
-				ops->info_get(dev, port_info, error),
+				ops->info_get(dev, port_info, queue_info, error),
 				error);
 	}
 	return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@ rte_flow_info_get(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
 	int ret;
 
 	dev->data->flow_configured = 0;
-	if (port_attr == NULL) {
+	if (port_attr == NULL || queue_attr == NULL) {
 		RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
 		return -EINVAL;
 	}
@@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
 	if (unlikely(!ops))
 		return -rte_errno;
 	if (likely(!!ops->configure)) {
-		ret = ops->configure(dev, port_attr, error);
+		ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
 		if (ret == 0)
 			dev->data->flow_configured = 1;
 		return flow_err(port_id, ret, error);
@@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t port_id,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	struct rte_flow *flow;
+
+	if (unlikely(!ops))
+		return NULL;
+	if (likely(!!ops->async_create)) {
+		flow = ops->async_create(dev, queue_id,
+					 q_ops_attr, template_table,
+					 pattern, pattern_template_index,
+					 actions, actions_template_index,
+					 user_data, error);
+		if (flow == NULL)
+			flow_err(port_id, -rte_errno, error);
+		return flow;
+	}
+	rte_flow_error_set(error, ENOTSUP,
+			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+			   NULL, rte_strerror(ENOTSUP));
+	return NULL;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->async_destroy)) {
+		return flow_err(port_id,
+				ops->async_destroy(dev, queue_id,
+						   q_ops_attr, flow,
+						   user_data, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->push)) {
+		return flow_err(port_id,
+				ops->push(dev, queue_id, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->pull)) {
+		ret = ops->pull(dev, queue_id, res, n_res, error);
+		return ret ? ret : flow_err(port_id, ret, error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 776e8ccc11..9e71a576f6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+	/**
+	 * Maximum number of queues for asynchronous operations.
+	 */
+	uint32_t max_nb_queues;
 	/**
 	 * Maximum number of counters.
 	 * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
 	uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+	/**
+	 * Maximum number of operations a queue can hold.
+	 */
+	uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
 * @param[out] error
 *   Perform verbose error reporting if not NULL.
 *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
 		  struct rte_flow_port_info *port_info,
+		  struct rte_flow_queue_info *queue_info,
 		  struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
 	uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queues settings.
+ * A zero value means a default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+	/**
+	 * Number of flow rule operations a queue can hold.
+	 */
+	uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is set in @p nb_queue.
 * @param[out] error
 *   Perform verbose error reporting if not NULL.
 *   PMDs initialize this structure in case of error only.
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
 		   const struct rte_flow_port_attr *port_attr,
+		   uint16_t nb_queue,
+		   const struct rte_flow_queue_attr *queue_attr[],
 		   struct rte_flow_error *error);
 
 /**
@@ -5257,6 +5302,212 @@ rte_flow_template_table_destroy(uint16_t port_id,
 		struct rte_flow_template_table *template_table,
 		struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+	/**
+	 * When set, the requested action will not be sent to the HW
+	 * immediately. The application must call rte_flow_push()
+	 * to actually send it.
+	 */
+	uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule has been populated in HW.
+ *   Only the completion result indicates whether the operation succeeded
+ *   or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+		      uint32_t queue_id,
+		      const struct rte_flow_q_ops_attr *q_ops_attr,
+		      struct rte_flow_template_table *template_table,
+		      const struct rte_flow_item pattern[],
+		      uint8_t pattern_template_index,
+		      const struct rte_flow_action actions[],
+		      uint8_t actions_template_index,
+		      void *user_data,
+		      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * Application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+		       uint32_t queue_id,
+		       const struct rte_flow_q_ops_attr *q_ops_attr,
+		       struct rte_flow *flow,
+		       void *user_data,
+		       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW
+ * to reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+	/**
+	 * The operation was completed successfully.
+	 */
+	RTE_FLOW_Q_OP_SUCCESS,
+	/**
+	 * The operation was not completed successfully.
+	 */
+	RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+	/**
+	 * Returns the status of the operation that this completion signals.
+	 */
+	enum rte_flow_q_op_status status;
+	/**
+	 * The user data that will be returned on the completion events.
+	 */
+	void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull a rte flow operation.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operation.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+	      uint32_t queue_id,
+	      struct rte_flow_q_op_res res[],
+	      uint16_t n_res,
+	      struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..332783cf78 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
 	int (*info_get)
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_port_info *port_info,
+		 struct rte_flow_queue_info *queue_info,
 		 struct rte_flow_error *err);
 	/** See rte_flow_configure() */
 	int (*configure)
 		(struct rte_eth_dev *dev,
 		 const struct rte_flow_port_attr *port_attr,
+		 uint16_t nb_queue,
+		 const struct rte_flow_queue_attr *queue_attr[],
 		 struct rte_flow_error *err);
 	/** See rte_flow_pattern_template_create() */
 	struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
 		(struct rte_eth_dev *dev,
 		 struct rte_flow_template_table *template_table,
 		 struct rte_flow_error *err);
+	/** See rte_flow_async_create() */
+	struct rte_flow *(*async_create)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow_template_table *template_table,
+		 const struct rte_flow_item pattern[],
+		 uint8_t pattern_template_index,
+		 const struct rte_flow_action actions[],
+		 uint8_t actions_template_index,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_async_destroy() */
+	int (*async_destroy)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_q_ops_attr *q_ops_attr,
+		 struct rte_flow *flow,
+		 void *user_data,
+		 struct rte_flow_error *err);
+	/** See rte_flow_push() */
+	int (*push)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_error *err);
+	/** See rte_flow_pull() */
+	int (*pull)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 struct rte_flow_q_op_res res[],
+		 uint16_t n_res,
+		 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
 	rte_flow_actions_template_destroy;
 	rte_flow_template_table_create;
 	rte_flow_template_table_destroy;
+	rte_flow_async_create;
+	rte_flow_async_destroy;
+	rte_flow_push;
+	rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2
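The postpone/push batching contract documented in the patch (operations
marked postponed accumulate in SW and reach the HW only on an explicit
push) can be sketched as a self-contained toy model. This is NOT the DPDK
implementation: `flowq`, `flowq_enqueue()` and `flowq_push()` are
illustrative stand-ins for the flow queue, an enqueue with the postpone
attribute, and rte_flow_push():

```c
/* Toy model of postpone/push batching: postponed operations are staged in
 * software and handed to the "HW" in one batch on flowq_push(); operations
 * without the postpone flag are sent right away.  Illustrative only. */
#include <stdint.h>

#define FLOWQ_SIZE 16

struct flowq {
	uint32_t staged;	/* postponed ops not yet sent to HW */
	uint32_t sent_to_hw;	/* ops the "HW" has received */
};

/* Enqueue one operation; with postpone set it is only staged.
 * Returns 0 on success, -1 on queue overflow. */
static int
flowq_enqueue(struct flowq *q, int postpone)
{
	if (q->staged == FLOWQ_SIZE)
		return -1;		/* results were pulled/pushed too late */
	if (postpone)
		q->staged++;		/* batched: saves a SW/HW interaction */
	else
		q->sent_to_hw++;	/* sent to HW immediately */
	return 0;
}

/* Flush everything staged so far in one batch, like rte_flow_push(). */
static void
flowq_push(struct flowq *q)
{
	q->sent_to_hw += q->staged;
	q->staged = 0;
}
```

The design trade-off matches the documentation above: staging postponed
operations trades per-rule insertion latency for fewer SW/HW interactions,
i.e. throughput over latency.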