From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Subject: [PATCH v2 03/10] ethdev: bring in async queue-based flow rules operations
Date: Tue, 18 Jan 2022 17:30:20 +0200
Message-ID: <20220118153027.3947448-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220118153027.3947448-1-akozyrev@nvidia.com>
References: <20211006044835.3936226-1-akozyrev@nvidia.com>
 <20220118153027.3947448-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions
A new, faster, queue-based flow rules management mechanism is needed
for applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of flow rule creation/destruction
on the datapath.

Note that queues are not thread-safe and a queue should be accessed
from the same thread for all queue operations. It is the responsibility
of the application to synchronize access in case of multi-threaded use
of the same queue.

The rte_flow_q_flow_create() function enqueues a flow creation
on the requested queue. It benefits from already configured resources
and sets unique values on top of item and action templates. A flow rule
is enqueued on the specified flow queue and offloaded asynchronously
to the hardware. The function returns immediately to free the CPU
for further packet processing. The application must invoke the
rte_flow_q_dequeue() function to complete the flow rule operation
offloading, to clear the queue, and to receive the operation status.
The rte_flow_q_flow_destroy() function enqueues a flow destruction
on the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 doc/guides/prog_guide/img/rte_flow_q_init.svg |  71 ++++
 .../prog_guide/img/rte_flow_q_usage.svg       |  60 +++
 doc/guides/prog_guide/rte_flow.rst            | 158 ++++++++
 doc/guides/rel_notes/release_22_03.rst        |   9 +
 lib/ethdev/rte_flow.c                         | 173 ++++++++-
 lib/ethdev/rte_flow.h                         | 348 ++++++++++++++++++
 lib/ethdev/rte_flow_driver.h                  |  61 +++
 lib/ethdev/version.map                        |   7 +
 8 files changed, 886 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_q_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_q_init.svg b/doc/guides/prog_guide/img/rte_flow_q_init.svg
new file mode 100644
index 0000000000..994e85521c
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_init.svg
@@ -0,0 +1,71 @@
+[SVG markup omitted. The figure shows the initialization sequence:
+rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure() ->
+rte_flow_item_template_create() -> rte_flow_action_template_create() ->
+rte_flow_table_create() -> rte_eth_dev_start().]

diff --git a/doc/guides/prog_guide/img/rte_flow_q_usage.svg b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
new file mode 100644
index 0000000000..14447ef8eb
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_q_usage.svg
@@ -0,0 +1,60 @@
+[SVG markup omitted. The figure shows the datapath main loop:
+rte_eth_rx_burst() -> analyze packet -> "add new rule?" ->
+rte_flow_q_flow_create(); "destroy the rule?" -> rte_flow_q_flow_destroy();
+then rte_flow_q_drain() and rte_flow_q_dequeue() before checking
+"more packets?".]

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aa9d4e9573..b004811a20 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3607,18 +3607,22 @@ Hints about the expected number of counters or meters in an application,
 for example, allow PMD to prepare and optimize NIC memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rules operations via
+the queue-based API, see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Arguments:
 
 - ``port_id``: port identifier of Ethernet device.
 - ``port_attr``: port attributes for flow management library.
+- ``queue_attr``: queue attributes for asynchronous operations.
 - ``error``: perform verbose error reporting if not NULL.
   PMDs initialize this structure in case of error only.
 
@@ -3750,6 +3754,160 @@ and item and action templates are created.
        *at, nb_action_templates,
        *error);
 
+Asynchronous operations
+-----------------------
+
+Flow rule creation/destruction can be done by using lockless flow queues.
+An application configures the number of queues during the initialization
+stage. Then create/destroy operations are enqueued asynchronously without
+any locks. By adopting an asynchronous queue-based approach, packet
+processing can continue with handling the next packets while insertion or
+destruction of a flow rule is processed inside the hardware. The application
+is expected to poll for results later to see whether the flow rule was
+successfully inserted or destroyed. User data is returned as part of the
+result to identify the enqueued operation.
+Polling must be done periodically, before the queue overflows.
+Operations can be reordered inside a queue, so the result of a rule creation
+must be polled before the destroy operation for that rule is enqueued.
+A flow handle is valid once the create operation is enqueued and must be
+destroyed even if the operation is not successful and the rule is not
+inserted.
+
+The asynchronous flow rule insertion logic can be broken into two phases,
+illustrated by the figures and the sketch below.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_q_init:

+.. figure:: img/rte_flow_q_init.*
+
+2. Main loop as presented on a datapath application example:
+
+.. _figure_rte_flow_q_usage:
+
+.. figure:: img/rte_flow_q_usage.*
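+
+For illustration, a minimal sketch of the initialization stage (1) might
+look as follows; the queue count and size are arbitrary example values,
+and error handling is omitted for brevity:
+
+.. code-block:: c
+
+    struct rte_flow_error error;
+    struct rte_flow_port_attr port_attr = {
+        .version = 0,
+        .nb_queues = 2, /* number of flow queues to create */
+    };
+    struct rte_flow_queue_attr queue_attr = {
+        .version = 0,
+        .size = 64, /* outstanding operations a queue can hold */
+    };
+    const struct rte_flow_queue_attr *queue_attrs[] =
+        {&queue_attr, &queue_attr};
+
+    /* After rte_eth_dev_configure(), before templates/tables creation. */
+    rte_flow_configure(port_id, &port_attr, queue_attrs, &error);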
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_q_flow_create(uint16_t port_id,
+                          uint32_t queue_id,
+                          const struct rte_flow_q_ops_attr *q_ops_attr,
+                          struct rte_flow_table *table,
+                          const struct rte_flow_item items[],
+                          uint8_t item_template_index,
+                          const struct rte_flow_action actions[],
+                          uint8_t action_template_index,
+                          struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_q_flow_destroy()`` even if the rule is rejected by HW.
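+
+As an illustrative sketch (``table``, ``items`` and ``actions`` are assumed
+to be prepared at initialization, ``rule_ctx`` is a hypothetical application
+context, and template index 0 is an arbitrary example):
+
+.. code-block:: c
+
+    struct rte_flow_q_ops_attr ops_attr = {
+        .version = 0,
+        .user_data = rule_ctx, /* returned later by rte_flow_q_dequeue() */
+        .drain = 1,            /* push this operation to HW immediately */
+    };
+    struct rte_flow *flow;
+
+    flow = rte_flow_q_flow_create(port_id, queue_id, &ops_attr, table,
+                                  items, 0, actions, 0, &error);
+    if (flow == NULL)
+        handle_enqueue_failure(); /* hypothetical; rte_errno is set */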
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_flow_destroy(uint16_t port_id,
+                           uint32_t queue_id,
+                           const struct rte_flow_q_ops_attr *q_ops_attr,
+                           struct rte_flow *flow,
+                           struct rte_flow_error *error);
+
+Drain a queue
+~~~~~~~~~~~~~
+
+Function to drain the queue and push all internally stored rules to the NIC.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_drain(uint16_t port_id,
+                    uint32_t queue_id,
+                    struct rte_flow_error *error);
+
+There is a drain attribute in the queue operation attributes.
+When set, the requested operation must be sent to the HW without any delay.
+When cleared, multiple operations can be bulked together and not sent to HW
+right away, saving SW/HW interactions and prioritizing throughput over
+latency. In the latter case, the application must invoke this function to
+actually push all outstanding operations to HW.
+
+Dequeue operations
+~~~~~~~~~~~~~~~~~~
+
+Dequeue rte flow operations.
+
+The application must invoke this function in order to complete the
+asynchronous flow rule operations and to receive the flow rule operation
+status.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_dequeue(uint16_t port_id,
+                      uint32_t queue_id,
+                      struct rte_flow_q_op_res res[],
+                      uint16_t n_res,
+                      struct rte_flow_error *error);
+
+Multiple outstanding operations can be dequeued simultaneously.
+User data may be provided during flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
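+
+A sketch of draining and completing batched operations is shown below;
+``BURST`` and the ``mark_completed()`` helper are hypothetical
+application-side names, while ``port_id``, ``queue_id`` and ``error``
+are as in the previous examples:
+
+.. code-block:: c
+
+    #define BURST 32
+
+    struct rte_flow_q_op_res res[BURST];
+    int n, i;
+
+    /* Push all operations enqueued with the drain bit cleared. */
+    rte_flow_q_drain(port_id, queue_id, &error);
+
+    /* Poll for up to BURST completions. */
+    n = rte_flow_q_dequeue(port_id, queue_id, res, BURST, &error);
+    for (i = 0; i < n; i++)
+        if (res[i].status == RTE_FLOW_Q_OP_SUCCESS)
+            mark_completed(res[i].user_data);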
+
+Enqueue indirect action creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of the indirect action creation API.
+
+.. code-block:: c
+
+   struct rte_flow_action_handle *
+   rte_flow_q_action_handle_create(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   const struct rte_flow_indir_action_conf *indir_action_conf,
+                   const struct rte_flow_action *action,
+                   struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later by
+calling ``rte_flow_q_action_handle_destroy()`` even if the action is rejected.
+
+Enqueue indirect action destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of the indirect action destruction API.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_action_handle_destroy(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   struct rte_flow_action_handle *action_handle,
+                   struct rte_flow_error *error);
+
+Enqueue indirect action update operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Asynchronous version of the indirect action update API.
+
+.. code-block:: c
+
+   int
+   rte_flow_q_action_handle_update(uint16_t port_id,
+                   uint32_t queue_id,
+                   const struct rte_flow_q_ops_attr *q_ops_attr,
+                   struct rte_flow_action_handle *action_handle,
+                   const void *update,
+                   struct rte_flow_error *error);
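+
+For illustration, an asynchronous shared-counter creation might look like
+the sketch below; ``RTE_FLOW_ACTION_TYPE_COUNT`` and the ``ingress`` flag
+come from the existing synchronous indirect action API, and ``ops_attr``
+is reused from the creation example above:
+
+.. code-block:: c
+
+    const struct rte_flow_indir_action_conf conf = { .ingress = 1 };
+    const struct rte_flow_action action = {
+        .type = RTE_FLOW_ACTION_TYPE_COUNT,
+    };
+    struct rte_flow_action_handle *handle;
+
+    handle = rte_flow_q_action_handle_create(port_id, queue_id, &ops_attr,
+                                             &conf, &action, &error);
+    /* Poll rte_flow_q_dequeue() for completion before using the handle. */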
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index af56f54bc4..7ccac912a3 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -66,6 +66,15 @@ New Features
   ``rte_flow_table_destroy``, ``rte_flow_item_template_destroy``
   and ``rte_flow_action_template_destroy`` respectively.
 
+* ethdev: Added ``rte_flow_q_flow_create`` and ``rte_flow_q_flow_destroy`` API
+  to enqueue flow creation/destruction operations asynchronously, as well as
+  ``rte_flow_q_dequeue`` to poll results of these operations and
+  ``rte_flow_q_drain`` to drain the flow queue and pass all operations to NIC.
+  Introduced asynchronous API for indirect actions management as well:
+  ``rte_flow_q_action_handle_create``, ``rte_flow_q_action_handle_destroy``
+  and ``rte_flow_q_action_handle_update``.
+
 
 Removed Items
 -------------
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 20613f6bed..6da899c5df 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1395,6 +1395,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_configure(uint16_t port_id,
                    const struct rte_flow_port_attr *port_attr,
+                   const struct rte_flow_queue_attr *queue_attr[],
                    struct rte_flow_error *error)
 {
         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1404,7 +1405,8 @@ rte_flow_configure(uint16_t port_id,
                 return -rte_errno;
         if (likely(!!ops->configure)) {
                 return flow_err(port_id,
-                                ops->configure(dev, port_attr, error),
+                                ops->configure(dev, port_attr,
+                                               queue_attr, error),
                                 error);
         }
         return rte_flow_error_set(error, ENOTSUP,
@@ -1552,3 +1554,172 @@ rte_flow_table_destroy(uint16_t port_id,
                                   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                                   NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+                       uint32_t queue_id,
+                       const struct rte_flow_q_ops_attr *q_ops_attr,
+                       struct rte_flow_table *table,
+                       const struct rte_flow_item items[],
+                       uint8_t item_template_index,
+                       const struct rte_flow_action actions[],
+                       uint8_t action_template_index,
+                       struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        struct rte_flow *flow;
+
+        if (unlikely(!ops))
+                return NULL;
+        if (likely(!!ops->q_flow_create)) {
+                flow = ops->q_flow_create(dev, queue_id, q_ops_attr, table,
+                                          items, item_template_index,
+                                          actions, action_template_index,
+                                          error);
+                if (flow == NULL)
+                        flow_err(port_id, -rte_errno, error);
+                return flow;
+        }
+        rte_flow_error_set(error, ENOTSUP,
+                           RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                           NULL, rte_strerror(ENOTSUP));
+        return NULL;
+}
+
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+                        uint32_t queue_id,
+                        const struct rte_flow_q_ops_attr *q_ops_attr,
+                        struct rte_flow *flow,
+                        struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->q_flow_destroy)) {
+                return flow_err(port_id,
+                                ops->q_flow_destroy(dev, queue_id,
+                                                    q_ops_attr, flow, error),
+                                error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
+
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                const struct rte_flow_indir_action_conf *indir_action_conf,
+                const struct rte_flow_action *action,
+                struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        struct rte_flow_action_handle *handle;
+
+        if (unlikely(!ops))
+                return NULL;
+        if (unlikely(!ops->q_action_handle_create)) {
+                rte_flow_error_set(error, ENOSYS,
+                                   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+                                   rte_strerror(ENOSYS));
+                return NULL;
+        }
+        handle = ops->q_action_handle_create(dev, queue_id, q_ops_attr,
+                                             indir_action_conf, action,
+                                             error);
+        if (handle == NULL)
+                flow_err(port_id, -rte_errno, error);
+        return handle;
+}
+
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                struct rte_flow_action_handle *action_handle,
+                struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        int ret;
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (unlikely(!ops->q_action_handle_destroy))
+                return rte_flow_error_set(error, ENOSYS,
+                                          RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                          NULL, rte_strerror(ENOSYS));
+        ret = ops->q_action_handle_destroy(dev, queue_id, q_ops_attr,
+                                           action_handle, error);
+        return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                struct rte_flow_action_handle *action_handle,
+                const void *update,
+                struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        int ret;
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (unlikely(!ops->q_action_handle_update))
+                return rte_flow_error_set(error, ENOSYS,
+                                          RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                          NULL, rte_strerror(ENOSYS));
+        ret = ops->q_action_handle_update(dev, queue_id, q_ops_attr,
+                                          action_handle, update, error);
+        return flow_err(port_id, ret, error);
+}
+
+int
+rte_flow_q_drain(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->q_drain)) {
+                return flow_err(port_id,
+                                ops->q_drain(dev, queue_id, error),
+                                error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_q_dequeue(uint16_t port_id,
+                   uint32_t queue_id,
+                   struct rte_flow_q_op_res res[],
+                   uint16_t n_res,
+                   struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        int ret;
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->q_dequeue)) {
+                ret = ops->q_dequeue(dev, queue_id, res, n_res, error);
+                return ret ? ret : flow_err(port_id, ret, error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 2e54e9d0e3..07193090f2 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4865,6 +4865,13 @@ struct rte_flow_port_attr {
          * Version of the struct layout, should be 0.
          */
         uint32_t version;
+        /**
+         * Number of flow queues to be configured.
+         * Flow queues are used for asynchronous flow rule operations.
+         * The order of operations is not guaranteed inside a queue.
+         * Flow queues are not thread-safe.
+         */
+        uint16_t nb_queues;
         /**
          * Number of counter actions pre-configured.
          * If set to 0, PMD will allocate counters dynamically.
@@ -4885,6 +4892,21 @@ struct rte_flow_port_attr {
         uint32_t nb_meters;
 };
 
+/**
+ * Flow engine queue configuration.
+ */
+__extension__
+struct rte_flow_queue_attr {
+        /**
+         * Version of the struct layout, should be 0.
+         */
+        uint32_t version;
+        /**
+         * Number of flow rule operations a queue can hold.
+         */
+        uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4903,6 +4925,9 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   Number of elements is set in @p port_attr.nb_queues.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4914,6 +4939,7 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
                    const struct rte_flow_port_attr *port_attr,
+                   const struct rte_flow_queue_attr *queue_attr[],
                    struct rte_flow_error *error);
 
 /**
@@ -5185,6 +5211,328 @@ rte_flow_table_destroy(uint16_t port_id,
                        struct rte_flow_table *table,
                        struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+struct rte_flow_q_ops_attr {
+        /**
+         * Version of the struct layout, should be 0.
+         */
+        uint32_t version;
+        /**
+         * The user data that will be returned on the completion events.
+         */
+        void *user_data;
+        /**
+         * When set, the requested operation must be sent to the HW without
+         * any delay. Any prior requests must be also sent to the HW.
+         * If this bit is cleared, the application must call the
+         * rte_flow_q_drain() API to actually send the request to the HW.
+         */
+        uint32_t drain:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] table
+ *   Table to select templates from.
+ * @param[in] items
+ *   List of pattern items to be used.
+ *   The list order should match the order in the item template.
+ *   The spec is the only relevant member of the item that is being used.
+ * @param[in] item_template_index
+ *   Item template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the action template.
+ * @param[in] action_template_index
+ *   Action template index in the table.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule has been offloaded.
+ *   Only the completion result indicates that the rule was offloaded.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_q_flow_create(uint16_t port_id,
+                       uint32_t queue_id,
+                       const struct rte_flow_q_ops_attr *q_ops_attr,
+                       struct rte_flow_table *table,
+                       const struct rte_flow_item items[],
+                       uint8_t item_template_index,
+                       const struct rte_flow_action actions[],
+                       uint8_t action_template_index,
+                       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * The application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_flow_destroy(uint16_t port_id,
+                        uint32_t queue_id,
+                        const struct rte_flow_q_ops_attr *q_ops_attr,
+                        struct rte_flow *flow,
+                        struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action creation operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to create the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] indir_action_conf
+ *   Action configuration for the indirect action object creation.
+ * @param[in] action
+ *   Specific configuration of the indirect action object.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   A valid handle in case of success, NULL otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ */
+__rte_experimental
+struct rte_flow_action_handle *
+rte_flow_q_action_handle_create(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                const struct rte_flow_indir_action_conf *indir_action_conf,
+                const struct rte_flow_action *action,
+                struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action destruction operation.
+ * The destroy queue must be the same
+ * as the queue on which the action was created.
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to destroy the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be destroyed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_destroy(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                struct rte_flow_action_handle *action_handle,
+                struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue indirect action update operation.
+ * @see rte_flow_action_handle_create
+ *
+ * @param[in] port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] queue_id
+ *   Flow queue which is used to update the action.
+ * @param[in] q_ops_attr
+ *   Queue operation attributes.
+ * @param[in] action_handle
+ *   Handle for the indirect action object to be updated.
+ * @param[in] update
+ *   Update profile specification used to modify the action pointed by
+ *   *action_handle*. *update* can have the same type as the immediate
+ *   action used to create the handle, or it can be a wrapper structure
+ *   that includes the action configuration to be updated and bit fields
+ *   indicating which fields inside the action to update.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   - (0) if success.
+ *   - (-ENODEV) if *port_id* invalid.
+ *   - (-ENOSYS) if underlying device does not support this functionality.
+ *   - (-EIO) if underlying device is removed.
+ *   - (-ENOENT) if action pointed by *action* handle was not found.
+ *   - (-EBUSY) if action pointed by *action* handle still used by some rules.
+ *   rte_errno is also set.
+ */
+__rte_experimental
+int
+rte_flow_q_action_handle_update(uint16_t port_id,
+                uint32_t queue_id,
+                const struct rte_flow_q_ops_attr *q_ops_attr,
+                struct rte_flow_action_handle *action_handle,
+                const void *update,
+                struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Drain the queue and push all internally stored rules to the HW.
+ * Non-drained rules are rules that were inserted with the drain flag
+ * cleared. Can be used to notify the HW about a batch of rules prepared
+ * by the SW to reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be drained.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_drain(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue operation status.
+ */
+enum rte_flow_q_op_status {
+        /**
+         * The operation was completed successfully.
+         */
+        RTE_FLOW_Q_OP_SUCCESS,
+        /**
+         * The operation was not completed successfully.
+         */
+        RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeued operation result.
+ */
+__extension__
+struct rte_flow_q_op_res {
+        /**
+         * Version of the struct layout, should be 0.
+         */
+        uint32_t version;
+        /**
+         * The status of the operation that this completion signals.
+         */
+        enum rte_flow_q_op_status status;
+        /**
+         * The user data that was supplied with the enqueued operation.
+         */
+        void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Dequeue rte flow operations.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to receive the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to dequeue the operations.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were dequeued,
+ *   a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_q_dequeue(uint16_t port_id,
+                   uint32_t queue_id,
+                   struct rte_flow_q_op_res res[],
+                   uint16_t n_res,
+                   struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index cda021c302..d1cfdd2d75 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,6 +156,7 @@ struct rte_flow_ops {
         int (*configure)
                 (struct rte_eth_dev *dev,
                  const struct rte_flow_port_attr *port_attr,
+                 const struct rte_flow_queue_attr *queue_attr[],
                  struct rte_flow_error *err);
         /** See rte_flow_item_template_create() */
         struct rte_flow_item_template *(*item_template_create)
@@ -194,6 +195,66 @@ struct rte_flow_ops {
                 (struct rte_eth_dev *dev,
                  struct rte_flow_table *table,
                  struct rte_flow_error *err);
+        /** See rte_flow_q_flow_create() */
+        struct rte_flow *(*q_flow_create)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow_table *table,
+                 const struct rte_flow_item items[],
+                 uint8_t item_template_index,
+                 const struct rte_flow_action actions[],
+                 uint8_t action_template_index,
+                 struct rte_flow_error *err);
+        /** See rte_flow_q_flow_destroy() */
+        int (*q_flow_destroy)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow *flow,
+                 struct rte_flow_error *err);
+        /** See rte_flow_q_flow_update() */
+        int (*q_flow_update)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow *flow,
+                 struct rte_flow_error *err);
+        /** See rte_flow_q_action_handle_create() */
+        struct rte_flow_action_handle *(*q_action_handle_create)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 const struct rte_flow_indir_action_conf *indir_action_conf,
+                 const struct rte_flow_action *action,
+                 struct rte_flow_error *err);
+        /** See rte_flow_q_action_handle_destroy() */
+        int (*q_action_handle_destroy)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow_action_handle *action_handle,
+                 struct rte_flow_error *error);
+        /** See rte_flow_q_action_handle_update() */
+        int (*q_action_handle_update)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow_action_handle *action_handle,
+                 const void *update,
+                 struct rte_flow_error *error);
+        /** See rte_flow_q_drain() */
+        int (*q_drain)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 struct rte_flow_error *err);
+        /** See rte_flow_q_dequeue() */
+        int (*q_dequeue)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 struct rte_flow_q_op_res res[],
+                 uint16_t n_res,
+                 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index cfd5e2a3e4..d705e36c90 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -265,6 +265,13 @@ EXPERIMENTAL {
         rte_flow_action_template_destroy;
         rte_flow_table_create;
         rte_flow_table_destroy;
+        rte_flow_q_flow_create;
+        rte_flow_q_flow_destroy;
+        rte_flow_q_action_handle_create;
+        rte_flow_q_action_handle_destroy;
+        rte_flow_q_action_handle_update;
+        rte_flow_q_drain;
+        rte_flow_q_dequeue;
 };
 
 INTERNAL {
-- 2.18.2