From mboxrd@z Thu Jan  1 00:00:00 1970
From: Alexander Kozyrev <akozyrev@nvidia.com>
To: dev@dpdk.org
Subject: [PATCH v8 03/11] ethdev: bring in async queue-based flow rules operations
Date: Sun, 20 Feb 2022 05:44:01 +0200
Message-ID: <20220220034409.2226860-4-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20220220034409.2226860-1-akozyrev@nvidia.com>
References: <20220219041144.2145380-1-akozyrev@nvidia.com>
 <20220220034409.2226860-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

A new, faster, queue-based flow rules management mechanism is needed
for applications offloading rules inside the datapath. This asynchronous
and lockless mechanism frees the CPU for further packet processing and
reduces the performance impact of flow rule creation/destruction
on the datapath.

Note that queues are not thread-safe and a queue should be accessed
from the same thread for all queue operations. It is the responsibility
of the application to synchronize the queue functions in case of
multi-threaded access to the same queue.

The rte_flow_async_create() function enqueues a flow creation operation
to the requested queue. It benefits from already configured resources and
sets unique values on top of item and action templates. A flow rule is
enqueued on the specified flow queue and offloaded asynchronously to the
hardware. The function returns immediately to spare the CPU for further
packet processing. The application must invoke the rte_flow_pull()
function to complete the flow rule operation offloading, to clear the
queue, and to receive the operation status. The rte_flow_async_destroy()
function enqueues a flow destruction operation to the requested queue.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam
---
 .../prog_guide/img/rte_flow_async_init.svg  | 205 ++++++++++
 .../prog_guide/img/rte_flow_async_usage.svg | 354 ++++++++++++++++++
 doc/guides/prog_guide/rte_flow.rst          | 124 ++++++
 doc/guides/rel_notes/release_22_03.rst      |   7 +
 lib/ethdev/rte_flow.c                       | 110 +++++-
 lib/ethdev/rte_flow.h                       | 251 +++++++++++++
 lib/ethdev/rte_flow_driver.h                |  35 ++
 lib/ethdev/version.map                      |   4 +
 8 files changed, 1087 insertions(+), 3 deletions(-)
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_init.svg
 create mode 100644 doc/guides/prog_guide/img/rte_flow_async_usage.svg

diff --git a/doc/guides/prog_guide/img/rte_flow_async_init.svg b/doc/guides/prog_guide/img/rte_flow_async_init.svg
new file mode 100644
index 0000000000..f66e9c73d7
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_init.svg
@@ -0,0 +1,205 @@
[SVG markup omitted. The figure shows the initialization sequence:
rte_eal_init() -> rte_eth_dev_configure() -> rte_flow_configure() ->
rte_flow_pattern_template_create() / rte_flow_actions_template_create() ->
rte_flow_template_table_create() -> rte_eth_dev_start().]
diff --git a/doc/guides/prog_guide/img/rte_flow_async_usage.svg b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
new file mode 100644
index 0000000000..bb978bca1e
--- /dev/null
+++ b/doc/guides/prog_guide/img/rte_flow_async_usage.svg
@@ -0,0 +1,354 @@
[SVG markup omitted. The figure shows the datapath main loop:
rte_eth_rx_burst() -> analyze packet -> "add new rule?" (yes ->
rte_flow_async_create()) / "destroy the rule?" (yes ->
rte_flow_async_destroy()) -> rte_flow_push() -> "more packets?" ->
rte_flow_pull() -> back to rte_eth_rx_burst().]
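For orientation, the initialization order in the first figure maps to the
following C skeleton (illustrative only: error checks are dropped, the
attribute structures and template item/action arrays are assumed to be
prepared elsewhere, and the template/table calls follow the template API
introduced earlier in this series):

    rte_eal_init(argc, argv);
    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &dev_conf);
    /* Pre-allocate flow resources and create one asynchronous flow queue. */
    rte_flow_configure(port_id, &port_attr, 1, queue_attr_list, &error);
    pattern_tmpl = rte_flow_pattern_template_create(port_id, &pt_attr,
                                                    pattern, &error);
    actions_tmpl = rte_flow_actions_template_create(port_id, &at_attr,
                                                    actions, masks, &error);
    table = rte_flow_template_table_create(port_id, &tbl_attr,
                                           &pattern_tmpl, 1,
                                           &actions_tmpl, 1, &error);
    rte_eth_dev_start(port_id);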
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 6cdfea09be..436845717f 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3624,12 +3624,16 @@ Expected number of resources in an application allows PMD to prepare
 and optimize NIC hardware configuration and memory layout in advance.
 ``rte_flow_configure()`` must be called before any flow rule is created,
 but after an Ethernet device is configured.
+It also creates flow queues for asynchronous flow rule operations via
+the queue-based API; see the `Asynchronous operations`_ section.
 
 .. code-block:: c
 
    int
    rte_flow_configure(uint16_t port_id,
                       const struct rte_flow_port_attr *port_attr,
+                      uint16_t nb_queue,
+                      const struct rte_flow_queue_attr *queue_attr[],
                       struct rte_flow_error *error);
 
 Information about the number of available resources can be retrieved via
@@ -3640,6 +3644,7 @@ Information about the number of available resources can be retrieved via
 
    int
    rte_flow_info_get(uint16_t port_id,
                      struct rte_flow_port_info *port_info,
+                     struct rte_flow_queue_info *queue_info,
                      struct rte_flow_error *error);
 
 Flow templates
@@ -3777,6 +3782,125 @@ and pattern and actions templates are created.
                                        &actions_templates, nb_actions_templ,
                                        &error);
 
+Asynchronous operations
+-----------------------
+
+Flow rules management can be done via special lockless flow management queues.
+
+- Queue operations are asynchronous and not thread-safe.
+
+- Operations can thus be invoked by the application's datapath;
+  packet processing can continue while queue operations are processed by the NIC.
+
+- The number of flow queues is configured at the initialization stage.
+
+- Available operation types: rule creation, rule destruction,
+  indirect rule creation, indirect rule destruction, indirect rule update.
+
+- Operations may be reordered within a queue.
+
+- Operations can be postponed and pushed to the NIC in batches.
+
+- Results must be pulled on time to avoid queue overflows.
+
+- User data is returned as part of the result to identify an operation.
+
+- A flow handle is valid once the creation operation is enqueued and must be
+  destroyed even if the operation is not successful and the rule is not inserted.
+
+- The application must wait for the creation operation result before enqueueing
+  the deletion operation to make sure the creation is processed by the NIC.
+
+The asynchronous flow rule insertion logic can be broken into two phases.
+
+1. Initialization stage as shown here:
+
+.. _figure_rte_flow_async_init:
+
+.. figure:: img/rte_flow_async_init.*
+
+2. Main loop as presented in a datapath application example:
+
+.. _figure_rte_flow_async_usage:
+
+.. figure:: img/rte_flow_async_usage.*
+
+Enqueue creation operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule creation operation is similar to simple creation.
+
+.. code-block:: c
+
+   struct rte_flow *
+   rte_flow_async_create(uint16_t port_id,
+                         uint32_t queue_id,
+                         const struct rte_flow_q_ops_attr *q_ops_attr,
+                         struct rte_flow_template_table *template_table,
+                         const struct rte_flow_item pattern[],
+                         uint8_t pattern_template_index,
+                         const struct rte_flow_action actions[],
+                         uint8_t actions_template_index,
+                         void *user_data,
+                         struct rte_flow_error *error);
+
+A valid handle is returned in case of success. It must be destroyed later
+by calling ``rte_flow_async_destroy()`` even if the rule is rejected by HW.
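For illustration, enqueueing one rule could look like the following sketch
(assumptions: port 0, queue 0, a "table" created at initialization from
templates that match on the IPv4 destination address and steer to an Rx
queue; "my_rule_ctx" stands in for application context; error handling
trimmed):

    /* Per-rule values; only the spec member of each item is used here. */
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr.dst_addr = RTE_BE32(RTE_IPV4(10, 0, 0, 1)),
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue_conf = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue_conf },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_q_ops_attr ops_attr = { .postpone = 0 }; /* send now */
    struct rte_flow_error error;
    struct rte_flow *handle;

    handle = rte_flow_async_create(0, 0, &ops_attr, table,
                                   pattern, 0, actions, 0,
                                   &my_rule_ctx, &error);
    /* The handle may already be passed to rte_flow_async_destroy(),
     * but success is only known once the completion is pulled. */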
+
+Enqueue destruction operation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Enqueueing a flow rule destruction operation is similar to simple destruction.
+
+.. code-block:: c
+
+   int
+   rte_flow_async_destroy(uint16_t port_id,
+                          uint32_t queue_id,
+                          const struct rte_flow_q_ops_attr *q_ops_attr,
+                          struct rte_flow *flow,
+                          void *user_data,
+                          struct rte_flow_error *error);
+
+Push enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pushing all internally stored rules from a queue to the NIC.
+
+.. code-block:: c
+
+   int
+   rte_flow_push(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_error *error);
+
+There is a postpone attribute in the queue operation attributes.
+When it is set, multiple operations can be bulked together and not sent to HW
+right away to save SW/HW interactions and prioritize throughput over latency.
+In this case, the application must invoke this function to actually push all
+outstanding operations to HW.
+
+Pull enqueued operations
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pulling asynchronous operation results.
+
+The application must invoke this function in order to complete asynchronous
+flow rule operations and to receive flow rule operation statuses.
+
+.. code-block:: c
+
+   int
+   rte_flow_pull(uint16_t port_id,
+                 uint32_t queue_id,
+                 struct rte_flow_q_op_res res[],
+                 uint16_t n_res,
+                 struct rte_flow_error *error);
+
+Multiple outstanding operation results can be pulled simultaneously.
+User data may be provided during flow creation/destruction in order
+to distinguish between multiple operations. User data is returned as part
+of the result to provide a method to detect which operation is completed.
+
 .. _flow_isolated_mode:
 
 Flow isolated mode
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 7150d06c87..cd495ef40c 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -113,6 +113,13 @@ New Features
   ``rte_flow_template_table_destroy``, ``rte_flow_pattern_template_destroy``
   and ``rte_flow_actions_template_destroy``.
 
+* **Added functions for asynchronous flow rules creation/destruction.**
+
+  * ethdev: Added ``rte_flow_async_create`` and ``rte_flow_async_destroy`` API
+    to enqueue flow creation/destruction operations asynchronously, as well as
+    ``rte_flow_pull`` to poll and retrieve results of these operations and
+    ``rte_flow_push`` to push all the in-flight operations to the NIC.
+
 * **Updated AF_XDP PMD**
 
   * Added support for libxdp >=v1.2.2.
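A minimal completion loop tying rte_flow_push() and rte_flow_pull()
together might look like this (a sketch: a single queue, a 32-entry
result burst, and a hypothetical process_op_result() callback standing
in for application logic):

    struct rte_flow_q_op_res res[32];
    struct rte_flow_error error;
    int n, i;

    /* Flush operations that were enqueued with the postpone bit set. */
    rte_flow_push(port_id, queue_id, &error);
    /* Retrieve completions; the return value is the number of results. */
    n = rte_flow_pull(port_id, queue_id, res, RTE_DIM(res), &error);
    for (i = 0; i < n; i++)
        process_op_result(res[i].status == RTE_FLOW_Q_OP_SUCCESS,
                          res[i].user_data);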
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index e9f684eedb..4e7b202522 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -1396,6 +1396,7 @@ rte_flow_flex_item_release(uint16_t port_id,
 int
 rte_flow_info_get(uint16_t port_id,
                   struct rte_flow_port_info *port_info,
+                  struct rte_flow_queue_info *queue_info,
                   struct rte_flow_error *error)
 {
         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1415,7 +1416,7 @@ rte_flow_info_get(uint16_t port_id,
                 return -rte_errno;
         if (likely(!!ops->info_get)) {
                 return flow_err(port_id,
-                                ops->info_get(dev, port_info, error),
+                                ops->info_get(dev, port_info, queue_info, error),
                                 error);
         }
         return rte_flow_error_set(error, ENOTSUP,
@@ -1426,6 +1427,8 @@
 int
 rte_flow_configure(uint16_t port_id,
                    const struct rte_flow_port_attr *port_attr,
+                   uint16_t nb_queue,
+                   const struct rte_flow_queue_attr *queue_attr[],
                    struct rte_flow_error *error)
 {
         struct rte_eth_dev *dev = &rte_eth_devices[port_id];
@@ -1433,7 +1436,7 @@ rte_flow_configure(uint16_t port_id,
         int ret;
 
         dev->data->flow_configured = 0;
-        if (port_attr == NULL) {
+        if (port_attr == NULL || queue_attr == NULL) {
                 RTE_FLOW_LOG(ERR, "Port %"PRIu16" info is NULL.\n", port_id);
                 return -EINVAL;
         }
@@ -1452,7 +1455,7 @@ rte_flow_configure(uint16_t port_id,
         if (unlikely(!ops))
                 return -rte_errno;
         if (likely(!!ops->configure)) {
-                ret = ops->configure(dev, port_attr, error);
+                ret = ops->configure(dev, port_attr, nb_queue, queue_attr, error);
                 if (ret == 0)
                         dev->data->flow_configured = 1;
                 return flow_err(port_id, ret, error);
@@ -1713,3 +1716,104 @@ rte_flow_template_table_destroy(uint16_t port_id,
                                           RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
                                           NULL, rte_strerror(ENOTSUP));
 }
+
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+                      uint32_t queue_id,
+                      const struct rte_flow_q_ops_attr *q_ops_attr,
+                      struct rte_flow_template_table *template_table,
+                      const struct rte_flow_item pattern[],
+                      uint8_t pattern_template_index,
+                      const struct rte_flow_action actions[],
+                      uint8_t actions_template_index,
+                      void *user_data,
+                      struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        struct rte_flow *flow;
+
+        if (unlikely(!ops))
+                return NULL;
+        if (likely(!!ops->async_create)) {
+                flow = ops->async_create(dev, queue_id,
+                                         q_ops_attr, template_table,
+                                         pattern, pattern_template_index,
+                                         actions, actions_template_index,
+                                         user_data, error);
+                if (flow == NULL)
+                        flow_err(port_id, -rte_errno, error);
+                return flow;
+        }
+        rte_flow_error_set(error, ENOTSUP,
+                           RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                           NULL, rte_strerror(ENOTSUP));
+        return NULL;
+}
+
+int
+rte_flow_async_destroy(uint16_t port_id,
+                       uint32_t queue_id,
+                       const struct rte_flow_q_ops_attr *q_ops_attr,
+                       struct rte_flow *flow,
+                       void *user_data,
+                       struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->async_destroy)) {
+                return flow_err(port_id,
+                                ops->async_destroy(dev, queue_id,
+                                                   q_ops_attr, flow,
+                                                   user_data, error),
+                                error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_push(uint16_t port_id,
+              uint32_t queue_id,
+              struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->push)) {
+                return flow_err(port_id,
+                                ops->push(dev, queue_id, error),
+                                error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_pull(uint16_t port_id,
+              uint32_t queue_id,
+              struct rte_flow_q_op_res res[],
+              uint16_t n_res,
+              struct rte_flow_error *error)
+{
+        struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+        const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+        int ret;
+
+        if (unlikely(!ops))
+                return -rte_errno;
+        if (likely(!!ops->pull)) {
+                ret = ops->pull(dev, queue_id, res, n_res, error);
+                return ret ? ret : flow_err(port_id, ret, error);
+        }
+        return rte_flow_error_set(error, ENOTSUP,
+                                  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 776e8ccc11..9e71a576f6 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4884,6 +4884,10 @@ rte_flow_flex_item_release(uint16_t port_id,
  *
  */
 struct rte_flow_port_info {
+        /**
+         * Maximum number of queues for asynchronous operations.
+         */
+        uint32_t max_nb_queues;
         /**
          * Maximum number of counters.
          * @see RTE_FLOW_ACTION_TYPE_COUNT
@@ -4901,6 +4905,21 @@ struct rte_flow_port_info {
         uint32_t max_nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Information about flow engine asynchronous queues.
+ * The value is only valid if @p port_info.max_nb_queues is not zero.
+ *
+ */
+struct rte_flow_queue_info {
+        /**
+         * Maximum number of operations a queue can hold.
+         */
+        uint32_t max_size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4912,6 +4931,9 @@ struct rte_flow_port_info {
  * @param[out] port_info
  *   A pointer to a structure of type *rte_flow_port_info*
  *   to be filled with the resources information of the port.
+ * @param[out] queue_info
+ *   A pointer to a structure of type *rte_flow_queue_info*
+ *   to be filled with the asynchronous queues information.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
@@ -4923,6 +4945,7 @@ __rte_experimental
 int
 rte_flow_info_get(uint16_t port_id,
                   struct rte_flow_port_info *port_info,
+                  struct rte_flow_queue_info *queue_info,
                   struct rte_flow_error *error);
 
 /**
@@ -4951,6 +4974,21 @@ struct rte_flow_port_attr {
         uint32_t nb_meters;
 };
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Flow engine asynchronous queue settings.
+ * A zero value means the default value is picked by the PMD.
+ *
+ */
+struct rte_flow_queue_attr {
+        /**
+         * Number of flow rule operations a queue can hold.
+         */
+        uint32_t size;
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice.
@@ -4970,6 +5008,11 @@ struct rte_flow_port_attr {
  *   Port identifier of Ethernet device.
  * @param[in] port_attr
  *   Port configuration attributes.
+ * @param[in] nb_queue
+ *   Number of flow queues to be configured.
+ * @param[in] queue_attr
+ *   Array that holds attributes for each flow queue.
+ *   The number of elements is set in @p nb_queue.
  * @param[out] error
  *   Perform verbose error reporting if not NULL.
  *   PMDs initialize this structure in case of error only.
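Since queues are not thread-safe, a natural arrangement is one flow queue
per datapath lcore. A configuration sketch under that assumption
(NB_FLOW_QUEUES is a hypothetical application constant; the attribute
array holds pointers, so one shared attribute structure is enough):

    #define NB_FLOW_QUEUES 4 /* assumed: one per datapath lcore */

    struct rte_flow_port_info port_info;
    struct rte_flow_queue_info queue_info;
    struct rte_flow_port_attr port_attr = { 0 };
    struct rte_flow_queue_attr queue_attr = { .size = 128 };
    const struct rte_flow_queue_attr *attrs[NB_FLOW_QUEUES];
    struct rte_flow_error error;
    int i;

    if (rte_flow_info_get(port_id, &port_info, &queue_info, &error) < 0)
        return -1;
    if (port_info.max_nb_queues < NB_FLOW_QUEUES ||
        queue_info.max_size < queue_attr.size)
        return -1; /* the device cannot satisfy the requested sizing */
    for (i = 0; i < NB_FLOW_QUEUES; i++)
        attrs[i] = &queue_attr; /* same sizing for every queue */
    if (rte_flow_configure(port_id, &port_attr, NB_FLOW_QUEUES,
                           attrs, &error) < 0)
        return -1;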
@@ -4981,6 +5024,8 @@ __rte_experimental
 int
 rte_flow_configure(uint16_t port_id,
                    const struct rte_flow_port_attr *port_attr,
+                   uint16_t nb_queue,
+                   const struct rte_flow_queue_attr *queue_attr[],
                    struct rte_flow_error *error);
 
 /**
@@ -5257,6 +5302,212 @@ rte_flow_template_table_destroy(uint16_t port_id,
                                 struct rte_flow_template_table *template_table,
                                 struct rte_flow_error *error);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation attributes.
+ */
+__extension__
+struct rte_flow_q_ops_attr {
+        /**
+         * When set, the requested operation will not be sent to the HW
+         * immediately. The application must call rte_flow_push()
+         * to actually send it.
+         */
+        uint32_t postpone:1;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule creation operation.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue used to insert the rule.
+ * @param[in] q_ops_attr
+ *   Rule creation operation attributes.
+ * @param[in] template_table
+ *   Template table to select templates from.
+ * @param[in] pattern
+ *   List of pattern items to be used.
+ *   The list order should match the order in the pattern template.
+ *   The spec is the only relevant member of the item that is used.
+ * @param[in] pattern_template_index
+ *   Pattern template index in the table.
+ * @param[in] actions
+ *   List of actions to be used.
+ *   The list order should match the order in the actions template.
+ * @param[in] actions_template_index
+ *   Actions template index in the table.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Handle on success, NULL otherwise and rte_errno is set.
+ *   The rule handle doesn't mean that the rule has been populated in HW.
+ *   Only the completion result indicates whether the operation
+ *   succeeded or failed.
+ */
+__rte_experimental
+struct rte_flow *
+rte_flow_async_create(uint16_t port_id,
+                      uint32_t queue_id,
+                      const struct rte_flow_q_ops_attr *q_ops_attr,
+                      struct rte_flow_template_table *template_table,
+                      const struct rte_flow_item pattern[],
+                      uint8_t pattern_template_index,
+                      const struct rte_flow_action actions[],
+                      uint8_t actions_template_index,
+                      void *user_data,
+                      struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Enqueue rule destruction operation.
+ *
+ * This function enqueues a destruction operation on the queue.
+ * The application should assume that after calling this function
+ * the rule handle is not valid anymore.
+ * Completion indicates the full removal of the rule from the HW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to destroy the rule.
+ *   This must match the queue on which the rule was created.
+ * @param[in] q_ops_attr
+ *   Rule destroy operation attributes.
+ * @param[in] flow
+ *   Flow handle to be destroyed.
+ * @param[in] user_data
+ *   The user data that will be returned on the completion events.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_async_destroy(uint16_t port_id,
+                       uint32_t queue_id,
+                       const struct rte_flow_q_ops_attr *q_ops_attr,
+                       struct rte_flow *flow,
+                       void *user_data,
+                       struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Push all internally stored rules to the HW.
+ * Postponed rules are rules that were inserted with the postpone flag set.
+ * Can be used to notify the HW about a batch of rules prepared by the SW
+ * to reduce the number of communications between the HW and SW.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue to be pushed.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_push(uint16_t port_id,
+              uint32_t queue_id,
+              struct rte_flow_error *error);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation status.
+ */
+enum rte_flow_q_op_status {
+        /**
+         * The operation was completed successfully.
+         */
+        RTE_FLOW_Q_OP_SUCCESS,
+        /**
+         * The operation was not completed successfully.
+         */
+        RTE_FLOW_Q_OP_ERROR,
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Queue operation results.
+ */
+__extension__
+struct rte_flow_q_op_res {
+        /**
+         * Returns the status of the operation that this completion signals.
+         */
+        enum rte_flow_q_op_status status;
+        /**
+         * The user data that was supplied with the enqueued operation.
+         */
+        void *user_data;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Pull rte_flow operation results.
+ * The application must invoke this function in order to complete
+ * the flow rule offloading and to retrieve the flow rule operation status.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param queue_id
+ *   Flow queue which is used to pull the operations.
+ * @param[out] res
+ *   Array of results that will be set.
+ * @param[in] n_res
+ *   Maximum number of results that can be returned.
+ *   This value is equal to the size of the res array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL.
+ *   PMDs initialize this structure in case of error only.
+ *
+ * @return
+ *   Number of results that were pulled,
+ *   a negative errno value otherwise and rte_errno is set
+ *   to one of the error codes defined:
+ *   - (ENODEV) if *port_id* invalid.
+ *   - (ENOSYS) if underlying device does not support this functionality.
+ *   - (EIO) if underlying device is removed.
+ *   - (EINVAL) if *queue_id* invalid.
+ */
+__rte_experimental
+int
+rte_flow_pull(uint16_t port_id,
+              uint32_t queue_id,
+              struct rte_flow_q_op_res res[],
+              uint16_t n_res,
+              struct rte_flow_error *error);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
index 2d96db1dc7..332783cf78 100644
--- a/lib/ethdev/rte_flow_driver.h
+++ b/lib/ethdev/rte_flow_driver.h
@@ -156,11 +156,14 @@ struct rte_flow_ops {
         int (*info_get)
                 (struct rte_eth_dev *dev,
                  struct rte_flow_port_info *port_info,
+                 struct rte_flow_queue_info *queue_info,
                  struct rte_flow_error *err);
         /** See rte_flow_configure() */
         int (*configure)
                 (struct rte_eth_dev *dev,
                  const struct rte_flow_port_attr *port_attr,
+                 uint16_t nb_queue,
+                 const struct rte_flow_queue_attr *queue_attr[],
                  struct rte_flow_error *err);
         /** See rte_flow_pattern_template_create() */
         struct rte_flow_pattern_template *(*pattern_template_create)
@@ -199,6 +202,38 @@ struct rte_flow_ops {
                 (struct rte_eth_dev *dev,
                  struct rte_flow_template_table *template_table,
                  struct rte_flow_error *err);
+        /** See rte_flow_async_create() */
+        struct rte_flow *(*async_create)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow_template_table *template_table,
+                 const struct rte_flow_item pattern[],
+                 uint8_t pattern_template_index,
+                 const struct rte_flow_action actions[],
+                 uint8_t actions_template_index,
+                 void *user_data,
+                 struct rte_flow_error *err);
+        /** See rte_flow_async_destroy() */
+        int (*async_destroy)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 const struct rte_flow_q_ops_attr *q_ops_attr,
+                 struct rte_flow *flow,
+                 void *user_data,
+                 struct rte_flow_error *err);
+        /** See rte_flow_push() */
+        int (*push)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 struct rte_flow_error *err);
+        /** See rte_flow_pull() */
+        int (*pull)
+                (struct rte_eth_dev *dev,
+                 uint32_t queue_id,
+                 struct rte_flow_q_op_res res[],
+                 uint16_t n_res,
+                 struct rte_flow_error *error);
 };
 
 /**
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 62ff791261..13c1a22118 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -272,6 +272,10 @@ EXPERIMENTAL {
         rte_flow_actions_template_destroy;
         rte_flow_template_table_create;
         rte_flow_template_table_destroy;
+        rte_flow_async_create;
+        rte_flow_async_destroy;
+        rte_flow_push;
+        rte_flow_pull;
 };
 
 INTERNAL {
-- 
2.18.2