From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski
To: Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
Subject: [PATCH v3] ethdev: fast path async flow API
Date: Tue, 6 Feb 2024 19:36:31 +0200
Message-ID: <20240206173631.2310255-1-dsosnowski@nvidia.com>
In-Reply-To: <20240131093523.1553028-1-dsosnowski@nvidia.com>
References: <20240131093523.1553028-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

This patch reworks the async flow API functions called in the data path
to reduce the overhead of flow operations at the library level.
The main source of the overhead was the indirection and checks done
while the ethdev library was fetching rte_flow_ops from a given driver.

This patch introduces the rte_flow_fp_ops struct, which holds callbacks
to the driver's implementation of the fast path async flow API functions.
Each driver implementing these functions must populate the flow_fp_ops
field inside the rte_eth_dev structure with a reference to its own
implementation. By default, the ethdev library provides dummy callbacks
whose implementations return ENOSYS.

This design relies on a few assumptions:

- The rte_flow_fp_ops struct for a given port is always available.
- Each callback is either:
  - the default provided by the library, or
  - set up by the driver.

As a result, no checks for the availability of the implementation are
needed at the library level in the data path. Any library-level
validation checks in the async flow API are compiled if and only if the
RTE_FLOW_DEBUG macro is defined.

This design is based on changes introduced in the ethdev library in [1].
These changes apply only to the following API functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()
- rte_flow_push()
- rte_flow_pull()
- rte_flow_async_action_handle_create()
- rte_flow_async_action_handle_destroy()
- rte_flow_async_action_handle_update()
- rte_flow_async_action_handle_query()
- rte_flow_async_action_handle_query_update()
- rte_flow_async_action_list_handle_create()
- rte_flow_async_action_list_handle_destroy()
- rte_flow_async_action_list_handle_query_update()

This patch also adjusts the mlx5 PMD to the introduced flow API changes.

[1] commit c87d435a4d79 ("ethdev: copy fast-path API into separate structure")

Signed-off-by: Dariusz Sosnowski
Acked-by: Ori Kam
---
v3:
- Documented RTE_FLOW_DEBUG build option.
- Enabled RTE_FLOW_DEBUG automatically on debug builds.
- Fixed pointer checks to compare against NULL explicitly.
v2:
- Fixed mlx5 PMD build issue with older versions of rdma-core.
---
 doc/guides/nics/build_and_test.rst     |   9 +-
 doc/guides/rel_notes/release_24_03.rst |  37 ++
 drivers/net/mlx5/mlx5_flow.c           | 608 +------------------------
 drivers/net/mlx5/mlx5_flow_hw.c        |  25 +
 lib/ethdev/ethdev_driver.c             |   4 +
 lib/ethdev/ethdev_driver.h             |   4 +
 lib/ethdev/meson.build                 |   4 +
 lib/ethdev/rte_flow.c                  | 519 ++++++++++++++++-----
 lib/ethdev/rte_flow_driver.h           | 277 ++++++-----
 lib/ethdev/version.map                 |   2 +
 10 files changed, 647 insertions(+), 842 deletions(-)

diff --git a/doc/guides/nics/build_and_test.rst b/doc/guides/nics/build_and_test.rst
index e8b29c2277..453fa74b39 100644
--- a/doc/guides/nics/build_and_test.rst
+++ b/doc/guides/nics/build_and_test.rst
@@ -36,11 +36,16 @@ The ethdev layer supports below build options for debug purpose:
 
   Build with debug code on Tx path.
 
+- ``RTE_FLOW_DEBUG`` (default **disabled**; enabled automatically on debug builds)
+
+  Build with debug code in asynchronous flow APIs.
+
 .. Note::
 
-   The ethdev library use above options to wrap debug code to trace invalid parameters
+   The ethdev library uses above options to wrap debug code to trace invalid parameters
    on data path APIs, so performance downgrade is expected when enabling those options.
-   Each PMD can decide to reuse them to wrap their own debug code in the Rx/Tx path.
+   Each PMD can decide to reuse them to wrap their own debug code in the Rx/Tx path
+   and in asynchronous flow APIs implementation.
 
 Running testpmd in Linux
 ------------------------

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 6f8ad27808..b62330b8b1 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -86,6 +86,43 @@ API Changes
 
 * gso: ``rte_gso_segment`` now returns -ENOTSUP for unknown protocols.
 
+* ethdev: PMDs implementing asynchronous flow operations are required to provide relevant functions
+  implementation through ``rte_flow_fp_ops`` struct, instead of ``rte_flow_ops`` struct.
+  Pointer to device-dependent ``rte_flow_fp_ops`` should be provided to ``rte_eth_dev.flow_fp_ops``.
+  This change applies to the following API functions:
+
+  * ``rte_flow_async_create``
+  * ``rte_flow_async_create_by_index``
+  * ``rte_flow_async_actions_update``
+  * ``rte_flow_async_destroy``
+  * ``rte_flow_push``
+  * ``rte_flow_pull``
+  * ``rte_flow_async_action_handle_create``
+  * ``rte_flow_async_action_handle_destroy``
+  * ``rte_flow_async_action_handle_update``
+  * ``rte_flow_async_action_handle_query``
+  * ``rte_flow_async_action_handle_query_update``
+  * ``rte_flow_async_action_list_handle_create``
+  * ``rte_flow_async_action_list_handle_destroy``
+  * ``rte_flow_async_action_list_handle_query_update``
+
+* ethdev: Removed the following fields from ``rte_flow_ops`` struct:
+
+  * ``async_create``
+  * ``async_create_by_index``
+  * ``async_actions_update``
+  * ``async_destroy``
+  * ``push``
+  * ``pull``
+  * ``async_action_handle_create``
+  * ``async_action_handle_destroy``
+  * ``async_action_handle_update``
+  * ``async_action_handle_query``
+  * ``async_action_handle_query_update``
+  * ``async_action_list_handle_create``
+  * ``async_action_list_handle_destroy``
+  * ``async_action_list_handle_query_update``
+
 ABI Changes
 -----------

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 85e8c77c81..0ff3b91596 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1055,98 +1055,13 @@ mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev,
 				 const struct rte_flow_group_attr *attr,
 				 const struct rte_flow_action actions[],
 				 struct rte_flow_error *error);
-static struct rte_flow *
-mlx5_flow_async_flow_create(struct rte_eth_dev *dev,
-			    uint32_t queue,
-			    const struct rte_flow_op_attr *attr,
-			    struct rte_flow_template_table *table,
-			    const struct rte_flow_item items[],
-			    uint8_t pattern_template_index,
-			    const struct rte_flow_action actions[],
-			    uint8_t action_template_index,
-			    void *user_data,
-			    struct rte_flow_error *error);
-static struct rte_flow *
-mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
-
uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow_template_table *table, - uint32_t rule_index, - const struct rte_flow_action actions[], - uint8_t action_template_index, - void *user_data, - struct rte_flow_error *error); -static int -mlx5_flow_async_flow_update(struct rte_eth_dev *dev, - uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow *flow, - const struct rte_flow_action actions[], - uint8_t action_template_index, - void *user_data, - struct rte_flow_error *error); -static int -mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev, - uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow *flow, - void *user_data, - struct rte_flow_error *error); -static int -mlx5_flow_pull(struct rte_eth_dev *dev, - uint32_t queue, - struct rte_flow_op_result res[], - uint16_t n_res, - struct rte_flow_error *error); -static int -mlx5_flow_push(struct rte_eth_dev *dev, - uint32_t queue, - struct rte_flow_error *error); - -static struct rte_flow_action_handle * -mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - const struct rte_flow_indir_action_conf *conf, - const struct rte_flow_action *action, - void *user_data, - struct rte_flow_error *error); - -static int -mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow_action_handle *handle, - const void *update, - void *user_data, - struct rte_flow_error *error); static int -mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow_action_handle *handle, - void *user_data, - struct rte_flow_error *error); - -static int -mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - const struct rte_flow_action_handle *handle, - void *data, - void *user_data, - struct rte_flow_error *error); 
-static int mlx5_action_handle_query_update(struct rte_eth_dev *dev, struct rte_flow_action_handle *handle, const void *update, void *query, enum rte_flow_query_update_mode qu_mode, struct rte_flow_error *error); -static int -mlx5_flow_async_action_handle_query_update - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_handle *action_handle, - const void *update, void *query, - enum rte_flow_query_update_mode qu_mode, - void *user_data, struct rte_flow_error *error); static struct rte_flow_action_list_handle * mlx5_action_list_handle_create(struct rte_eth_dev *dev, @@ -1159,20 +1074,6 @@ mlx5_action_list_handle_destroy(struct rte_eth_dev *dev, struct rte_flow_action_list_handle *handle, struct rte_flow_error *error); -static struct rte_flow_action_list_handle * -mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *attr, - const struct - rte_flow_indir_action_conf *conf, - const struct rte_flow_action *actions, - void *user_data, - struct rte_flow_error *error); -static int -mlx5_flow_async_action_list_handle_destroy - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_list_handle *action_handle, - void *user_data, struct rte_flow_error *error); static int mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev, const @@ -1180,17 +1081,7 @@ mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev, const void **update, void **query, enum rte_flow_query_update_mode mode, struct rte_flow_error *error); -static int -mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *attr, - const struct - rte_flow_action_list_handle *handle, - const void **update, - void **query, - enum rte_flow_query_update_mode mode, - void *user_data, - struct rte_flow_error *error); + static int 
mlx5_flow_calc_table_hash(struct rte_eth_dev *dev, const struct rte_flow_template_table *table, @@ -1232,26 +1123,8 @@ static const struct rte_flow_ops mlx5_flow_ops = { .template_table_create = mlx5_flow_table_create, .template_table_destroy = mlx5_flow_table_destroy, .group_set_miss_actions = mlx5_flow_group_set_miss_actions, - .async_create = mlx5_flow_async_flow_create, - .async_create_by_index = mlx5_flow_async_flow_create_by_index, - .async_destroy = mlx5_flow_async_flow_destroy, - .pull = mlx5_flow_pull, - .push = mlx5_flow_push, - .async_action_handle_create = mlx5_flow_async_action_handle_create, - .async_action_handle_update = mlx5_flow_async_action_handle_update, - .async_action_handle_query_update = - mlx5_flow_async_action_handle_query_update, - .async_action_handle_query = mlx5_flow_async_action_handle_query, - .async_action_handle_destroy = mlx5_flow_async_action_handle_destroy, - .async_actions_update = mlx5_flow_async_flow_update, - .async_action_list_handle_create = - mlx5_flow_async_action_list_handle_create, - .async_action_list_handle_destroy = - mlx5_flow_async_action_list_handle_destroy, .action_list_handle_query_update = mlx5_flow_action_list_handle_query_update, - .async_action_list_handle_query_update = - mlx5_flow_async_action_list_handle_query_update, .flow_calc_table_hash = mlx5_flow_calc_table_hash, }; @@ -9427,424 +9300,6 @@ mlx5_flow_group_set_miss_actions(struct rte_eth_dev *dev, return fops->group_set_miss_actions(dev, group_id, attr, actions, error); } -/** - * Enqueue flow creation. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue_id - * The queue to create the flow. - * @param[in] attr - * Pointer to the flow operation attributes. - * @param[in] items - * Items with flow spec value. - * @param[in] pattern_template_index - * The item pattern flow follows from the table. - * @param[in] actions - * Action with flow spec value. 
- * @param[in] action_template_index - * The action pattern flow follows from the table. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * Flow pointer on success, NULL otherwise and rte_errno is set. - */ -static struct rte_flow * -mlx5_flow_async_flow_create(struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *attr, - struct rte_flow_template_table *table, - const struct rte_flow_item items[], - uint8_t pattern_template_index, - const struct rte_flow_action actions[], - uint8_t action_template_index, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr fattr = {0}; - - if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) { - rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q create with incorrect steering mode"); - return NULL; - } - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->async_flow_create(dev, queue_id, attr, table, - items, pattern_template_index, - actions, action_template_index, - user_data, error); -} - -/** - * Enqueue flow creation by index. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue_id - * The queue to create the flow. - * @param[in] attr - * Pointer to the flow operation attributes. - * @param[in] rule_index - * The item pattern flow follows from the table. - * @param[in] actions - * Action with flow spec value. - * @param[in] action_template_index - * The action pattern flow follows from the table. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * Flow pointer on success, NULL otherwise and rte_errno is set. 
- */ -static struct rte_flow * -mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *attr, - struct rte_flow_template_table *table, - uint32_t rule_index, - const struct rte_flow_action actions[], - uint8_t action_template_index, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr fattr = {0}; - - if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) { - rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q create with incorrect steering mode"); - return NULL; - } - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->async_flow_create_by_index(dev, queue_id, attr, table, - rule_index, actions, action_template_index, - user_data, error); -} - -/** - * Enqueue flow update. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * The queue to destroy the flow. - * @param[in] attr - * Pointer to the flow operation attributes. - * @param[in] flow - * Pointer to the flow to be destroyed. - * @param[in] actions - * Action with flow spec value. - * @param[in] action_template_index - * The action pattern flow follows from the table. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. 
- */ -static int -mlx5_flow_async_flow_update(struct rte_eth_dev *dev, - uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow *flow, - const struct rte_flow_action actions[], - uint8_t action_template_index, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr fattr = {0}; - - if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q update with incorrect steering mode"); - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->async_flow_update(dev, queue, attr, flow, - actions, action_template_index, user_data, error); -} - -/** - * Enqueue flow destruction. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * The queue to destroy the flow. - * @param[in] attr - * Pointer to the flow operation attributes. - * @param[in] flow - * Pointer to the flow to be destroyed. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. - */ -static int -mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev, - uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow *flow, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr fattr = {0}; - - if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q destroy with incorrect steering mode"); - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->async_flow_destroy(dev, queue, attr, flow, - user_data, error); -} - -/** - * Pull the enqueued flows. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * The queue to pull the result. 
- * @param[in/out] res - * Array to save the results. - * @param[in] n_res - * Available result with the array. - * @param[out] error - * Pointer to error structure. - * - * @return - * Result number on success, negative value otherwise and rte_errno is set. - */ -static int -mlx5_flow_pull(struct rte_eth_dev *dev, - uint32_t queue, - struct rte_flow_op_result res[], - uint16_t n_res, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr attr = {0}; - - if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q pull with incorrect steering mode"); - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->pull(dev, queue, res, n_res, error); -} - -/** - * Push the enqueued flows. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * The queue to push the flows. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. - */ -static int -mlx5_flow_push(struct rte_eth_dev *dev, - uint32_t queue, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - struct rte_flow_attr attr = {0}; - - if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_HW) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "flow_q push with incorrect steering mode"); - fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - return fops->push(dev, queue, error); -} - -/** - * Create shared action. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * Which queue to be used.. - * @param[in] attr - * Operation attribute. - * @param[in] conf - * Indirect action configuration. - * @param[in] action - * rte_flow action detail. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. 
- * - * @return - * Action handle on success, NULL otherwise and rte_errno is set. - */ -static struct rte_flow_action_handle * -mlx5_flow_async_action_handle_create(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - const struct rte_flow_indir_action_conf *conf, - const struct rte_flow_action *action, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops = - flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - - return fops->async_action_create(dev, queue, attr, conf, action, - user_data, error); -} - -/** - * Update shared action. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * Which queue to be used.. - * @param[in] attr - * Operation attribute. - * @param[in] handle - * Action handle to be updated. - * @param[in] update - * Update value. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. 
- */ -static int -mlx5_flow_async_action_handle_update(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow_action_handle *handle, - const void *update, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops = - flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - - return fops->async_action_update(dev, queue, attr, handle, - update, user_data, error); -} - -static int -mlx5_flow_async_action_handle_query_update - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_handle *action_handle, - const void *update, void *query, - enum rte_flow_query_update_mode qu_mode, - void *user_data, struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops = - flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - - if (!fops || !fops->async_action_query_update) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "async query_update not supported"); - return fops->async_action_query_update - (dev, queue_id, op_attr, action_handle, - update, query, qu_mode, user_data, error); -} - -/** - * Query shared action. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * Which queue to be used.. - * @param[in] attr - * Operation attribute. - * @param[in] handle - * Action handle to be updated. - * @param[in] data - * Pointer query result data. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. 
- */ -static int -mlx5_flow_async_action_handle_query(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - const struct rte_flow_action_handle *handle, - void *data, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops = - flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - - return fops->async_action_query(dev, queue, attr, handle, - data, user_data, error); -} - -/** - * Destroy shared action. - * - * @param[in] dev - * Pointer to the rte_eth_dev structure. - * @param[in] queue - * Which queue to be used.. - * @param[in] attr - * Operation attribute. - * @param[in] handle - * Action handle to be destroyed. - * @param[in] user_data - * Pointer to the user_data. - * @param[out] error - * Pointer to error structure. - * - * @return - * 0 on success, negative value otherwise and rte_errno is set. - */ -static int -mlx5_flow_async_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue, - const struct rte_flow_op_attr *attr, - struct rte_flow_action_handle *handle, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops = - flow_get_drv_ops(MLX5_FLOW_TYPE_HW); - - return fops->async_action_destroy(dev, queue, attr, handle, - user_data, error); -} - /** * Allocate a new memory for the counter values wrapped by all the needed * management. 
@@ -11015,41 +10470,6 @@ mlx5_action_list_handle_destroy(struct rte_eth_dev *dev, return fops->action_list_handle_destroy(dev, handle, error); } -static struct rte_flow_action_list_handle * -mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev, - uint32_t queue_id, - const struct - rte_flow_op_attr *op_attr, - const struct - rte_flow_indir_action_conf *conf, - const struct rte_flow_action *actions, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - - MLX5_DRV_FOPS_OR_ERR(dev, fops, async_action_list_handle_create, NULL); - return fops->async_action_list_handle_create(dev, queue_id, op_attr, - conf, actions, user_data, - error); -} - -static int -mlx5_flow_async_action_list_handle_destroy - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_list_handle *action_handle, - void *user_data, struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - - MLX5_DRV_FOPS_OR_ERR(dev, fops, - async_action_list_handle_destroy, ENOTSUP); - return fops->async_action_list_handle_destroy(dev, queue_id, op_attr, - action_handle, user_data, - error); -} - static int mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev, const @@ -11065,32 +10485,6 @@ mlx5_flow_action_list_handle_query_update(struct rte_eth_dev *dev, return fops->action_list_handle_query_update(dev, handle, update, query, mode, error); } - -static int -mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev, - uint32_t queue_id, - const - struct rte_flow_op_attr *op_attr, - const struct - rte_flow_action_list_handle *handle, - const void **update, - void **query, - enum - rte_flow_query_update_mode mode, - void *user_data, - struct rte_flow_error *error) -{ - const struct mlx5_flow_driver_ops *fops; - - MLX5_DRV_FOPS_OR_ERR(dev, fops, - async_action_list_handle_query_update, ENOTSUP); - return fops->async_action_list_handle_query_update(dev, queue_id, 
op_attr, - handle, update, - query, mode, - user_data, error); -} - - static int mlx5_flow_calc_table_hash(struct rte_eth_dev *dev, const struct rte_flow_template_table *table, diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index da873ae2e2..c65ebfbba2 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -3,6 +3,7 @@ */ #include +#include #include @@ -14,6 +15,9 @@ #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) #include "mlx5_hws_cnt.h" +/** Fast path async flow API functions. */ +static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops; + /* The maximum actions support in the flow. */ #define MLX5_HW_MAX_ACTS 16 @@ -9543,6 +9547,7 @@ flow_hw_configure(struct rte_eth_dev *dev, mlx5_free(_queue_attr); if (port_attr->flags & RTE_FLOW_PORT_FLAG_STRICT_QUEUE) priv->hws_strict_queue = 1; + dev->flow_fp_ops = &mlx5_flow_hw_fp_ops; return 0; err: if (priv->hws_ctpool) { @@ -9617,6 +9622,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev) if (!priv->dr_ctx) return; + dev->flow_fp_ops = &rte_flow_fp_default_ops; flow_hw_rxq_flag_set(dev, false); flow_hw_flush_all_ctrl_flows(dev); flow_hw_cleanup_tx_repr_tagging(dev); @@ -12992,4 +12998,23 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev, mlx5_free(handle); return 0; } + +static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = { + .async_create = flow_hw_async_flow_create, + .async_create_by_index = flow_hw_async_flow_create_by_index, + .async_actions_update = flow_hw_async_flow_update, + .async_destroy = flow_hw_async_flow_destroy, + .push = flow_hw_push, + .pull = flow_hw_pull, + .async_action_handle_create = flow_hw_action_handle_create, + .async_action_handle_destroy = flow_hw_action_handle_destroy, + .async_action_handle_update = flow_hw_action_handle_update, + .async_action_handle_query = flow_hw_action_handle_query, + .async_action_handle_query_update = flow_hw_async_action_handle_query_update, + 
	.async_action_list_handle_create = flow_hw_async_action_list_handle_create,
+	.async_action_list_handle_destroy = flow_hw_async_action_list_handle_destroy,
+	.async_action_list_handle_query_update =
+		flow_hw_async_action_list_handle_query_update,
+};
+
 #endif
diff --git a/lib/ethdev/ethdev_driver.c b/lib/ethdev/ethdev_driver.c
index bd917a15fc..34909a3018 100644
--- a/lib/ethdev/ethdev_driver.c
+++ b/lib/ethdev/ethdev_driver.c
@@ -10,6 +10,7 @@
 #include "ethdev_driver.h"
 #include "ethdev_private.h"
+#include "rte_flow_driver.h"
 
 /**
  * A set of values to describe the possible states of a switch domain.
  */
@@ -110,6 +111,7 @@ rte_eth_dev_allocate(const char *name)
 	}
 
 	eth_dev = eth_dev_get(port_id);
+	eth_dev->flow_fp_ops = &rte_flow_fp_default_ops;
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
 	eth_dev->data->backer_port_id = RTE_MAX_ETHPORTS;
@@ -245,6 +247,8 @@ rte_eth_dev_release_port(struct rte_eth_dev *eth_dev)
 
 	eth_dev_fp_ops_reset(rte_eth_fp_ops + eth_dev->data->port_id);
 
+	eth_dev->flow_fp_ops = &rte_flow_fp_default_ops;
+
 	rte_spinlock_lock(rte_mcfg_ethdev_get_lock());
 	eth_dev->state = RTE_ETH_DEV_UNUSED;
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index b482cd12bb..b2e879ae1d 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -71,6 +71,10 @@ struct rte_eth_dev {
 	struct rte_eth_dev_data *data;
 	void *process_private; /**< Pointer to per-process device data */
 	const struct eth_dev_ops *dev_ops; /**< Functions exported by PMD */
+	/**
+	 * Fast path flow API functions exported by PMD.
+	 */
+	const struct rte_flow_fp_ops *flow_fp_ops;
 	struct rte_device *device; /**< Backing device */
 	struct rte_intr_handle *intr_handle; /**< Device interrupt handle */
diff --git a/lib/ethdev/meson.build b/lib/ethdev/meson.build
index 3497aa1548..b8859de11b 100644
--- a/lib/ethdev/meson.build
+++ b/lib/ethdev/meson.build
@@ -49,3 +49,7 @@ deps += ['net', 'kvargs', 'meter', 'telemetry']
 if is_freebsd
     annotate_locks = false
 endif
+
+if get_option('buildtype').contains('debug')
+    cflags += ['-DRTE_FLOW_DEBUG']
+endif
diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index f49d1d3767..02522730b3 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -2013,16 +2013,26 @@ rte_flow_async_create(uint16_t port_id,
 		      struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	struct rte_flow *flow;
 
-	flow = ops->async_create(dev, queue_id,
-				 op_attr, template_table,
-				 pattern, pattern_template_index,
-				 actions, actions_template_index,
-				 user_data, error);
-	if (flow == NULL)
-		flow_err(port_id, -rte_errno, error);
+#ifdef RTE_FLOW_DEBUG
+	if (!rte_eth_dev_is_valid_port(port_id)) {
+		rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENODEV));
+		return NULL;
+	}
+	if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_create == NULL) {
+		rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   rte_strerror(ENOSYS));
+		return NULL;
+	}
+#endif
+
+	flow = dev->flow_fp_ops->async_create(dev, queue_id,
+					      op_attr, template_table,
+					      pattern, pattern_template_index,
+					      actions, actions_template_index,
+					      user_data, error);
 
 	rte_flow_trace_async_create(port_id, queue_id, op_attr, template_table,
 				    pattern, pattern_template_index, actions,
@@ -2043,16 +2053,24 @@ rte_flow_async_create_by_index(uint16_t port_id,
 			       struct rte_flow_error *error)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
-	const struct rte_flow_ops
*ops = rte_flow_ops_get(port_id, error); - struct rte_flow *flow; - flow = ops->async_create_by_index(dev, queue_id, - op_attr, template_table, rule_index, - actions, actions_template_index, - user_data, error); - if (flow == NULL) - flow_err(port_id, -rte_errno, error); - return flow; +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) { + rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + return NULL; + } + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_create_by_index == NULL) { + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } +#endif + + return dev->flow_fp_ops->async_create_by_index(dev, queue_id, + op_attr, template_table, rule_index, + actions, actions_template_index, + user_data, error); } int @@ -2064,14 +2082,20 @@ rte_flow_async_destroy(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - ret = flow_err(port_id, - ops->async_destroy(dev, queue_id, - op_attr, flow, - user_data, error), - error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_destroy == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_destroy(dev, queue_id, + op_attr, flow, + user_data, error); rte_flow_trace_async_destroy(port_id, queue_id, op_attr, flow, user_data, ret); @@ -2090,15 +2114,21 @@ rte_flow_async_actions_update(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - ret = flow_err(port_id, - 
ops->async_actions_update(dev, queue_id, op_attr, - flow, actions, - actions_template_index, - user_data, error), - error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_actions_update == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_actions_update(dev, queue_id, op_attr, + flow, actions, + actions_template_index, + user_data, error); rte_flow_trace_async_actions_update(port_id, queue_id, op_attr, flow, actions, actions_template_index, @@ -2113,12 +2143,18 @@ rte_flow_push(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - ret = flow_err(port_id, - ops->push(dev, queue_id, error), - error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->push == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->push(dev, queue_id, error); rte_flow_trace_push(port_id, queue_id, ret); @@ -2133,16 +2169,22 @@ rte_flow_pull(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - int rc; - ret = ops->pull(dev, queue_id, res, n_res, error); - rc = ret ? 
ret : flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->pull == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->pull(dev, queue_id, res, n_res, error); - rte_flow_trace_pull(port_id, queue_id, res, n_res, rc); + rte_flow_trace_pull(port_id, queue_id, res, n_res, ret); - return rc; + return ret; } struct rte_flow_action_handle * @@ -2155,13 +2197,24 @@ rte_flow_async_action_handle_create(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); struct rte_flow_action_handle *handle; - handle = ops->async_action_handle_create(dev, queue_id, op_attr, - indir_action_conf, action, user_data, error); - if (handle == NULL) - flow_err(port_id, -rte_errno, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) { + rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + return NULL; + } + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_handle_create == NULL) { + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } +#endif + + handle = dev->flow_fp_ops->async_action_handle_create(dev, queue_id, op_attr, + indir_action_conf, action, + user_data, error); rte_flow_trace_async_action_handle_create(port_id, queue_id, op_attr, indir_action_conf, action, @@ -2179,12 +2232,19 @@ rte_flow_async_action_handle_destroy(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - ret = 
ops->async_action_handle_destroy(dev, queue_id, op_attr, - action_handle, user_data, error); - ret = flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_handle_destroy == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_action_handle_destroy(dev, queue_id, op_attr, + action_handle, user_data, error); rte_flow_trace_async_action_handle_destroy(port_id, queue_id, op_attr, action_handle, user_data, ret); @@ -2202,12 +2262,19 @@ rte_flow_async_action_handle_update(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - ret = ops->async_action_handle_update(dev, queue_id, op_attr, - action_handle, update, user_data, error); - ret = flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_handle_update == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_action_handle_update(dev, queue_id, op_attr, + action_handle, update, user_data, error); rte_flow_trace_async_action_handle_update(port_id, queue_id, op_attr, action_handle, update, @@ -2226,14 +2293,19 @@ rte_flow_async_action_handle_query(uint16_t port_id, struct rte_flow_error *error) { struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error); int ret; - if (unlikely(!ops)) - return -rte_errno; - ret = 
ops->async_action_handle_query(dev, queue_id, op_attr, - action_handle, data, user_data, error); - ret = flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_handle_query == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_action_handle_query(dev, queue_id, op_attr, + action_handle, data, user_data, error); rte_flow_trace_async_action_handle_query(port_id, queue_id, op_attr, action_handle, data, user_data, @@ -2276,24 +2348,21 @@ rte_flow_async_action_handle_query_update(uint16_t port_id, uint32_t queue_id, void *user_data, struct rte_flow_error *error) { - int ret; - struct rte_eth_dev *dev; - const struct rte_flow_ops *ops; + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); - if (!handle) - return -EINVAL; - if (!update && !query) - return -EINVAL; - dev = &rte_eth_devices[port_id]; - ops = rte_flow_ops_get(port_id, error); - if (!ops || !ops->async_action_handle_query_update) - return -ENOTSUP; - ret = ops->async_action_handle_query_update(dev, queue_id, attr, - handle, update, - query, mode, - user_data, error); - return flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_handle_query_update == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + return dev->flow_fp_ops->async_action_handle_query_update(dev, queue_id, attr, + handle, update, + query, mode, + user_data, error); } struct 
rte_flow_action_list_handle * @@ -2353,24 +2422,28 @@ rte_flow_async_action_list_handle_create(uint16_t port_id, uint32_t queue_id, void *user_data, struct rte_flow_error *error) { - int ret; - struct rte_eth_dev *dev; - const struct rte_flow_ops *ops; + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; struct rte_flow_action_list_handle *handle; + int ret; - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, NULL); - ops = rte_flow_ops_get(port_id, error); - if (!ops || !ops->async_action_list_handle_create) { - rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "action_list handle not supported"); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) { + rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); return NULL; } - dev = &rte_eth_devices[port_id]; - handle = ops->async_action_list_handle_create(dev, queue_id, attr, conf, - actions, user_data, - error); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_list_handle_create == NULL) { + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; + } +#endif + + handle = dev->flow_fp_ops->async_action_list_handle_create(dev, queue_id, attr, conf, + actions, user_data, + error); ret = flow_err(port_id, -rte_errno, error); + rte_flow_trace_async_action_list_handle_create(port_id, queue_id, attr, conf, actions, user_data, ret); @@ -2383,20 +2456,21 @@ rte_flow_async_action_list_handle_destroy(uint16_t port_id, uint32_t queue_id, struct rte_flow_action_list_handle *handle, void *user_data, struct rte_flow_error *error) { + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; int ret; - struct rte_eth_dev *dev; - const struct rte_flow_ops *ops; - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); - ops = rte_flow_ops_get(port_id, error); - if (!ops || !ops->async_action_list_handle_destroy) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, 
NULL, - "async action_list handle not supported"); - dev = &rte_eth_devices[port_id]; - ret = ops->async_action_list_handle_destroy(dev, queue_id, op_attr, - handle, user_data, error); - ret = flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || dev->flow_fp_ops->async_action_list_handle_destroy == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_action_list_handle_destroy(dev, queue_id, op_attr, + handle, user_data, error); + rte_flow_trace_async_action_list_handle_destroy(port_id, queue_id, op_attr, handle, user_data, ret); @@ -2437,22 +2511,24 @@ rte_flow_async_action_list_handle_query_update(uint16_t port_id, uint32_t queue_ enum rte_flow_query_update_mode mode, void *user_data, struct rte_flow_error *error) { + struct rte_eth_dev *dev = &rte_eth_devices[port_id]; int ret; - struct rte_eth_dev *dev; - const struct rte_flow_ops *ops; - RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); - ops = rte_flow_ops_get(port_id, error); - if (!ops || !ops->async_action_list_handle_query_update) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, - "action_list async query_update not supported"); - dev = &rte_eth_devices[port_id]; - ret = ops->async_action_list_handle_query_update(dev, queue_id, attr, - handle, update, query, - mode, user_data, - error); - ret = flow_err(port_id, ret, error); +#ifdef RTE_FLOW_DEBUG + if (!rte_eth_dev_is_valid_port(port_id)) + return rte_flow_error_set(error, ENODEV, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENODEV)); + if (dev->flow_fp_ops == NULL || + dev->flow_fp_ops->async_action_list_handle_query_update == NULL) + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + 
rte_strerror(ENOSYS)); +#endif + + ret = dev->flow_fp_ops->async_action_list_handle_query_update(dev, queue_id, attr, + handle, update, query, + mode, user_data, + error); + rte_flow_trace_async_action_list_handle_query_update(port_id, queue_id, attr, handle, update, query, @@ -2481,3 +2557,216 @@ rte_flow_calc_table_hash(uint16_t port_id, const struct rte_flow_template_table hash, error); return flow_err(port_id, ret, error); } + +static struct rte_flow * +rte_flow_dummy_async_create(struct rte_eth_dev *dev __rte_unused, + uint32_t queue __rte_unused, + const struct rte_flow_op_attr *attr __rte_unused, + struct rte_flow_template_table *table __rte_unused, + const struct rte_flow_item items[] __rte_unused, + uint8_t pattern_template_index __rte_unused, + const struct rte_flow_action actions[] __rte_unused, + uint8_t action_template_index __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; +} + +static struct rte_flow * +rte_flow_dummy_async_create_by_index(struct rte_eth_dev *dev __rte_unused, + uint32_t queue __rte_unused, + const struct rte_flow_op_attr *attr __rte_unused, + struct rte_flow_template_table *table __rte_unused, + uint32_t rule_index __rte_unused, + const struct rte_flow_action actions[] __rte_unused, + uint8_t action_template_index __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; +} + +static int +rte_flow_dummy_async_actions_update(struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + struct rte_flow *flow __rte_unused, + const struct rte_flow_action actions[] __rte_unused, + uint8_t actions_template_index __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + 
return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_async_destroy(struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + struct rte_flow *flow __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_push(struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_pull(struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + struct rte_flow_op_result res[] __rte_unused, + uint16_t n_res __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static struct rte_flow_action_handle * +rte_flow_dummy_async_action_handle_create( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + const struct rte_flow_indir_action_conf *indir_action_conf __rte_unused, + const struct rte_flow_action *action __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + return NULL; +} + +static int +rte_flow_dummy_async_action_handle_destroy( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + struct rte_flow_action_handle *action_handle __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, 
NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_async_action_handle_update( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + struct rte_flow_action_handle *action_handle __rte_unused, + const void *update __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_async_action_handle_query( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + const struct rte_flow_action_handle *action_handle __rte_unused, + void *data __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_async_action_handle_query_update( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *attr __rte_unused, + struct rte_flow_action_handle *handle __rte_unused, + const void *update __rte_unused, + void *query __rte_unused, + enum rte_flow_query_update_mode mode __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static struct rte_flow_action_list_handle * +rte_flow_dummy_async_action_list_handle_create( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *attr __rte_unused, + const struct rte_flow_indir_action_conf *conf __rte_unused, + const struct rte_flow_action *actions __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); + 
return NULL; +} + +static int +rte_flow_dummy_async_action_list_handle_destroy( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *op_attr __rte_unused, + struct rte_flow_action_list_handle *handle __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +static int +rte_flow_dummy_async_action_list_handle_query_update( + struct rte_eth_dev *dev __rte_unused, + uint32_t queue_id __rte_unused, + const struct rte_flow_op_attr *attr __rte_unused, + const struct rte_flow_action_list_handle *handle __rte_unused, + const void **update __rte_unused, + void **query __rte_unused, + enum rte_flow_query_update_mode mode __rte_unused, + void *user_data __rte_unused, + struct rte_flow_error *error) +{ + return rte_flow_error_set(error, ENOSYS, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, + rte_strerror(ENOSYS)); +} + +struct rte_flow_fp_ops rte_flow_fp_default_ops = { + .async_create = rte_flow_dummy_async_create, + .async_create_by_index = rte_flow_dummy_async_create_by_index, + .async_actions_update = rte_flow_dummy_async_actions_update, + .async_destroy = rte_flow_dummy_async_destroy, + .push = rte_flow_dummy_push, + .pull = rte_flow_dummy_pull, + .async_action_handle_create = rte_flow_dummy_async_action_handle_create, + .async_action_handle_destroy = rte_flow_dummy_async_action_handle_destroy, + .async_action_handle_update = rte_flow_dummy_async_action_handle_update, + .async_action_handle_query = rte_flow_dummy_async_action_handle_query, + .async_action_handle_query_update = rte_flow_dummy_async_action_handle_query_update, + .async_action_list_handle_create = rte_flow_dummy_async_action_list_handle_create, + .async_action_list_handle_destroy = rte_flow_dummy_async_action_list_handle_destroy, + .async_action_list_handle_query_update = + rte_flow_dummy_async_action_list_handle_query_update, 
+}; diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h index f35f659503..dd9d01045d 100644 --- a/lib/ethdev/rte_flow_driver.h +++ b/lib/ethdev/rte_flow_driver.h @@ -234,122 +234,12 @@ struct rte_flow_ops { const struct rte_flow_group_attr *attr, const struct rte_flow_action actions[], struct rte_flow_error *err); - /** See rte_flow_async_create() */ - struct rte_flow *(*async_create) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_template_table *template_table, - const struct rte_flow_item pattern[], - uint8_t pattern_template_index, - const struct rte_flow_action actions[], - uint8_t actions_template_index, - void *user_data, - struct rte_flow_error *err); - /** See rte_flow_async_create_by_index() */ - struct rte_flow *(*async_create_by_index) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_template_table *template_table, - uint32_t rule_index, - const struct rte_flow_action actions[], - uint8_t actions_template_index, - void *user_data, - struct rte_flow_error *err); - /** See rte_flow_async_destroy() */ - int (*async_destroy) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow *flow, - void *user_data, - struct rte_flow_error *err); - /** See rte_flow_push() */ - int (*push) - (struct rte_eth_dev *dev, - uint32_t queue_id, - struct rte_flow_error *err); - /** See rte_flow_pull() */ - int (*pull) - (struct rte_eth_dev *dev, - uint32_t queue_id, - struct rte_flow_op_result res[], - uint16_t n_res, - struct rte_flow_error *error); - /** See rte_flow_async_action_handle_create() */ - struct rte_flow_action_handle *(*async_action_handle_create) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - const struct rte_flow_indir_action_conf *indir_action_conf, - const struct rte_flow_action *action, - void *user_data, - struct 
rte_flow_error *err); - /** See rte_flow_async_action_handle_destroy() */ - int (*async_action_handle_destroy) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_handle *action_handle, - void *user_data, - struct rte_flow_error *error); - /** See rte_flow_async_action_handle_update() */ - int (*async_action_handle_update) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_handle *action_handle, - const void *update, - void *user_data, - struct rte_flow_error *error); - /** See rte_flow_async_action_handle_query() */ - int (*async_action_handle_query) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - const struct rte_flow_action_handle *action_handle, - void *data, - void *user_data, - struct rte_flow_error *error); - /** See rte_flow_async_action_handle_query_update */ - int (*async_action_handle_query_update) - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_handle *action_handle, - const void *update, void *query, - enum rte_flow_query_update_mode qu_mode, - void *user_data, struct rte_flow_error *error); /** See rte_flow_actions_update(). 
*/ int (*actions_update) (struct rte_eth_dev *dev, struct rte_flow *flow, const struct rte_flow_action actions[], struct rte_flow_error *error); - /** See rte_flow_async_actions_update() */ - int (*async_actions_update) - (struct rte_eth_dev *dev, - uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow *flow, - const struct rte_flow_action actions[], - uint8_t actions_template_index, - void *user_data, - struct rte_flow_error *error); - /** @see rte_flow_async_action_list_handle_create() */ - struct rte_flow_action_list_handle * - (*async_action_list_handle_create) - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *attr, - const struct rte_flow_indir_action_conf *conf, - const struct rte_flow_action *actions, - void *user_data, struct rte_flow_error *error); - /** @see rte_flow_async_action_list_handle_destroy() */ - int (*async_action_list_handle_destroy) - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *op_attr, - struct rte_flow_action_list_handle *action_handle, - void *user_data, struct rte_flow_error *error); /** @see rte_flow_action_list_handle_query_update() */ int (*action_list_handle_query_update) (struct rte_eth_dev *dev, @@ -357,14 +247,6 @@ struct rte_flow_ops { const void **update, void **query, enum rte_flow_query_update_mode mode, struct rte_flow_error *error); - /** @see rte_flow_async_action_list_handle_query_update() */ - int (*async_action_list_handle_query_update) - (struct rte_eth_dev *dev, uint32_t queue_id, - const struct rte_flow_op_attr *attr, - const struct rte_flow_action_list_handle *handle, - const void **update, void **query, - enum rte_flow_query_update_mode mode, - void *user_data, struct rte_flow_error *error); /** @see rte_flow_calc_table_hash() */ int (*flow_calc_table_hash) (struct rte_eth_dev *dev, const struct rte_flow_template_table *table, @@ -394,6 +276,165 @@ rte_flow_ops_get(uint16_t port_id, struct rte_flow_error *error); int 
 rte_flow_restore_info_dynflag_register(void);
+
+/** @internal Enqueue rule creation operation. */
+typedef struct rte_flow *(*rte_flow_async_create_t)(struct rte_eth_dev *dev,
+						    uint32_t queue,
+						    const struct rte_flow_op_attr *attr,
+						    struct rte_flow_template_table *table,
+						    const struct rte_flow_item *items,
+						    uint8_t pattern_template_index,
+						    const struct rte_flow_action *actions,
+						    uint8_t action_template_index,
+						    void *user_data,
+						    struct rte_flow_error *error);
+
+/** @internal Enqueue rule creation by index operation. */
+typedef struct rte_flow *(*rte_flow_async_create_by_index_t)(struct rte_eth_dev *dev,
+						    uint32_t queue,
+						    const struct rte_flow_op_attr *attr,
+						    struct rte_flow_template_table *table,
+						    uint32_t rule_index,
+						    const struct rte_flow_action *actions,
+						    uint8_t action_template_index,
+						    void *user_data,
+						    struct rte_flow_error *error);
+
+/** @internal Enqueue rule update operation. */
+typedef int (*rte_flow_async_actions_update_t)(struct rte_eth_dev *dev,
+					       uint32_t queue_id,
+					       const struct rte_flow_op_attr *op_attr,
+					       struct rte_flow *flow,
+					       const struct rte_flow_action *actions,
+					       uint8_t actions_template_index,
+					       void *user_data,
+					       struct rte_flow_error *error);
+
+/** @internal Enqueue rule destruction operation. */
+typedef int (*rte_flow_async_destroy_t)(struct rte_eth_dev *dev,
+					uint32_t queue_id,
+					const struct rte_flow_op_attr *op_attr,
+					struct rte_flow *flow,
+					void *user_data,
+					struct rte_flow_error *error);
+
+/** @internal Push all internally stored rules to the HW. */
+typedef int (*rte_flow_push_t)(struct rte_eth_dev *dev,
+			       uint32_t queue_id,
+			       struct rte_flow_error *error);
+
+/** @internal Pull the flow rule operations results from the HW. */
+typedef int (*rte_flow_pull_t)(struct rte_eth_dev *dev,
+			       uint32_t queue_id,
+			       struct rte_flow_op_result *res,
+			       uint16_t n_res,
+			       struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action creation operation.
+ */
+typedef struct rte_flow_action_handle *(*rte_flow_async_action_handle_create_t)(
+		struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		const struct rte_flow_indir_action_conf *indir_action_conf,
+		const struct rte_flow_action *action,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action destruction operation. */
+typedef int (*rte_flow_async_action_handle_destroy_t)(struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action update operation. */
+typedef int (*rte_flow_async_action_handle_update_t)(struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_handle *action_handle,
+		const void *update,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action query operation. */
+typedef int (*rte_flow_async_action_handle_query_t)
+		(struct rte_eth_dev *dev,
+		 uint32_t queue_id,
+		 const struct rte_flow_op_attr *op_attr,
+		 const struct rte_flow_action_handle *action_handle,
+		 void *data,
+		 void *user_data,
+		 struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action query and/or update operation. */
+typedef int (*rte_flow_async_action_handle_query_update_t)(struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *attr,
+		struct rte_flow_action_handle *handle,
+		const void *update, void *query,
+		enum rte_flow_query_update_mode mode,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list creation operation.
+ */
+typedef struct rte_flow_action_list_handle *(*rte_flow_async_action_list_handle_create_t)(
+		struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *attr,
+		const struct rte_flow_indir_action_conf *conf,
+		const struct rte_flow_action *actions,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list destruction operation. */
+typedef int (*rte_flow_async_action_list_handle_destroy_t)(
+		struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *op_attr,
+		struct rte_flow_action_list_handle *handle,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/** @internal Enqueue indirect action list query and/or update operation. */
+typedef int (*rte_flow_async_action_list_handle_query_update_t)(
+		struct rte_eth_dev *dev,
+		uint32_t queue_id,
+		const struct rte_flow_op_attr *attr,
+		const struct rte_flow_action_list_handle *handle,
+		const void **update,
+		void **query,
+		enum rte_flow_query_update_mode mode,
+		void *user_data,
+		struct rte_flow_error *error);
+
+/**
+ * @internal
+ *
+ * Fast path async flow functions are held in a flat array, one entry per ethdev.
+ */
+struct rte_flow_fp_ops {
+	rte_flow_async_create_t async_create;
+	rte_flow_async_create_by_index_t async_create_by_index;
+	rte_flow_async_actions_update_t async_actions_update;
+	rte_flow_async_destroy_t async_destroy;
+	rte_flow_push_t push;
+	rte_flow_pull_t pull;
+	rte_flow_async_action_handle_create_t async_action_handle_create;
+	rte_flow_async_action_handle_destroy_t async_action_handle_destroy;
+	rte_flow_async_action_handle_update_t async_action_handle_update;
+	rte_flow_async_action_handle_query_t async_action_handle_query;
+	rte_flow_async_action_handle_query_update_t async_action_handle_query_update;
+	rte_flow_async_action_list_handle_create_t async_action_list_handle_create;
+	rte_flow_async_action_list_handle_destroy_t async_action_list_handle_destroy;
+	rte_flow_async_action_list_handle_query_update_t async_action_list_handle_query_update;
+} __rte_cache_aligned;
+
+/**
+ * @internal
+ * Default implementation of fast path flow API functions.
+ */
+extern struct rte_flow_fp_ops rte_flow_fp_default_ops;
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 5c4917c020..a8758084f6 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -345,4 +345,6 @@ INTERNAL {
 	rte_eth_representor_id_get;
 	rte_eth_switch_domain_alloc;
 	rte_eth_switch_domain_free;
+
+	rte_flow_fp_default_ops;
 };
-- 
2.25.1