From: Alexander Kozyrev
Subject: [PATCH v3 4/5] net/mlx5: add flow rule insertion by index with pattern
Date: Thu, 24 Oct 2024 18:41:28 +0300
Message-ID: <20241024154351.1743447-5-akozyrev@nvidia.com>
X-Mailer: git-send-email 2.43.5
In-Reply-To: <20241024154351.1743447-1-akozyrev@nvidia.com>
References: <20241024154351.1743447-1-akozyrev@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

Implement the rte_flow_async_create_by_index_with_pattern() function.
Rework the driver implementation to reduce code duplication by providing
a single flow insertion routine that can be called with different
parameters depending on the insertion type.

Signed-off-by: Alexander Kozyrev
---
 doc/guides/rel_notes/release_24_11.rst |   2 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 281 +++++++------------------
 2 files changed, 83 insertions(+), 200 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst
index fa4822d928..07a8435b19 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -247,6 +247,8 @@ New Features
   Added ability for node to advertise and update multiple xstat counters,
   that can be retrieved using ``rte_graph_cluster_stats_get``.
 
+* **Updated NVIDIA MLX5 net driver.**
+  * Added rte_flow_async_create_by_index_with_pattern() support.
 
 Removed Items
 -------------

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c236831e21..412d927efb 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -335,18 +335,13 @@ static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev
 static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
                                          const uint32_t queue,
                                          const struct rte_flow_template_table *table,
+                                         enum rte_flow_table_insertion_type insertion_type,
+                                         const uint32_t rule_index,
                                          const struct rte_flow_item items[],
                                          const uint8_t pattern_template_index,
                                          const struct rte_flow_action actions[],
                                          const uint8_t action_template_index,
                                          struct rte_flow_error *error);
-static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
-                                                  const uint32_t queue,
-                                                  const struct rte_flow_template_table *table,
-                                                  const uint32_t rule_index,
-                                                  const struct rte_flow_action actions[],
-                                                  const uint8_t action_template_index,
-                                                  struct rte_flow_error *error);
 static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
                                          const uint32_t queue,
                                          const struct rte_flow_hw *flow,
@@ -3884,6 +3879,12 @@ flow_hw_get_rule_items(struct rte_eth_dev *dev,
  *   The queue to create the flow.
  * @param[in] attr
  *   Pointer to the flow operation attributes.
+ * @param[in] table
+ *   Pointer to the template table.
+ * @param[in] insertion_type
+ *   Insertion type for flow rules.
+ * @param[in] rule_index
+ *   The item pattern flow follows from the table.
  * @param[in] items
  *   Items with flow spec value.
  * @param[in] pattern_template_index
@@ -3900,17 +3901,19 @@ flow_hw_get_rule_items(struct rte_eth_dev *dev,
  * @return
  *   Flow pointer on success, NULL otherwise and rte_errno is set.
  */
-static struct rte_flow *
-flow_hw_async_flow_create(struct rte_eth_dev *dev,
-                          uint32_t queue,
-                          const struct rte_flow_op_attr *attr,
-                          struct rte_flow_template_table *table,
-                          const struct rte_flow_item items[],
-                          uint8_t pattern_template_index,
-                          const struct rte_flow_action actions[],
-                          uint8_t action_template_index,
-                          void *user_data,
-                          struct rte_flow_error *error)
+static __rte_always_inline struct rte_flow *
+flow_hw_async_flow_create_generic(struct rte_eth_dev *dev,
+                                  uint32_t queue,
+                                  const struct rte_flow_op_attr *attr,
+                                  struct rte_flow_template_table *table,
+                                  enum rte_flow_table_insertion_type insertion_type,
+                                  uint32_t rule_index,
+                                  const struct rte_flow_item items[],
+                                  uint8_t pattern_template_index,
+                                  const struct rte_flow_action actions[],
+                                  uint8_t action_template_index,
+                                  void *user_data,
+                                  struct rte_flow_error *error)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
         struct mlx5dr_rule_attr rule_attr = {
@@ -3928,8 +3931,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
         int ret;
 
         if (mlx5_fp_debug_enabled()) {
-                if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
-                                                  actions, action_template_index, error))
+                if (flow_hw_async_create_validate(dev, queue, table, insertion_type, rule_index,
+                                                  items, pattern_template_index, actions, action_template_index, error))
                         return NULL;
         }
         flow = mlx5_ipool_malloc(table->flow, &flow_idx);
@@ -3967,7 +3970,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
          * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
          * for rule insertion hints.
          */
-        flow->rule_idx = flow->res_idx - 1;
+        flow->rule_idx = (rule_index == UINT32_MAX) ? flow->res_idx - 1 : rule_index;
         rule_attr.rule_idx = flow->rule_idx;
         /*
          * Construct the flow actions based on the input actions.
@@ -4023,33 +4026,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
         return NULL;
 }
 
-/**
- * Enqueue HW steering flow creation by index.
- *
- * The flow will be applied to the HW only if the postpone bit is not set or
- * the extra push function is called.
- * The flow creation status should be checked from dequeue result.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to create the flow.
- * @param[in] attr
- *   Pointer to the flow operation attributes.
- * @param[in] rule_index
- *   The item pattern flow follows from the table.
- * @param[in] actions
- *   Action with flow spec value.
- * @param[in] action_template_index
- *   The action pattern flow follows from the table.
- * @param[in] user_data
- *   Pointer to the user_data.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   Flow pointer on success, NULL otherwise and rte_errno is set.
- */
+static struct rte_flow *
+flow_hw_async_flow_create(struct rte_eth_dev *dev,
+                          uint32_t queue,
+                          const struct rte_flow_op_attr *attr,
+                          struct rte_flow_template_table *table,
+                          const struct rte_flow_item items[],
+                          uint8_t pattern_template_index,
+                          const struct rte_flow_action actions[],
+                          uint8_t action_template_index,
+                          void *user_data,
+                          struct rte_flow_error *error)
+{
+        uint32_t rule_index = UINT32_MAX;
+
+        return flow_hw_async_flow_create_generic(dev, queue, attr, table,
+                        RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN, rule_index,
+                        items, pattern_template_index, actions, action_template_index,
+                        user_data, error);
+}
+
 static struct rte_flow *
 flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
                                    uint32_t queue,
@@ -4062,105 +4058,31 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
                                    uint32_t queue,
                                    const struct rte_flow_op_attr *attr,
                                    struct rte_flow_template_table *table,
                                    uint32_t rule_index,
                                    const struct rte_flow_action actions[],
                                    uint8_t action_template_index,
                                    void *user_data,
                                    struct rte_flow_error *error)
 {
         struct rte_flow_item items[] = {{.type = RTE_FLOW_ITEM_TYPE_END,}};
-        struct mlx5_priv *priv = dev->data->dev_private;
-        struct mlx5dr_rule_attr rule_attr = {
-                .queue_id = queue,
-                .user_data = user_data,
-                .burst = attr->postpone,
-        };
-        struct mlx5dr_rule_action *rule_acts;
-        struct mlx5_flow_hw_action_params ap;
-        struct rte_flow_hw *flow = NULL;
-        uint32_t flow_idx = 0;
-        uint32_t res_idx = 0;
-        int ret;
+        uint8_t pattern_template_index = 0;
 
-        if (mlx5_fp_debug_enabled()) {
-                if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
-                                                           actions, action_template_index, error))
-                        return NULL;
-        }
-        flow = mlx5_ipool_malloc(table->flow, &flow_idx);
-        if (!flow) {
-                rte_errno = ENOMEM;
-                goto error;
-        }
-        rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
-        /*
-         * Set the table here in order to know the destination table
-         * when free the flow afterwards.
-         */
-        flow->table = table;
-        flow->mt_idx = 0;
-        flow->idx = flow_idx;
-        if (table->resource) {
-                mlx5_ipool_malloc(table->resource, &res_idx);
-                if (!res_idx) {
-                        rte_errno = ENOMEM;
-                        goto error;
-                }
-                flow->res_idx = res_idx;
-        } else {
-                flow->res_idx = flow_idx;
-        }
-        flow->flags = 0;
-        /*
-         * Set the flow operation type here in order to know if the flow memory
-         * should be freed or not when get the result from dequeue.
-         */
-        flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_CREATE;
-        flow->user_data = user_data;
-        rule_attr.user_data = flow;
-        /* Set the rule index. */
-        flow->rule_idx = rule_index;
-        rule_attr.rule_idx = flow->rule_idx;
-        /*
-         * Construct the flow actions based on the input actions.
-         * The implicitly appended action is always fixed, like metadata
-         * copy action from FDB to NIC Rx.
-         * No need to copy and contrust a new "actions" list based on the
-         * user's input, in order to save the cost.
-         */
-        if (flow_hw_actions_construct(dev, flow, &ap,
-                                      &table->ats[action_template_index],
-                                      table->its[0]->item_flags, table,
-                                      actions, rule_acts, queue, error)) {
-                rte_errno = EINVAL;
-                goto error;
-        }
-        if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
-                ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
-                                         0, items, action_template_index,
-                                         rule_acts, &rule_attr,
-                                         (struct mlx5dr_rule *)flow->rule);
-        } else {
-                struct rte_flow_hw_aux *aux = mlx5_flow_hw_aux(dev->data->port_id, flow);
-                uint32_t selector;
+        return flow_hw_async_flow_create_generic(dev, queue, attr, table,
+                        RTE_FLOW_TABLE_INSERTION_TYPE_INDEX, rule_index,
+                        items, pattern_template_index, actions, action_template_index,
+                        user_data, error);
+}
 
-                flow->operation_type = MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_CREATE;
-                rte_rwlock_read_lock(&table->matcher_replace_rwlk);
-                selector = table->matcher_selector;
-                ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
-                                         0, items, action_template_index,
-                                         rule_acts, &rule_attr,
-                                         (struct mlx5dr_rule *)flow->rule);
-                rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
-                aux->matcher_selector = selector;
-                flow->flags |= MLX5_FLOW_HW_FLOW_FLAG_MATCHER_SELECTOR;
-        }
-        if (likely(!ret)) {
-                flow_hw_q_inc_flow_ops(priv, queue);
-                return (struct rte_flow *)flow;
-        }
-error:
-        if (table->resource && res_idx)
-                mlx5_ipool_free(table->resource, res_idx);
-        if (flow_idx)
-                mlx5_ipool_free(table->flow, flow_idx);
-        rte_flow_error_set(error, rte_errno,
-                           RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-                           "fail to create rte flow");
-        return NULL;
+static struct rte_flow *
+flow_hw_async_flow_create_by_index_with_pattern(struct rte_eth_dev *dev,
+                                                uint32_t queue,
+                                                const struct rte_flow_op_attr *attr,
+                                                struct rte_flow_template_table *table,
+                                                uint32_t rule_index,
+                                                const struct rte_flow_item items[],
+                                                uint8_t pattern_template_index,
+                                                const struct rte_flow_action actions[],
+                                                uint8_t action_template_index,
+                                                void *user_data,
+                                                struct rte_flow_error *error)
+{
+        return flow_hw_async_flow_create_generic(dev, queue, attr, table,
+                        RTE_FLOW_TABLE_INSERTION_TYPE_INDEX_WITH_PATTERN, rule_index,
+                        items, pattern_template_index, actions, action_template_index,
+                        user_data, error);
 }
 
 /**
@@ -16770,6 +16692,8 @@ flow_hw_async_op_validate(struct rte_eth_dev *dev,
  *   The queue to create the flow.
  * @param[in] table
  *   Pointer to template table.
+ * @param[in] rule_index
+ *   The item pattern flow follows from the table.
  * @param[in] items
  *   Items with flow spec value.
  * @param[in] pattern_template_index
@@ -16789,6 +16713,8 @@ static int
 flow_hw_async_create_validate(struct rte_eth_dev *dev,
                               const uint32_t queue,
                               const struct rte_flow_template_table *table,
+                              enum rte_flow_table_insertion_type insertion_type,
+                              uint32_t rule_index,
                               const struct rte_flow_item items[],
                               const uint8_t pattern_template_index,
                               const struct rte_flow_action actions[],
@@ -16798,63 +16724,18 @@ flow_hw_async_create_validate(struct rte_eth_dev *dev,
         if (flow_hw_async_op_validate(dev, queue, table, error))
                 return -rte_errno;
 
-        if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
-                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-                                          "Only pattern insertion is allowed on this table");
-
-        if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
-                return -rte_errno;
-
-        if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
-                return -rte_errno;
-
-        return 0;
-}
+        if (insertion_type != table->cfg.attr.insertion_type)
+                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                          NULL, "Flow rule insertion type mismatch with table configuration");
 
-/**
- * Validate user input for rte_flow_async_create_by_index() implementation.
- *
- * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
- *
- * @param[in] dev
- *   Pointer to the rte_eth_dev structure.
- * @param[in] queue
- *   The queue to create the flow.
- * @param[in] table
- *   Pointer to template table.
- * @param[in] rule_index
- *   Rule index in the table.
- *   Inserting a rule to already occupied index results in undefined behavior.
- * @param[in] actions
- *   Action with flow spec value.
- * @param[in] action_template_index
- *   The action pattern flow follows from the table.
- * @param[out] error
- *   Pointer to error structure.
- *
- * @return
- *   0 if user input is valid.
- *   Negative errno otherwise, rte_errno and error struct is set.
- */
-static int
-flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
-                                       const uint32_t queue,
-                                       const struct rte_flow_template_table *table,
-                                       const uint32_t rule_index,
-                                       const struct rte_flow_action actions[],
-                                       const uint8_t action_template_index,
-                                       struct rte_flow_error *error)
-{
-        if (flow_hw_async_op_validate(dev, queue, table, error))
-                return -rte_errno;
+        if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+                if (rule_index >= table->cfg.attr.nb_flows)
+                        return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+                                                  NULL, "Flow rule index exceeds table size");
 
         if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
-                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-                                          "Only index insertion is allowed on this table");
-
-        if (rule_index >= table->cfg.attr.nb_flows)
-                return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-                                          "Flow rule index exceeds table size");
+                if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+                        return -rte_errno;
 
         if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
                 return -rte_errno;
@@ -16862,7 +16743,6 @@ flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
         return 0;
 }
 
-
 /**
  * Validate user input for rte_flow_async_update() implementation.
  *
@@ -16935,6 +16815,7 @@ flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
         .async_create = flow_hw_async_flow_create,
         .async_create_by_index = flow_hw_async_flow_create_by_index,
+        .async_create_by_index_with_pattern = flow_hw_async_flow_create_by_index_with_pattern,
         .async_actions_update = flow_hw_async_flow_update,
         .async_destroy = flow_hw_async_flow_destroy,
         .push = flow_hw_push,

-- 
2.43.5
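
For context, a minimal sketch of how an application could reach the new entry
point through the public rte_flow API. It assumes a template table already
created with the INDEX_WITH_PATTERN insertion type and with its pattern and
actions templates attached; the helper name insert_rule_at_index, the use of
template index 0 for both templates, and the immediate-push op attribute are
illustrative choices, not part of this patch.

#include <stdint.h>
#include <rte_flow.h>

/*
 * Illustrative helper: enqueue one rule at a fixed index in a template
 * table configured for INDEX_WITH_PATTERN insertion, while still matching
 * on a pattern. The table and its pattern/actions templates are assumed
 * to be set up beforehand; template index 0 is used for both.
 */
static struct rte_flow *
insert_rule_at_index(uint16_t port_id, uint32_t queue_id,
                     struct rte_flow_template_table *table,
                     uint32_t rule_index,
                     const struct rte_flow_item pattern[],
                     const struct rte_flow_action actions[],
                     void *user_data,
                     struct rte_flow_error *error)
{
        const struct rte_flow_op_attr op_attr = {
                .postpone = 0, /* push the rule to HW right away */
        };

        return rte_flow_async_create_by_index_with_pattern(port_id, queue_id,
                        &op_attr, table, rule_index,
                        pattern, 0, /* pattern template index */
                        actions, 0, /* actions template index */
                        user_data, error);
}

As with the other asynchronous flow operations, the result of the enqueued
creation still has to be retrieved with rte_flow_pull() on the same queue.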