From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
Cc: dev@dpdk.org
Subject: [PATCH v2 8/8] net/mlx5: add async flow operation validation
Date: Wed, 12 Jun 2024 18:24:26 +0200
Message-ID: <20240612162426.978117-9-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240612162426.978117-1-dsosnowski@nvidia.com>
References: <20240605183419.489323-1-dsosnowski@nvidia.com> <20240612162426.978117-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit

This patch adds validation to the implementations of the following API
functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_update()
- rte_flow_async_destroy()

These validations are enabled if and only if the RTE_LIBRTE_MLX5_DEBUG
macro is defined.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 491 ++++++++++++++++++++++++-
 2 files changed, 488 insertions(+), 4 deletions(-)
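For reference, a minimal sketch (not part of the patch) of how an
application could exercise the new validation path through the public
async flow API. It assumes a debug build (RTE_LIBRTE_MLX5_DEBUG defined)
and that port_id, queue, table, pattern and actions were configured
elsewhere; create_flow_checked() is a hypothetical helper name.

#include <stdint.h>
#include <stdio.h>
#include <rte_flow.h>

static struct rte_flow *
create_flow_checked(uint16_t port_id, uint32_t queue,
		    struct rte_flow_template_table *table,
		    const struct rte_flow_item pattern[],
		    const struct rte_flow_action actions[])
{
	const struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_error error = { 0 };
	struct rte_flow *flow;

	/*
	 * With validation enabled, a rule whose items/actions do not match
	 * template index 0 of the table fails here with rte_flow_error
	 * populated, instead of resulting in undefined behavior.
	 */
	flow = rte_flow_async_create(port_id, queue, &op_attr, table,
				     pattern, 0, actions, 0, NULL, &error);
	if (flow == NULL)
		fprintf(stderr, "async flow create rejected: %s\n",
			error.message != NULL ? error.message : "(no message)");
	return flow;
}

Since the checks are compiled in only when RTE_LIBRTE_MLX5_DEBUG is
defined, release builds keep the fast path unchanged.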
diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 7688ed2764..5f37e2283c 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -86,6 +86,7 @@ New Features
   * Added match with Tx queue.
   * Added match with external Tx queue.
   * Added match with E-Switch manager.
+  * Added flow item and actions validation to async flow API.
 
 Removed Items
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 19d6105be8..1db35c7d16 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -332,6 +332,32 @@ mlx5_flow_ct_init(struct rte_eth_dev *dev,
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
 
+static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_template_table *table,
+					 const struct rte_flow_item items[],
+					 const uint8_t pattern_template_index,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+						   const uint32_t queue,
+						   const struct rte_flow_template_table *table,
+						   const uint32_t rule_index,
+						   const struct rte_flow_action actions[],
+						   const uint8_t action_template_index,
+						   struct rte_flow_error *error);
+static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_hw *flow,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+					   const uint32_t queue,
+					   const struct rte_flow_hw *flow,
+					   struct rte_flow_error *error);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -3856,6 +3882,11 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
+						  actions, action_template_index, error))
+			return NULL;
+	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
@@ -3995,10 +4026,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
-	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Flow rule index exceeds table size");
-		return NULL;
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
+							   actions, action_template_index, error))
+			return NULL;
 	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
@@ -4131,6 +4162,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_update_validate(dev, queue, of, actions, action_template_index,
+						  error))
+			return -rte_errno;
+	}
 	aux = mlx5_flow_hw_aux(dev->data->port_id, of);
 	nf = &aux->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
@@ -4239,6 +4275,10 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 						 &fh->table->cfg.attr);
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_destroy_validate(dev, queue, fh, error))
+			return -rte_errno;
+	}
 	fh->operation_type = !resizable ?
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY :
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY;
@@ -16147,6 +16187,449 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static bool
+flow_hw_is_item_masked(const struct rte_flow_item *item)
+{
+	const uint8_t *byte;
+	int size;
+	int i;
+
+	if (item->mask == NULL)
+		return false;
+
+	switch ((int)item->type) {
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		size = sizeof(struct rte_flow_item_tag);
+		break;
+	case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		size = sizeof(struct mlx5_rte_flow_item_sq);
+		break;
+	default:
+		size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, item, NULL);
+		/*
+		 * Pattern template items are passed to this function.
+		 * These items were already validated, so error is not expected.
+		 * Also, if mask is NULL, then spec size is bigger than 0 always.
+		 */
+		MLX5_ASSERT(size > 0);
+	}
+
+	byte = (const uint8_t *)item->mask;
+	for (i = 0; i < size; ++i)
+		if (byte[i])
+			return true;
+
+	return false;
+}
+
+static int
+flow_hw_validate_rule_pattern(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t pattern_template_idx,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_pattern_template *pt;
+	const struct rte_flow_item *pt_item;
+
+	if (pattern_template_idx >= table->nb_item_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Pattern template index out of range");
+
+	pt = table->its[pattern_template_idx];
+	pt_item = pt->items;
+
+	/* If any item was prepended, skip it. */
+	if (pt->implicit_port || pt->implicit_tag)
+		pt_item++;
+
+	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++) {
+		if (pt_item->type != items->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+						  items, "Item type does not match the template");
+
+		/*
+		 * Assumptions:
+		 * - Currently mlx5dr layer contains info on which fields in masks are supported.
+		 * - This info is not exposed to PMD directly.
+		 * - Because of that, it is assumed that since pattern template is correct,
+		 *   then, items' masks in pattern template have nonzero values only in
+		 *   supported fields.
+		 *   This is known, because a temporary mlx5dr matcher is created during pattern
+		 *   template creation to validate the template.
+		 * - As a result, it is safe to look for nonzero bytes in mask to determine if
+		 *   item spec is needed in a flow rule.
+		 */
+		if (!flow_hw_is_item_masked(pt_item))
+			continue;
+
+		if (items->spec == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+						  items, "Item spec is required");
+
+		switch (items->type) {
+		const struct rte_flow_item_ethdev *ethdev;
+		const struct rte_flow_item_tx_queue *tx_queue;
+		struct mlx5_txq_ctrl *txq;
+
+		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+			ethdev = items->spec;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid port");
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			tx_queue = items->spec;
+			if (mlx5_is_external_txq(dev, tx_queue->tx_queue))
+				continue;
+			txq = mlx5_txq_get(dev, tx_queue->tx_queue);
+			if (!txq)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid Tx queue");
+			mlx5_txq_release(dev, tx_queue->tx_queue);
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static bool
+flow_hw_valid_indirect_action_type(const struct rte_flow_action *user_action,
+				   const enum rte_flow_action_type expected_type)
+{
+	uint32_t user_indirect_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);
+	uint32_t expected_indirect_type;
+
+	switch ((int)expected_type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_RSS;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+	case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_COUNT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_AGE:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_AGE;
+		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_CT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+	case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
+		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_QUOTA;
+		break;
+	default:
+		return false;
+	}
+
+	return user_indirect_type == expected_indirect_type;
+}
+
+static int
+flow_hw_validate_rule_actions(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t actions_template_idx,
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_actions_template *at;
+	const struct mlx5_hw_actions *hw_acts;
+	const struct mlx5_action_construct_data *act_data;
+	unsigned int idx;
+
+	if (actions_template_idx >= table->nb_action_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Actions template index out of range");
+
+	at = table->ats[actions_template_idx].action_template;
+	hw_acts = &table->ats[actions_template_idx].acts;
+
+	for (idx = 0; actions[idx].type != RTE_FLOW_ACTION_TYPE_END; ++idx) {
+		const struct rte_flow_action *user_action = &actions[idx];
+		const struct rte_flow_action *tmpl_action = &at->orig_actions[idx];
+
+		if (user_action->type != tmpl_action->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action type does not match type specified in "
+						  "actions template");
+	}
+
+	/*
+	 * Only go through unmasked actions and check if configuration is provided.
+	 * Configuration of masked actions is ignored.
+	 */
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		const struct rte_flow_action *user_action;
+
+		user_action = &actions[act_data->action_src];
+
+		/* Skip actions which do not require conf.
+		 */
+		switch ((int)user_action->type) {
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			continue;
+		default:
+			break;
+		}
+
+		if (user_action->conf == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action requires configuration");
+
+		switch ((int)user_action->type) {
+		enum rte_flow_action_type expected_type;
+		const struct rte_flow_action_ethdev *ethdev;
+		const struct rte_flow_action_modify_field *mf;
+
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			expected_type = act_data->indirect.expected_type;
+			if (!flow_hw_valid_indirect_action_type(user_action, expected_type))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Indirect action type does not match "
+							  "the type specified in the mask");
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			if (mlx5_flow_validate_target_queue(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (mlx5_validate_action_rss(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			/* TODO: Compare other fields if needed. */
+			mf = user_action->conf;
+			if (mf->operation != act_data->modify_header.action.operation ||
+			    mf->src.field != act_data->modify_header.action.src.field ||
+			    mf->dst.field != act_data->modify_header.action.dst.field ||
+			    mf->width != act_data->modify_header.action.width)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Modify field configuration does not "
+							  "match configuration from actions "
+							  "template");
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ethdev = user_action->conf;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action, "Invalid port");
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+flow_hw_async_op_validate(struct rte_eth_dev *dev,
+			  const uint32_t queue,
+			  const struct rte_flow_template_table *table,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(table != NULL);
+
+	if (table->cfg.external && queue >= priv->hw_attr->nb_queue)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Incorrect queue");
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 if user input is valid.
+ *   Negative errno otherwise, rte_errno and error struct is populated.
+ */
+static int
+flow_hw_async_create_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_template_table *table,
+			      const struct rte_flow_item items[],
+			      const uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only pattern insertion is allowed on this table");
+
+	if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create_by_index() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] rule_index
+ *   Rule index in the table.
+ *   Inserting a rule to already occupied index results in undefined behavior.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 if user input is valid.
+ *   Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+				       const uint32_t queue,
+				       const struct rte_flow_template_table *table,
+				       const uint32_t rule_index,
+				       const struct rte_flow_action actions[],
+				       const uint8_t action_template_index,
+				       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only index insertion is allowed on this table");
+
+	if (rule_index >= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Flow rule index exceeds table size");
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+
+/**
+ * Validate user input for rte_flow_async_update() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] flow
+ *   Flow rule to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 if user input is valid.
+ *   Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_update_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_hw *flow,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, flow->table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_destroy() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] flow
+ *   Flow rule to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 if user input is valid.
+ *   Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+			       const uint32_t queue,
+			       const struct rte_flow_hw *flow,
+			       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	return 0;
+}
+
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
 	.async_create = flow_hw_async_flow_create,
 	.async_create_by_index = flow_hw_async_flow_create_by_index,
-- 
2.39.2