From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 7B6EC454E5;
	Wed, 26 Jun 2024 20:16:07 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 6A7804067E;
	Wed, 26 Jun 2024 20:15:33 +0200 (CEST)
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2044.outbound.protection.outlook.com [40.107.101.44])
 by mails.dpdk.org (Postfix) with ESMTP id 404EA40665
 for <dev@dpdk.org>; Wed, 26 Jun 2024 20:15:28 +0200 (CEST)
Received: from MW4PR04CA0178.namprd04.prod.outlook.com (2603:10b6:303:85::33)
 by BL1PR12MB5802.namprd12.prod.outlook.com (2603:10b6:208:392::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Wed, 26 Jun
 2024 18:15:20 +0000
Received: from SJ5PEPF000001CD.namprd05.prod.outlook.com
 (2603:10b6:303:85:cafe::58) by MW4PR04CA0178.outlook.office365.com
 (2603:10b6:303:85::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.38 via Frontend
 Transport; Wed, 26 Jun 2024 18:15:19 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 216.228.117.161)
 smtp.mailfrom=nvidia.com;
 dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=nvidia.com;
Received-SPF: Pass (protection.outlook.com: domain of nvidia.com designates
 216.228.117.161 as permitted sender) receiver=protection.outlook.com;
 client-ip=216.228.117.161; helo=mail.nvidia.com; pr=C
Received: from mail.nvidia.com (216.228.117.161) by
 SJ5PEPF000001CD.mail.protection.outlook.com (10.167.242.42) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.7677.15 via Frontend Transport; Wed, 26 Jun 2024 18:15:19 +0000
Received: from rnnvmail201.nvidia.com (10.129.68.8) by mail.nvidia.com
 (10.129.200.67) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Wed, 26 Jun
 2024 11:15:03 -0700
Received: from nvidia.com (10.126.230.35) by rnnvmail201.nvidia.com
 (10.129.68.8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Wed, 26 Jun
 2024 11:15:01 -0700
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>, Bing Zhao
 <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>, Suanming Mou
 <suanmingm@nvidia.com>, Matan Azrad <matan@nvidia.com>
CC: <dev@dpdk.org>
Subject: [PATCH v3 8/8] net/mlx5: add async flow operation validation
Date: Wed, 26 Jun 2024 20:14:28 +0200
Message-ID: <20240626181428.1678402-9-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240626181428.1678402-1-dsosnowski@nvidia.com>
References: <20240612162426.978117-1-dsosnowski@nvidia.com>
 <20240626181428.1678402-1-dsosnowski@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Add validation of user input to the mlx5 PMD implementations of the
following API functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_update()
- rte_flow_async_destroy()

These validations are enabled if and only if the RTE_LIBRTE_MLX5_DEBUG
macro is defined.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 491 ++++++++++++++++++++++++-
 2 files changed, 488 insertions(+), 4 deletions(-)
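
A minimal sketch of how the new checks surface to an application, assuming
a debug build (RTE_LIBRTE_MLX5_DEBUG defined) and an already configured port,
flow queue and template table; port_id, queue_id, table, pattern and actions
below are placeholders and the printed message is only illustrative:

  struct rte_flow_op_attr op_attr = { .postpone = 0 };
  struct rte_flow_error err = { 0 };
  struct rte_flow *flow;

  /* With validation enabled, a wrong template index, an item or action not
   * matching its template, or a missing spec/conf is rejected here and
   * reported through the error structure.
   */
  flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
                               pattern, 0 /* pattern template index */,
                               actions, 0 /* actions template index */,
                               NULL /* user_data */, &err);
  if (flow == NULL)
          printf("flow rejected: %s\n",
                 err.message != NULL ? err.message : "(no message)");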

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 7c88de381b..cccb3ca834 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -97,6 +97,7 @@ New Features
   * Added match with Tx queue.
   * Added match with external Tx queue.
   * Added match with E-Switch manager.
+  * Added flow item and action validation to the async flow API.
 
 * **Updated TAP driver.**
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index bc084af87e..f389817858 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -332,6 +332,32 @@ mlx5_flow_ct_init(struct rte_eth_dev *dev,
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
 
+static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_template_table *table,
+					 const struct rte_flow_item items[],
+					 const uint8_t pattern_template_index,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+						  const uint32_t queue,
+						  const struct rte_flow_template_table *table,
+						  const uint32_t rule_index,
+						  const struct rte_flow_action actions[],
+						  const uint8_t action_template_index,
+						  struct rte_flow_error *error);
+static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_hw *flow,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+					  const uint32_t queue,
+					  const struct rte_flow_hw *flow,
+					  struct rte_flow_error *error);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -3860,6 +3886,11 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
+						  actions, action_template_index, error))
+			return NULL;
+	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
@@ -3999,10 +4030,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
-	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Flow rule index exceeds table size");
-		return NULL;
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
+							   actions, action_template_index, error))
+			return NULL;
 	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
@@ -4135,6 +4166,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_update_validate(dev, queue, of, actions, action_template_index,
+						  error))
+			return -rte_errno;
+	}
 	aux = mlx5_flow_hw_aux(dev->data->port_id, of);
 	nf = &aux->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
@@ -4243,6 +4279,10 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 							   &fh->table->cfg.attr);
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_destroy_validate(dev, queue, fh, error))
+			return -rte_errno;
+	}
 	fh->operation_type = !resizable ?
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY :
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY;
@@ -16152,6 +16192,449 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
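+/*
+ * Check whether the item's mask has at least one nonzero byte,
+ * i.e. whether a spec must be provided for it in a flow rule.
+ */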
+static bool
+flow_hw_is_item_masked(const struct rte_flow_item *item)
+{
+	const uint8_t *byte;
+	int size;
+	int i;
+
+	if (item->mask == NULL)
+		return false;
+
+	switch ((int)item->type) {
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		size = sizeof(struct rte_flow_item_tag);
+		break;
+	case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		size = sizeof(struct mlx5_rte_flow_item_sq);
+		break;
+	default:
+		size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, item, NULL);
+		/*
+		 * Pattern template items are passed to this function.
+		 * These items were already validated, so an error is not expected here.
+		 * Also, the mask is not NULL at this point, so the returned size is
+		 * always greater than zero.
+		 */
+		MLX5_ASSERT(size > 0);
+	}
+
+	byte = (const uint8_t *)item->mask;
+	for (i = 0; i < size; ++i)
+		if (byte[i])
+			return true;
+
+	return false;
+}
+
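+/*
+ * Check that the flow rule items match the chosen pattern template:
+ * item types must follow the template and a spec must be provided for
+ * every item which has a nonzero mask in the template.
+ */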
+static int
+flow_hw_validate_rule_pattern(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t pattern_template_idx,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_pattern_template *pt;
+	const struct rte_flow_item *pt_item;
+
+	if (pattern_template_idx >= table->nb_item_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Pattern template index out of range");
+
+	pt = table->its[pattern_template_idx];
+	pt_item = pt->items;
+
+	/* If any item was prepended, skip it. */
+	if (pt->implicit_port || pt->implicit_tag)
+		pt_item++;
+
+	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++) {
+		if (pt_item->type != items->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+						  items, "Item type does not match the template");
+
+		/*
+		 * Assumptions:
+		 * - Currently, the mlx5dr layer knows which fields in item masks are supported.
+		 * - This info is not exposed to the PMD directly.
+		 * - Because of that, it is assumed that, since the pattern template is correct,
+		 *   item masks in the pattern template have nonzero values only in
+		 *   supported fields.
+		 *   This is known because a temporary mlx5dr matcher is created during pattern
+		 *   template creation to validate the template.
+		 * - As a result, it is safe to look for nonzero bytes in the mask to determine
+		 *   whether an item spec is needed in a flow rule.
+		 */
+		if (!flow_hw_is_item_masked(pt_item))
+			continue;
+
+		if (items->spec == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+						  items, "Item spec is required");
+
+		switch (items->type) {
+		const struct rte_flow_item_ethdev *ethdev;
+		const struct rte_flow_item_tx_queue *tx_queue;
+		struct mlx5_txq_ctrl *txq;
+
+		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+			ethdev = items->spec;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid port");
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			tx_queue = items->spec;
+			if (mlx5_is_external_txq(dev, tx_queue->tx_queue))
+				continue;
+			txq = mlx5_txq_get(dev, tx_queue->tx_queue);
+			if (!txq)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid Tx queue");
+			mlx5_txq_release(dev, tx_queue->tx_queue);
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
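+/*
+ * Check that the indirect action type encoded in the user-provided handle
+ * matches the indirect action type expected by the actions template.
+ */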
+static bool
+flow_hw_valid_indirect_action_type(const struct rte_flow_action *user_action,
+				   const enum rte_flow_action_type expected_type)
+{
+	uint32_t user_indirect_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);
+	uint32_t expected_indirect_type;
+
+	switch ((int)expected_type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_RSS;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+	case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_COUNT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_AGE:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_AGE;
+		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_CT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+	case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
+		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_QUOTA;
+		break;
+	default:
+		return false;
+	}
+
+	return user_indirect_type == expected_indirect_type;
+}
+
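+/*
+ * Check that the flow rule actions match the chosen actions template:
+ * action types must follow the template and unmasked actions must carry
+ * a valid configuration.
+ */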
+static int
+flow_hw_validate_rule_actions(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t actions_template_idx,
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_actions_template *at;
+	const struct mlx5_hw_actions *hw_acts;
+	const struct mlx5_action_construct_data *act_data;
+	unsigned int idx;
+
+	if (actions_template_idx >= table->nb_action_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Actions template index out of range");
+
+	at = table->ats[actions_template_idx].action_template;
+	hw_acts = &table->ats[actions_template_idx].acts;
+
+	for (idx = 0; actions[idx].type != RTE_FLOW_ACTION_TYPE_END; ++idx) {
+		const struct rte_flow_action *user_action = &actions[idx];
+		const struct rte_flow_action *tmpl_action = &at->orig_actions[idx];
+
+		if (user_action->type != tmpl_action->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action type does not match type specified in "
+						  "actions template");
+	}
+
+	/*
+	 * Only go through unmasked actions and check if configuration is provided.
+	 * Configuration of masked actions is ignored.
+	 */
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		const struct rte_flow_action *user_action;
+
+		user_action = &actions[act_data->action_src];
+
+		/* Skip actions which do not require conf. */
+		switch ((int)user_action->type) {
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			continue;
+		default:
+			break;
+		}
+
+		if (user_action->conf == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action requires configuration");
+
+		switch ((int)user_action->type) {
+		enum rte_flow_action_type expected_type;
+		const struct rte_flow_action_ethdev *ethdev;
+		const struct rte_flow_action_modify_field *mf;
+
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			expected_type = act_data->indirect.expected_type;
+			if (!flow_hw_valid_indirect_action_type(user_action, expected_type))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Indirect action type does not match "
+							  "the type specified in the mask");
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			if (mlx5_flow_validate_target_queue(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (mlx5_validate_action_rss(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			/* TODO: Compare other fields if needed. */
+			mf = user_action->conf;
+			if (mf->operation != act_data->modify_header.action.operation ||
+			    mf->src.field != act_data->modify_header.action.src.field ||
+			    mf->dst.field != act_data->modify_header.action.dst.field ||
+			    mf->width != act_data->modify_header.action.width)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Modify field configuration does not "
+							  "match configuration from actions "
+							  "template");
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ethdev = user_action->conf;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action, "Invalid port");
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
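+/*
+ * Common check for all async flow operations:
+ * for external template tables, the queue index must be lower than
+ * the number of configured flow queues.
+ */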
+static int
+flow_hw_async_op_validate(struct rte_eth_dev *dev,
+			  const uint32_t queue,
+			  const struct rte_flow_template_table *table,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(table != NULL);
+
+	if (table->cfg.external && queue >= priv->hw_attr->nb_queue)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Incorrect queue");
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create() implementation.
+ *
+ * If the RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_create_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_template_table *table,
+			      const struct rte_flow_item items[],
+			      const uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Insertion by pattern is not allowed on this table");
+
+	if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create_by_index() implementation.
+ *
+ * If the RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] rule_index
+ *   Rule index in the table.
+ *   Inserting a rule to already occupied index results in undefined behavior.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+				       const uint32_t queue,
+				       const struct rte_flow_template_table *table,
+				       const uint32_t rule_index,
+				       const struct rte_flow_action actions[],
+				       const uint8_t action_template_index,
+				       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Insertion by index is not allowed on this table");
+
+	if (rule_index >= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Flow rule index exceeds table size");
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_update() implementation.
+ *
+ * If the RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to update the flow.
+ * @param[in] flow
+ *   Flow rule to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_update_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_hw *flow,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, flow->table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_destroy() implementation.
+ *
+ * If the RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to destroy the flow.
+ * @param[in] flow
+ *   Flow rule to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+			       const uint32_t queue,
+			       const struct rte_flow_hw *flow,
+			       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	return 0;
+}
+
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
 	.async_create = flow_hw_async_flow_create,
 	.async_create_by_index = flow_hw_async_flow_create_by_index,
-- 
2.39.2