DPDK patches and discussions
* [PATCH 0/9] net/mlx5: flow fast path validation
@ 2024-06-05 18:34 Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 1/9] ethdev: support duplicating only item mask Dariusz Sosnowski
                   ` (11 more replies)
  0 siblings, 12 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
  Cc: dev

This patchset adds validations for flow items and actions
supplied by the user to the fast path async flow API functions
implemented in the mlx5 PMD.

- Patch 1: Adds the ability to generically calculate
  a flow item's mask size using rte_flow_conv().
  This change is used in follow-up patches to check for
  a non-zero mask in the pattern template.
- Patches 2-7: Prepare the mlx5 PMD for adding fast path validations.
  Specifically:

  - Required functionality for validating queue indexes and
    target represented ports is extracted for reuse.
  - Additional info, required for validation, is stored in
    pattern and actions templates.
  - Introduces the mlx5_fp_debug_enabled() function for checking
    whether the mlx5 PMD was compiled in debug mode.

- Patch 8: Introduces validations in:

  - rte_flow_async_create()
  - rte_flow_async_create_by_index()
  - rte_flow_async_actions_update()
  - rte_flow_async_destroy()

  These validations are enabled if and only if the
  RTE_LIBRTE_MLX5_DEBUG macro is defined during compilation.

Depends-on: series-32087 ("net/mlx5: add match with Tx queue item")
Depends-on: series-32080 ("net/mlx5: validate HWS template items")

Dariusz Sosnowski (9):
  ethdev: support duplicating only item mask
  net/mlx5: extract target port validation
  net/mlx5: extract queue index validation
  net/mlx5: store pattern template items
  net/mlx5: store original actions in template
  net/mlx5: store expected type on indirect action
  net/mlx5: store modify field action
  common/mlx5: add debug mode indicator
  net/mlx5: add async flow operation validation

 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/common/mlx5/mlx5_common.h      |  13 +
 drivers/net/mlx5/mlx5_flow.c           |  59 ++-
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 620 +++++++++++++++++++++++--
 lib/ethdev/rte_flow.c                  |  15 +-
 lib/ethdev/rte_flow.h                  |  12 +
 7 files changed, 682 insertions(+), 50 deletions(-)

--
2.39.2



* [PATCH 1/9] ethdev: support duplicating only item mask
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 2/9] net/mlx5: extract target port validation Dariusz Sosnowski
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko; +Cc: dev

Extend rte_flow_conv() to support working only on an item's mask.
This allows drivers to query only the mask's size when working on
pattern templates and to duplicate items carrying only a mask in a
generic way.
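
A minimal sketch of the intended two-call usage (the Ethernet item here is
only an example; any item with a non-NULL mask works the same way):

	struct rte_flow_item_eth eth_mask = { .hdr.ether_type = RTE_BE16(0xffff) };
	struct rte_flow_item item = {
		.type = RTE_FLOW_ITEM_TYPE_ETH,
		.mask = &eth_mask,
	};
	struct rte_flow_error error;
	int size;

	/* First call: a NULL destination only queries the size of the mask. */
	size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, &item, &error);
	if (size < 0)
		return size;
	/* Second call: duplicate the mask into a freshly allocated buffer. */
	void *buf = malloc(size);
	if (buf == NULL)
		return -ENOMEM;
	size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, buf, size, &item, &error);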

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 lib/ethdev/rte_flow.c | 15 +++++++++++++--
 lib/ethdev/rte_flow.h | 12 ++++++++++++
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7ab1100ea0..ca2f85c3fa 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -60,9 +60,10 @@ rte_flow_conv_copy(void *buf, const void *data, const size_t size,
 	bool rte_type = type >= 0;
 
 	size_t sz = rte_type ? desc[type].size : sizeof(void *);
-	if (buf == NULL || data == NULL)
+	if (data == NULL)
 		return 0;
-	rte_memcpy(buf, data, (size > sz ? sz : size));
+	if (buf != NULL)
+		rte_memcpy(buf, data, (size > sz ? sz : size));
 	if (rte_type && desc[type].desc_fn)
 		sz += desc[type].desc_fn(size > 0 ? buf : NULL, data);
 	return sz;
@@ -1090,6 +1091,7 @@ rte_flow_conv(enum rte_flow_conv_op op,
 
 	switch (op) {
 		const struct rte_flow_attr *attr;
+		const struct rte_flow_item *item;
 
 	case RTE_FLOW_CONV_OP_NONE:
 		ret = 0;
@@ -1104,6 +1106,15 @@ rte_flow_conv(enum rte_flow_conv_op op,
 	case RTE_FLOW_CONV_OP_ITEM:
 		ret = rte_flow_conv_pattern(dst, size, src, 1, error);
 		break;
+	case RTE_FLOW_CONV_OP_ITEM_MASK:
+		item = src;
+		if (item->mask == NULL) {
+			ret = rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+						 item, "Mask not provided");
+			break;
+		}
+		ret = rte_flow_conv_item_spec(dst, size, src, RTE_FLOW_CONV_ITEM_MASK);
+		break;
 	case RTE_FLOW_CONV_OP_ACTION:
 		ret = rte_flow_conv_actions(dst, size, src, 1, error);
 		break;
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 6e8ab1d4c7..35f180e735 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4409,6 +4409,18 @@ enum rte_flow_conv_op {
 	 */
 	RTE_FLOW_CONV_OP_ITEM,
 
+	/**
+	 * Convert a single item mask.
+	 *
+	 * Duplicates only @p mask.
+	 *
+	 * - @p src type:
+	 *   @code const struct rte_flow_item * @endcode
+	 * - @p dst type:
+	 *   @code struct rte_flow_item * @endcode
+	 */
+	RTE_FLOW_CONV_OP_ITEM_MASK,
+
 	/**
 	 * Convert a single action.
 	 *
-- 
2.39.2



* [PATCH 2/9] net/mlx5: extract target port validation
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 1/9] ethdev: support duplicating only item mask Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 3/9] net/mlx5: extract queue index validation Dariusz Sosnowski
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract the code responsible for validating whether the port specified
in the configuration of the RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT action
is correct. This allows the same logic to be reused for both
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT actions and
RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT items.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 63 +++++++++++++++++++++------------
 1 file changed, 40 insertions(+), 23 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index e8562660dd..24b86535b1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5716,6 +5716,42 @@ flow_hw_validate_action_port_representor(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
+static int
+flow_hw_validate_target_port_id(struct rte_eth_dev *dev,
+				uint16_t target_port_id)
+{
+	struct mlx5_priv *port_priv;
+	struct mlx5_priv *dev_priv;
+
+	if (target_port_id == MLX5_REPRESENTED_PORT_ESW_MGR)
+		return 0;
+
+	port_priv = mlx5_port_to_eswitch_info(target_port_id, false);
+	if (!port_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for port %u",
+			dev->data->port_id, target_port_id);
+		return -rte_errno;
+	}
+
+	dev_priv = mlx5_dev_to_eswitch_info(dev);
+	if (!dev_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	if (port_priv->domain_id != dev_priv->domain_id) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
 static int
 flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					 const struct rte_flow_action *action,
@@ -5732,32 +5768,13 @@ flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					  "cannot use represented_port actions"
 					  " without an E-Switch");
 	if (mask_conf && mask_conf->port_id) {
-		struct mlx5_priv *port_priv;
-		struct mlx5_priv *dev_priv;
-
 		if (!action_conf)
 			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
 						  action, "port index was not provided");
-		port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false);
-		if (!port_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for port");
-		dev_priv = mlx5_dev_to_eswitch_info(dev);
-		if (!dev_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for transfer proxy");
-		if (port_priv->domain_id != dev_priv->domain_id)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "cannot forward to port from"
-						  " a different E-Switch");
+
+		if (flow_hw_validate_target_port_id(dev, action_conf->port_id))
+			return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ACTION,
+						  action, "port index is invalid");
 	}
 	return 0;
 }
-- 
2.39.2



* [PATCH 3/9] net/mlx5: extract queue index validation
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 1/9] ethdev: support duplicating only item mask Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 2/9] net/mlx5: extract target port validation Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 4/9] net/mlx5: store pattern template items Dariusz Sosnowski
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract the validation of a queue index out of the validation of the
RTE_FLOW_ACTION_TYPE_QUEUE action to allow reuse in the template API
flow rule creation implementation.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 59 +++++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_flow.h |  3 ++
 2 files changed, 44 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 18c8890e4a..9643fa424f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2020,6 +2020,46 @@ mlx5_flow_validate_action_drop(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/*
+ * Check if a queue specified in the queue action is valid.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] action
+ *   Pointer to the queue action.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				const struct rte_flow_action *action,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_action_queue *queue = action->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
+	if (!priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  NULL, "No Rx queues configured");
+	if (queue->index >= priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue index out of range");
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue is not configured");
+	return 0;
+}
+
 /*
  * Validate the queue action.
  *
@@ -2044,7 +2084,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_queue *queue = action->conf;
 
 	if (!queue)
@@ -2060,23 +2099,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
 					  "queue action not supported for egress.");
-	if (mlx5_is_external_rxq(dev, queue->index))
-		return 0;
-	if (!priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  NULL, "No Rx queues configured");
-	if (queue->index >= priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue index out of range");
-	if (mlx5_rxq_get(dev, queue->index) == NULL)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue is not configured");
-	return 0;
+	return mlx5_flow_validate_target_queue(dev, action, error);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5f51f7257e..7be95a7d18 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2948,6 +2948,9 @@ int mlx5_flow_validate_action_mark(struct rte_eth_dev *dev,
 				   uint64_t action_flags,
 				   const struct rte_flow_attr *attr,
 				   struct rte_flow_error *error);
+int mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				    const struct rte_flow_action *action,
+				    struct rte_flow_error *error);
 int mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				    uint64_t action_flags,
 				    struct rte_eth_dev *dev,
-- 
2.39.2



* [PATCH 4/9] net/mlx5: store pattern template items
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (2 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 3/9] net/mlx5: extract queue index validation Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 5/9] net/mlx5: store original actions in template Dariusz Sosnowski
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Store the items which define the pattern template when it is created.
This allows items provided by the user during flow rule creation to be
validated against the pattern template.
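
To preview how the stored items are used by the validation added at the
end of this series, a simplified sketch of the type check performed there
(names follow the mlx5 code; items is the user-provided pattern):

	const struct rte_flow_item *pt_item = it->items;

	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++)
		if (pt_item->type != items->type)
			return rte_flow_error_set(error, EINVAL,
						  RTE_FLOW_ERROR_TYPE_ITEM, items,
						  "Item type does not match the template");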

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7be95a7d18..5997ffb5e0 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1489,6 +1489,8 @@ struct rte_flow_pattern_template {
 	/* Manages all GENEVE TLV options used by this pattern template. */
 	struct mlx5_geneve_tlv_options_mng geneve_opt_mng;
 	uint8_t flex_item; /* flex item index. */
+	/* Items on which this pattern template is based. */
+	struct rte_flow_item *items;
 };
 
 /* Flow action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 24b86535b1..132f6ca264 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8401,6 +8401,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		.mask = &tag_m,
 		.last = NULL
 	};
+	int it_items_size;
 	unsigned int i = 0;
 	int rc;
 
@@ -8444,6 +8445,31 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 	it->attr = *attr;
 	it->item_flags = item_flags;
 	it->orig_item_nb = orig_item_nb;
+	it_items_size = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, tmpl_items, error);
+	if (it_items_size <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to determine buffer size for pattern");
+		goto error;
+	}
+	it_items_size = RTE_ALIGN(it_items_size, 16);
+	it->items = mlx5_malloc(MLX5_MEM_ZERO, it_items_size, 0, rte_dev_numa_node(dev->device));
+	if (it->items == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Cannot allocate memory for pattern");
+		goto error;
+	}
+	rc = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, it->items, it_items_size, tmpl_items, error);
+	if (rc <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to store pattern");
+		goto error;
+	}
 	it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching);
 	if (!it->mt) {
 		rte_flow_error_set(error, rte_errno,
@@ -8458,6 +8484,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		else if (attr->egress)
 			it->implicit_tag = true;
 		mlx5_free(copied_items);
+		copied_items = NULL;
 	}
 	/* Either inner or outer, can't both. */
 	if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
@@ -8518,6 +8545,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 			mlx5_geneve_tlv_options_unregister(priv, &it->geneve_opt_mng);
 		if (it->mt)
 			claim_zero(mlx5dr_match_template_destroy(it->mt));
+		mlx5_free(it->items);
 		mlx5_free(it);
 	}
 	if (copied_items)
@@ -8560,6 +8588,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	mlx5_geneve_tlv_options_unregister(priv, &template->geneve_opt_mng);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
+	mlx5_free(template->items);
 	mlx5_free(template);
 	return 0;
 }
-- 
2.39.2



* [PATCH 5/9] net/mlx5: store original actions in template
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (3 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 4/9] net/mlx5: store pattern template items Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 6/9] net/mlx5: store expected type on indirect action Dariusz Sosnowski
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When an actions template is created in the mlx5 PMD, some actions are
replaced with others or additional actions are added to implement the
required template semantics on NVIDIA NICs. For example:

- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID actions are replaced with
  a modify field action.
- A modify field action is added to preserve metadata
  between FDB and NIC domains.
- A modify field action is added when the quota action is used,
  to amend the ASO syndrome.

The types of actions, and their order, in flow rules based on a given
actions template must match the original template, not the modified one.

This patch preserves the original actions of an actions template
to allow easier validation of action ordering and types on the fast path.
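
With the original actions preserved, the validation added at the end of
this series can compare user-provided actions against the template with
a plain loop; a simplified sketch (at points to the actions template):

	unsigned int i;

	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i)
		if (actions[i].type != at->orig_actions[i].type)
			return rte_flow_error_set(error, EINVAL,
						  RTE_FLOW_ERROR_TYPE_ACTION, &actions[i],
						  "Action type does not match type "
						  "specified in actions template");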

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5997ffb5e0..a40e8385b3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1499,6 +1499,7 @@ struct rte_flow_actions_template {
 	/* Template attributes. */
 	struct rte_flow_actions_template_attr attr;
 	struct rte_flow_action *actions; /* Cached flow actions. */
+	struct rte_flow_action *orig_actions; /* Original flow actions. */
 	struct rte_flow_action *masks; /* Cached action masks.*/
 	struct mlx5dr_action_template *tmpl; /* mlx5dr action template. */
 	uint64_t action_flags; /* Bit-map of all valid action in template. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 132f6ca264..52ad419c95 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7415,6 +7415,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int len, act_len, mask_len;
+	int orig_act_len;
 	unsigned int act_num;
 	unsigned int i;
 	struct rte_flow_actions_template *at = NULL;
@@ -7529,6 +7530,10 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	len += RTE_ALIGN(mask_len, 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->dr_off), 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->src_off), 16);
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, actions, error);
+	if (orig_act_len <= 0)
+		return NULL;
+	len += RTE_ALIGN(orig_act_len, 16);
 	at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at),
 			 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!at) {
@@ -7556,6 +7561,12 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	at->src_off = RTE_PTR_ADD(at->dr_off,
 				  RTE_ALIGN(act_num * sizeof(*at->dr_off), 16));
 	memcpy(at->src_off, src_off, act_num * sizeof(at->src_off[0]));
+	at->orig_actions = RTE_PTR_ADD(at->src_off,
+				       RTE_ALIGN(act_num * sizeof(*at->src_off), 16));
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->orig_actions, orig_act_len,
+				     actions, error);
+	if (orig_act_len <= 0)
+		goto error;
 	at->actions_num = act_num;
 	for (i = 0; i < at->actions_num; ++i)
 		at->dr_off[i] = UINT16_MAX;
-- 
2.39.2



* [PATCH 6/9] net/mlx5: store expected type on indirect action
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (4 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 5/9] net/mlx5: store original actions in template Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 7/9] net/mlx5: store modify field action Dariusz Sosnowski
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When a template table is created, a list of unmasked actions is recorded
for future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_INDIRECT actions in
this list with information about the expected indirect action type.
This will be used in follow-up commits which add flow rule operation
validation. Specifically, it will be used to verify that an indirect
action provided by the user references an action of the correct type.
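
mlx5 indirect action handles encode the action type in the handle value
itself, so the follow-up validation can decode the type from the
user-provided handle and compare it against the stored expectation;
a simplified sketch (expected here is a stand-in for the
MLX5_INDIRECT_ACTION_TYPE_* value derived from
act_data->indirect.expected_type by a switch in the final patch):

	/* The handle value itself carries the indirect action type. */
	uint32_t user_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);

	if (user_type != expected)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, user_action,
					  "Indirect action type does not match "
					  "the type specified in the mask");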

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  4 ++++
 drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a40e8385b3..cd6fed4723 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1409,6 +1409,10 @@ struct mlx5_action_construct_data {
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	indirect_list_callback_t indirect_list_cb;
 	union {
+		struct {
+			/* Expected type of indirection action. */
+			enum rte_flow_action_type expected_type;
+		} indirect;
 		struct {
 			/* encap data len. */
 			uint16_t len;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 52ad419c95..5e9d43ab22 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -768,6 +768,24 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+static __rte_always_inline int
+__flow_hw_act_data_indirect_append(struct mlx5_priv *priv,
+				   struct mlx5_hw_actions *acts,
+				   enum rte_flow_action_type type,
+				   enum rte_flow_action_type mask_type,
+				   uint16_t action_src,
+				   uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	act_data->indirect.expected_type = mask_type;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
+}
+
 static __rte_always_inline int
 flow_hw_act_data_indirect_list_append(struct mlx5_priv *priv,
 				      struct mlx5_hw_actions *acts,
@@ -2215,9 +2233,9 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 				if (flow_hw_shared_action_translate
 				(dev, actions, acts, src_pos, dr_pos))
 					goto err;
-			} else if (__flow_hw_act_data_general_append
+			} else if (__flow_hw_act_data_indirect_append
 					(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT,
-					 src_pos, dr_pos)){
+					 masks->type, src_pos, dr_pos)){
 				goto err;
 			}
 			break;
-- 
2.39.2



* [PATCH 7/9] net/mlx5: store modify field action
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (5 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 6/9] net/mlx5: store expected type on indirect action Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 8/9] common/mlx5: add debug mode indicator Dariusz Sosnowski
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When a template table is created, a list of unmasked actions is recorded
for future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_MODIFY_FIELD actions
in this list with a copy of the action from the template.
This will be used in follow-up commits which add flow rule operation
validation. Specifically, it will be used to validate that the
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action passed by the user is correctly
configured.
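
The stored copy lets the fast path validation (added in the final patch)
compare the user-provided configuration field by field; a simplified
sketch (the tmpl local is introduced here only for brevity):

	const struct rte_flow_action_modify_field *mf = user_action->conf;
	const struct rte_flow_action_modify_field *tmpl =
		&act_data->modify_header.action;

	if (mf->operation != tmpl->operation ||
	    mf->src.field != tmpl->src.field ||
	    mf->dst.field != tmpl->dst.field ||
	    mf->width != tmpl->width)
		return rte_flow_error_set(error, EINVAL,
					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, user_action,
					  "Modify field configuration does not match "
					  "configuration from actions template");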

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index cd6fed4723..5ea3f22a2e 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1439,6 +1439,8 @@ struct mlx5_action_construct_data {
 			 * PRM actions.
 			 */
 			uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS];
+			/* Copy of action passed to the action template. */
+			struct rte_flow_action_modify_field action;
 		} modify_header;
 		struct {
 			bool symmetric_hash_function; /* Symmetric RSS hash */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5e9d43ab22..a04e8647cb 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -883,6 +883,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     enum rte_flow_action_type type,
 				     uint16_t action_src,
 				     uint16_t action_dst,
+				     const struct rte_flow_action_modify_field *mf,
 				     uint16_t mhdr_cmds_off,
 				     uint16_t mhdr_cmds_end,
 				     bool shared,
@@ -895,6 +896,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
 	if (!act_data)
 		return -1;
+	act_data->modify_header.action = *mf;
 	act_data->modify_header.mhdr_cmds_off = mhdr_cmds_off;
 	act_data->modify_header.mhdr_cmds_end = mhdr_cmds_end;
 	act_data->modify_header.shared = shared;
@@ -1372,7 +1374,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 	if (shared)
 		return 0;
 	ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-						   src_pos, mhdr->pos,
+						   src_pos, mhdr->pos, conf,
 						   cmds_start, cmds_end, shared,
 						   field, dcopy, mask);
 	if (ret)
-- 
2.39.2



* [PATCH 8/9] common/mlx5: add debug mode indicator
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (6 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 7/9] net/mlx5: store modify field action Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-05 18:34 ` [PATCH 9/9] net/mlx5: add async flow operation validation Dariusz Sosnowski
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Add the mlx5_fp_debug_enabled() function, which:

- returns true if RTE_LIBRTE_MLX5_DEBUG is defined,
- returns false otherwise.

This allows conditional execution of code meant to run only when mlx5
debug mode is enabled, without adding conditional compilation guards in
the source code. When mlx5 debug mode is disabled, any code guarded by
mlx5_fp_debug_enabled() is removed by the optimizing compiler through
dead code elimination, as shown in the sketch below.
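
The intended usage pattern, taken up by the next patch, looks as follows
(validate_args() is a placeholder for an arbitrary fast path check, not a
real mlx5 function):

	if (mlx5_fp_debug_enabled()) {
		/* This whole block is compiled out when
		 * RTE_LIBRTE_MLX5_DEBUG is not defined.
		 */
		if (validate_args(dev, queue, table, error))
			return NULL;
	}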

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 14c70edbef..1abd1e8239 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -109,6 +109,19 @@ pmd_drv_log_basename(const char *s)
 
 #endif /* RTE_LIBRTE_MLX5_DEBUG */
 
+/**
+ * Returns true if debug mode is enabled for fast path operations.
+ */
+static inline bool
+mlx5_fp_debug_enabled(void)
+{
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	return true;
+#else
+	return false;
+#endif
+}
+
 /* Allocate a buffer on the stack and fill it with a printf format string. */
 #define MKSTR(name, ...) \
 	int mkstr_size_##name = snprintf(NULL, 0, "" __VA_ARGS__); \
-- 
2.39.2



* [PATCH 9/9] net/mlx5: add async flow operation validation
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (7 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 8/9] common/mlx5: add debug mode indicator Dariusz Sosnowski
@ 2024-06-05 18:34 ` Dariusz Sosnowski
  2024-06-06  8:50 ` [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-05 18:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch adds validation to implementations of the following API
functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()

These validations are enabled if and only if the RTE_LIBRTE_MLX5_DEBUG
macro is defined, as sketched below.
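
From the application's point of view nothing changes in a release build;
in a debug build a malformed request now fails up front instead of being
passed to the hardware. A minimal sketch, assuming the port, queue,
template table, pattern and actions are already set up:

	struct rte_flow_error error = { 0 };
	struct rte_flow *flow;

	flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
				     items, 0, actions, 0, NULL, &error);
	if (flow == NULL)
		/* E.g. "Item type does not match the template". */
		printf("async flow create rejected: %s\n", error.message);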

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 491 ++++++++++++++++++++++++-
 2 files changed, 488 insertions(+), 4 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 3a3257fcd5..39b074c63c 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -85,6 +85,7 @@ New Features
 
   * Added match with Tx queue.
   * Added match with external Tx queue.
+  * Added flow item and actions validation to async flow API.
 
 
 Removed Items
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a04e8647cb..7588bd90c4 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -302,6 +302,32 @@ flow_hw_construct_quota(struct mlx5_priv *priv,
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
 
+static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_template_table *table,
+					 const struct rte_flow_item items[],
+					 const uint8_t pattern_template_index,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+						  const uint32_t queue,
+						  const struct rte_flow_template_table *table,
+						  const uint32_t rule_index,
+						  const struct rte_flow_action actions[],
+						  const uint8_t action_template_index,
+						  struct rte_flow_error *error);
+static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_hw *flow,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+					  const uint32_t queue,
+					  const struct rte_flow_hw *flow,
+					  struct rte_flow_error *error);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -3543,6 +3569,11 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
+						  actions, action_template_index, error))
+			return NULL;
+	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
@@ -3681,10 +3712,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
-	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Flow rule index exceeds table size");
-		return NULL;
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
+							   actions, action_template_index, error))
+			return NULL;
 	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
@@ -3816,6 +3847,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_update_validate(dev, queue, of, actions, action_template_index,
+						  error))
+			return -rte_errno;
+	}
 	aux = mlx5_flow_hw_aux(dev->data->port_id, of);
 	nf = &aux->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
@@ -3923,6 +3959,10 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 							   &fh->table->cfg.attr);
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_destroy_validate(dev, queue, fh, error))
+			return -rte_errno;
+	}
 	fh->operation_type = !resizable ?
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY :
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY;
@@ -14929,6 +14969,449 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static bool
+flow_hw_is_item_masked(const struct rte_flow_item *item)
+{
+	const uint8_t *byte;
+	int size;
+	int i;
+
+	if (item->mask == NULL)
+		return false;
+
+	switch ((int)item->type) {
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		size = sizeof(struct rte_flow_item_tag);
+		break;
+	case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		size = sizeof(struct mlx5_rte_flow_item_sq);
+		break;
+	default:
+		size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, item, NULL);
+		/*
+		 * Pattern template items are passed to this function.
+		 * These items were already validated, so error is not expected.
+		 * Also, since the mask is not NULL here, its size is always bigger than 0.
+		 */
+		MLX5_ASSERT(size > 0);
+	}
+
+	byte = (const uint8_t *)item->mask;
+	for (i = 0; i < size; ++i)
+		if (byte[i])
+			return true;
+
+	return false;
+}
+
+static int
+flow_hw_validate_rule_pattern(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t pattern_template_idx,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_pattern_template *pt;
+	const struct rte_flow_item *pt_item;
+
+	if (pattern_template_idx >= table->nb_item_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Pattern template index out of range");
+
+	pt = table->its[pattern_template_idx];
+	pt_item = pt->items;
+
+	/* If any item was prepended, skip it. */
+	if (pt->implicit_port || pt->implicit_tag)
+		pt_item++;
+
+	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++) {
+		if (pt_item->type != items->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+						  items, "Item type does not match the template");
+
+		/*
+		 * Assumptions:
+		 * - Currently mlx5dr layer contains info on which fields in masks are supported.
+		 * - This info is not exposed to PMD directly.
+		 * - Because of that, it is assumed that since pattern template is correct,
+		 *   then, items' masks in pattern template have nonzero values only in
+		 *   supported fields.
+		 *   This is known, because a temporary mlx5dr matcher is created during pattern
+		 *   template creation to validate the template.
+		 * - As a result, it is safe to look for nonzero bytes in mask to determine if
+		 *   item spec is needed in a flow rule.
+		 */
+		if (!flow_hw_is_item_masked(pt_item))
+			continue;
+
+		if (items->spec == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+						  items, "Item spec is required");
+
+		switch (items->type) {
+		const struct rte_flow_item_ethdev *ethdev;
+		const struct rte_flow_item_tx_queue *tx_queue;
+		struct mlx5_txq_ctrl *txq;
+
+		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+			ethdev = items->spec;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid port");
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			tx_queue = items->spec;
+			if (mlx5_is_external_txq(dev, tx_queue->tx_queue))
+				continue;
+			txq = mlx5_txq_get(dev, tx_queue->tx_queue);
+			if (!txq)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid Tx queue");
+			mlx5_txq_release(dev, tx_queue->tx_queue);
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static bool
+flow_hw_valid_indirect_action_type(const struct rte_flow_action *user_action,
+				   const enum rte_flow_action_type expected_type)
+{
+	uint32_t user_indirect_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);
+	uint32_t expected_indirect_type;
+
+	switch ((int)expected_type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_RSS;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+	case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_COUNT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_AGE:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_AGE;
+		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_CT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+	case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
+		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_QUOTA;
+		break;
+	default:
+		return false;
+	}
+
+	return user_indirect_type == expected_indirect_type;
+}
+
+static int
+flow_hw_validate_rule_actions(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t actions_template_idx,
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_actions_template *at;
+	const struct mlx5_hw_actions *hw_acts;
+	const struct mlx5_action_construct_data *act_data;
+	unsigned int idx;
+
+	if (actions_template_idx >= table->nb_action_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Actions template index out of range");
+
+	at = table->ats[actions_template_idx].action_template;
+	hw_acts = &table->ats[actions_template_idx].acts;
+
+	for (idx = 0; actions[idx].type != RTE_FLOW_ACTION_TYPE_END; ++idx) {
+		const struct rte_flow_action *user_action = &actions[idx];
+		const struct rte_flow_action *tmpl_action = &at->orig_actions[idx];
+
+		if (user_action->type != tmpl_action->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action type does not match type specified in "
+						  "actions template");
+	}
+
+	/*
+	 * Only go through unmasked actions and check if configuration is provided.
+	 * Configuration of masked actions is ignored.
+	 */
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		const struct rte_flow_action *user_action;
+
+		user_action = &actions[act_data->action_src];
+
+		/* Skip actions which do not require conf. */
+		switch ((int)user_action->type) {
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			continue;
+		default:
+			break;
+		}
+
+		if (user_action->conf == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action requires configuration");
+
+		switch ((int)user_action->type) {
+		enum rte_flow_action_type expected_type;
+		const struct rte_flow_action_ethdev *ethdev;
+		const struct rte_flow_action_modify_field *mf;
+
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			expected_type = act_data->indirect.expected_type;
+			if (!flow_hw_valid_indirect_action_type(user_action, expected_type))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Indirect action type does not match "
+							  "the type specified in the mask");
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			if (mlx5_flow_validate_target_queue(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (mlx5_validate_action_rss(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			/* TODO: Compare other fields if needed. */
+			mf = user_action->conf;
+			if (mf->operation != act_data->modify_header.action.operation ||
+			    mf->src.field != act_data->modify_header.action.src.field ||
+			    mf->dst.field != act_data->modify_header.action.dst.field ||
+			    mf->width != act_data->modify_header.action.width)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Modify field configuration does not "
+							  "match configuration from actions "
+							  "template");
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ethdev = user_action->conf;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action, "Invalid port");
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+flow_hw_async_op_validate(struct rte_eth_dev *dev,
+			  const uint32_t queue,
+			  const struct rte_flow_template_table *table,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(table != NULL);
+
+	if (table->cfg.external && queue >= priv->hw_attr->nb_queue)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Incorrect queue");
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno is set and the error struct is populated.
+ */
+static int
+flow_hw_async_create_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_template_table *table,
+			      const struct rte_flow_item items[],
+			      const uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only pattern insertion is allowed on this table");
+
+	if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create_by_index() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] rule_index
+ *   Rule index in the table.
+ *   Inserting a rule to already occupied index results in undefined behavior.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+				       const uint32_t queue,
+				       const struct rte_flow_template_table *table,
+				       const uint32_t rule_index,
+				       const struct rte_flow_action actions[],
+				       const uint8_t action_template_index,
+				       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only index insertion is allowed on this table");
+
+	if (rule_index >= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Flow rule index exceeds table size");
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+
+/**
+ * Validate user input for rte_flow_async_actions_update() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to update the flow.
+ * @param[in] flow
+ *   Flow rule to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_update_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_hw *flow,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, flow->table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_destroy() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to destroy the flow.
+ * @param[in] flow
+ *   Flow rule to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise; rte_errno and the error struct are set.
+ */
+static int
+flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+			       const uint32_t queue,
+			       const struct rte_flow_hw *flow,
+			       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	return 0;
+}
+
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
 	.async_create = flow_hw_async_flow_create,
 	.async_create_by_index = flow_hw_async_flow_create_by_index,
-- 
2.39.2



* RE: [PATCH 0/9] net/mlx5: flow fast path validation
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (8 preceding siblings ...)
  2024-06-05 18:34 ` [PATCH 9/9] net/mlx5: add async flow operation validation Dariusz Sosnowski
@ 2024-06-06  8:50 ` Dariusz Sosnowski
  2024-06-12 16:18 ` [PATCH v2] ethdev: support duplicating only item mask Dariusz Sosnowski
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  11 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-06  8:50 UTC (permalink / raw)
  To: Slava Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Ferruh Yigit, Andrew Rybchenko
  Cc: dev

> - Patches 2-7: Prepare the mlx5 PMD for adding fast path validations.

Correction: Patches 2-8

> - Patch 8: Introduces validations in:

Correction: Patch 9

Best regards,
Dariusz Sosnowski


* [PATCH v2] ethdev: support duplicating only item mask
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (9 preceding siblings ...)
  2024-06-06  8:50 ` [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
@ 2024-06-12 16:18 ` Dariusz Sosnowski
  2024-06-12 22:28   ` Ferruh Yigit
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  11 siblings, 1 reply; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:18 UTC (permalink / raw)
  To: Ori Kam, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko; +Cc: dev

Extend rte_flow_conv() to support working only on an item's mask.
This allows drivers to query only the mask's size when working on
pattern templates and to duplicate items carrying only a mask in a
generic way.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
v2:
- Split series 32095 into separate ethdev and mlx5 series.
- Rebased on latest commit in next-net-main.
---
 lib/ethdev/rte_flow.c | 15 +++++++++++++--
 lib/ethdev/rte_flow.h | 12 ++++++++++++
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index 7ab1100ea0..ca2f85c3fa 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -60,9 +60,10 @@ rte_flow_conv_copy(void *buf, const void *data, const size_t size,
 	bool rte_type = type >= 0;

 	size_t sz = rte_type ? desc[type].size : sizeof(void *);
-	if (buf == NULL || data == NULL)
+	if (data == NULL)
 		return 0;
-	rte_memcpy(buf, data, (size > sz ? sz : size));
+	if (buf != NULL)
+		rte_memcpy(buf, data, (size > sz ? sz : size));
 	if (rte_type && desc[type].desc_fn)
 		sz += desc[type].desc_fn(size > 0 ? buf : NULL, data);
 	return sz;
@@ -1090,6 +1091,7 @@ rte_flow_conv(enum rte_flow_conv_op op,

 	switch (op) {
 		const struct rte_flow_attr *attr;
+		const struct rte_flow_item *item;

 	case RTE_FLOW_CONV_OP_NONE:
 		ret = 0;
@@ -1104,6 +1106,15 @@ rte_flow_conv(enum rte_flow_conv_op op,
 	case RTE_FLOW_CONV_OP_ITEM:
 		ret = rte_flow_conv_pattern(dst, size, src, 1, error);
 		break;
+	case RTE_FLOW_CONV_OP_ITEM_MASK:
+		item = src;
+		if (item->mask == NULL) {
+			ret = rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+						 item, "Mask not provided");
+			break;
+		}
+		ret = rte_flow_conv_item_spec(dst, size, src, RTE_FLOW_CONV_ITEM_MASK);
+		break;
 	case RTE_FLOW_CONV_OP_ACTION:
 		ret = rte_flow_conv_actions(dst, size, src, 1, error);
 		break;
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index fb84992b61..f864578f80 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -4426,6 +4426,18 @@ enum rte_flow_conv_op {
 	 */
 	RTE_FLOW_CONV_OP_ITEM,

+	/**
+	 * Convert a single item mask.
+	 *
+	 * Duplicates only @p mask.
+	 *
+	 * - @p src type:
+	 *   @code const struct rte_flow_item * @endcode
+	 * - @p dst type:
+	 *   @code struct rte_flow_item * @endcode
+	 */
+	RTE_FLOW_CONV_OP_ITEM_MASK,
+
 	/**
 	 * Convert a single action.
 	 *
--
2.39.2



* [PATCH v2 0/8] net/mlx5: flow fast path validation
  2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
                   ` (10 preceding siblings ...)
  2024-06-12 16:18 ` [PATCH v2] ethdev: support duplicating only item mask Dariusz Sosnowski
@ 2024-06-12 16:24 ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
                     ` (8 more replies)
  11 siblings, 9 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patchset adds validations for flow items and actions
supplied by the user to the fast path async flow API functions
implemented in the mlx5 PMD.

- Patches 1-7: Prepare the mlx5 PMD for adding fast path validations.
  Specifically:

  - Required functionality for validating queue indexes and
    target represented ports is extracted for reuse.
  - Additional info, required for validation, is stored in
    pattern and actions templates.
  - Introduces the mlx5_fp_debug_enabled() function for checking
    whether the mlx5 PMD was compiled in debug mode.

- Patch 8: Introduces validations in:

  - rte_flow_async_create()
  - rte_flow_async_create_by_index()
  - rte_flow_async_actions_update()
  - rte_flow_async_destroy()

  These validations are enabled if and only if the
  RTE_LIBRTE_MLX5_DEBUG macro is defined during compilation.

Depends-on: series-32150 ("ethdev: support duplicating only item mask")

v2:
- Split series 32095 into separate ethdev and mlx5 series.
- Rebased on latest commit in next-net-mlx-main.
- Removed Depends-on tags for already merged mlx5 patch series.

Dariusz Sosnowski (8):
  net/mlx5: extract target port validation
  net/mlx5: extract queue index validation
  net/mlx5: store pattern template items
  net/mlx5: store original actions in template
  net/mlx5: store expected type on indirect action
  net/mlx5: store modify field action
  common/mlx5: add debug mode indicator
  net/mlx5: add async flow operation validation

 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/common/mlx5/mlx5_common.h      |  13 +
 drivers/net/mlx5/mlx5_flow.c           |  59 ++-
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 620 +++++++++++++++++++++++--
 5 files changed, 657 insertions(+), 48 deletions(-)

--
2.39.2



* [PATCH v2 1/8] net/mlx5: extract target port validation
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract code responsible for validation if port specified in
configuration of RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT action
is correct. Allow for reuse of this logic for both
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT actions and
RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT items.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 63 +++++++++++++++++++++------------
 1 file changed, 40 insertions(+), 23 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index eb89dcf454..aee0201cbc 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6046,6 +6046,42 @@ flow_hw_validate_action_port_representor(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
+static int
+flow_hw_validate_target_port_id(struct rte_eth_dev *dev,
+				uint16_t target_port_id)
+{
+	struct mlx5_priv *port_priv;
+	struct mlx5_priv *dev_priv;
+
+	if (target_port_id == MLX5_REPRESENTED_PORT_ESW_MGR)
+		return 0;
+
+	port_priv = mlx5_port_to_eswitch_info(target_port_id, false);
+	if (!port_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for port %u",
+			dev->data->port_id, target_port_id);
+		return -rte_errno;
+	}
+
+	dev_priv = mlx5_dev_to_eswitch_info(dev);
+	if (!dev_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	if (port_priv->domain_id != dev_priv->domain_id) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
 static int
 flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					 const struct rte_flow_action *action,
@@ -6062,32 +6098,13 @@ flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					  "cannot use represented_port actions"
 					  " without an E-Switch");
 	if (mask_conf && mask_conf->port_id) {
-		struct mlx5_priv *port_priv;
-		struct mlx5_priv *dev_priv;
-
 		if (!action_conf)
 			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
 						  action, "port index was not provided");
-		port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false);
-		if (!port_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for port");
-		dev_priv = mlx5_dev_to_eswitch_info(dev);
-		if (!dev_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for transfer proxy");
-		if (port_priv->domain_id != dev_priv->domain_id)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "cannot forward to port from"
-						  " a different E-Switch");
+
+		if (flow_hw_validate_target_port_id(dev, action_conf->port_id))
+			return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ACTION,
+						  action, "port index is invalid");
 	}
 	return 0;
 }
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 2/8] net/mlx5: extract queue index validation
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
                     ` (6 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract validation of queue index out of validation of
RTE_FLOW_ACTION_TYPE_QUEUE action to allow reuse in template API
flow rule creation implementation.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 59 +++++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_flow.h |  3 ++
 2 files changed, 44 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 7bcbbc74b5..5374c95954 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2020,6 +2020,46 @@ mlx5_flow_validate_action_drop(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/*
+ * Check if a queue specified in the queue action is valid.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] action
+ *   Pointer to the queue action.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				const struct rte_flow_action *action,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_action_queue *queue = action->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
+	if (!priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  NULL, "No Rx queues configured");
+	if (queue->index >= priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue index out of range");
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue is not configured");
+	return 0;
+}
+
 /*
  * Validate the queue action.
  *
@@ -2044,7 +2084,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_queue *queue = action->conf;
 
 	if (!queue)
@@ -2060,23 +2099,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
 					  "queue action not supported for egress.");
-	if (mlx5_is_external_rxq(dev, queue->index))
-		return 0;
-	if (!priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  NULL, "No Rx queues configured");
-	if (queue->index >= priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue index out of range");
-	if (mlx5_rxq_get(dev, queue->index) == NULL)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue is not configured");
-	return 0;
+	return mlx5_flow_validate_target_queue(dev, action, error);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 92e2ecedb3..da7d06d033 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -3018,6 +3018,9 @@ int mlx5_flow_validate_action_mark(struct rte_eth_dev *dev,
 				   uint64_t action_flags,
 				   const struct rte_flow_attr *attr,
 				   struct rte_flow_error *error);
+int mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				    const struct rte_flow_action *action,
+				    struct rte_flow_error *error);
 int mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				    uint64_t action_flags,
 				    struct rte_eth_dev *dev,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 3/8] net/mlx5: store pattern template items
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
                     ` (5 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Store items which define the pattern template on creation.
This allows validation of items, provided by the user during flow rule
creation, against pattern template.
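
The items are duplicated with the usual two-call rte_flow_conv()
pattern - size query first, then copy. A simplified sketch of what
the patch does (error handling omitted):

  int size;

  /* First call computes the buffer size required for the items. */
  size = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, error);
  it->items = mlx5_malloc(MLX5_MEM_ZERO, RTE_ALIGN(size, 16), 0,
                          rte_dev_numa_node(dev->device));
  /* Second call duplicates the items into the allocated buffer. */
  rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, it->items, size, items, error);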

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index da7d06d033..5c2cc2b7c1 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1522,6 +1522,8 @@ struct rte_flow_pattern_template {
 	/* Manages all GENEVE TLV options used by this pattern template. */
 	struct mlx5_geneve_tlv_options_mng geneve_opt_mng;
 	uint8_t flex_item; /* flex item index. */
+	/* Items on which this pattern template is based. */
+	struct rte_flow_item *items;
 };
 
 /* Flow action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index aee0201cbc..0ddae54ed2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8767,6 +8767,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		.mask = &tag_m,
 		.last = NULL
 	};
+	int it_items_size;
 	unsigned int i = 0;
 	int rc;
 
@@ -8810,6 +8811,31 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 	it->attr = *attr;
 	it->item_flags = item_flags;
 	it->orig_item_nb = orig_item_nb;
+	it_items_size = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, tmpl_items, error);
+	if (it_items_size <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to determine buffer size for pattern");
+		goto error;
+	}
+	it_items_size = RTE_ALIGN(it_items_size, 16);
+	it->items = mlx5_malloc(MLX5_MEM_ZERO, it_items_size, 0, rte_dev_numa_node(dev->device));
+	if (it->items == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Cannot allocate memory for pattern");
+		goto error;
+	}
+	rc = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, it->items, it_items_size, tmpl_items, error);
+	if (rc <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to store pattern");
+		goto error;
+	}
 	it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching);
 	if (!it->mt) {
 		rte_flow_error_set(error, rte_errno,
@@ -8824,6 +8850,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		else if (attr->egress)
 			it->implicit_tag = true;
 		mlx5_free(copied_items);
+		copied_items = NULL;
 	}
 	/* Either inner or outer, can't both. */
 	if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
@@ -8884,6 +8911,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 			mlx5_geneve_tlv_options_unregister(priv, &it->geneve_opt_mng);
 		if (it->mt)
 			claim_zero(mlx5dr_match_template_destroy(it->mt));
+		mlx5_free(it->items);
 		mlx5_free(it);
 	}
 	if (copied_items)
@@ -8926,6 +8954,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	mlx5_geneve_tlv_options_unregister(priv, &template->geneve_opt_mng);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
+	mlx5_free(template->items);
 	mlx5_free(template);
 	return 0;
 }
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 4/8] net/mlx5: store original actions in template
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (2 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
                     ` (4 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When actions template is created in mlx5 PMD, some actions will be
replaced with another or some actions will be added to implement
required template semantics on NVIDIA NICs. For example:

- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID action is replaced with
  a modify field action.
- Modify field action is added for preserving metadata
  between FDB and NIC domains.
- Modify field action is added when quota action is used to amend
  ASO syndrome.

Types of actions and their order in flow rules based on a given
actions template must match the original template, not the modified one.

This patch preserves the original actions of an actions template
to allow easier validation of ordering and types on the fast path.
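
For illustration, consider a hypothetical template where the user
specifies:

  const struct rte_flow_action user_actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID },
          { .type = RTE_FLOW_ACTION_TYPE_QUEUE },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };

Internally, OF_SET_VLAN_VID is translated to a modify field action,
but flow rules created from this template must still pass
OF_SET_VLAN_VID at position 0. Hence the untranslated list is kept
for comparison.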

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5c2cc2b7c1..8e99e76e5d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1532,6 +1532,7 @@ struct rte_flow_actions_template {
 	/* Template attributes. */
 	struct rte_flow_actions_template_attr attr;
 	struct rte_flow_action *actions; /* Cached flow actions. */
+	struct rte_flow_action *orig_actions; /* Original flow actions. */
 	struct rte_flow_action *masks; /* Cached action masks.*/
 	struct mlx5dr_action_template *tmpl; /* mlx5dr action template. */
 	uint64_t action_flags; /* Bit-map of all valid action in template. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 0ddae54ed2..ddc9cf24d6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7745,6 +7745,7 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int len, act_len, mask_len;
+	int orig_act_len;
 	unsigned int act_num;
 	unsigned int i;
 	struct rte_flow_actions_template *at = NULL;
@@ -7862,6 +7863,10 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	len += RTE_ALIGN(mask_len, 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->dr_off), 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->src_off), 16);
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, actions, error);
+	if (orig_act_len <= 0)
+		return NULL;
+	len += RTE_ALIGN(orig_act_len, 16);
 	at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at),
 			 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!at) {
@@ -7889,6 +7894,12 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	at->src_off = RTE_PTR_ADD(at->dr_off,
 				  RTE_ALIGN(act_num * sizeof(*at->dr_off), 16));
 	memcpy(at->src_off, src_off, act_num * sizeof(at->src_off[0]));
+	at->orig_actions = RTE_PTR_ADD(at->src_off,
+				       RTE_ALIGN(act_num * sizeof(*at->src_off), 16));
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->orig_actions, orig_act_len,
+				     actions, error);
+	if (orig_act_len <= 0)
+		goto error;
 	at->actions_num = act_num;
 	for (i = 0; i < at->actions_num; ++i)
 		at->dr_off[i] = UINT16_MAX;
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 5/8] net/mlx5: store expected type on indirect action
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (3 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 6/8] net/mlx5: store modify field action Dariusz Sosnowski
                     ` (3 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When template table is created, list of unmasked actions is recorded for
future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_INDIRECT actions in
this list with information about the expected indirect action type.
This will be used in follow up commits which add flow rule operation
validation. Specifically, this will be used to verify if indirect action
provided by the user references an action of a correct type.
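
For example (a sketch; the action choice is illustrative), in an
actions template created with:

  const struct rte_flow_action actions[] = {
          { .type = RTE_FLOW_ACTION_TYPE_INDIRECT },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };
  const struct rte_flow_action masks[] = {
          { .type = RTE_FLOW_ACTION_TYPE_RSS },
          { .type = RTE_FLOW_ACTION_TYPE_END },
  };

the mask pins the indirect action at position 0 to RSS, so a flow
rule passing e.g. an indirect COUNT handle at that position can be
rejected.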

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  4 ++++
 drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8e99e76e5d..33847e2272 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1442,6 +1442,10 @@ struct mlx5_action_construct_data {
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	indirect_list_callback_t indirect_list_cb;
 	union {
+		struct {
+			/* Expected type of the indirect action. */
+			enum rte_flow_action_type expected_type;
+		} indirect;
 		struct {
 			/* encap data len. */
 			uint16_t len;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ddc9cf24d6..21a885517a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -997,6 +997,24 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+static __rte_always_inline int
+__flow_hw_act_data_indirect_append(struct mlx5_priv *priv,
+				   struct mlx5_hw_actions *acts,
+				   enum rte_flow_action_type type,
+				   enum rte_flow_action_type mask_type,
+				   uint16_t action_src,
+				   uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	act_data->indirect.expected_type = mask_type;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
+}
+
 static __rte_always_inline int
 flow_hw_act_data_indirect_list_append(struct mlx5_priv *priv,
 				      struct mlx5_hw_actions *acts,
@@ -2482,9 +2500,9 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 				if (flow_hw_shared_action_translate
 				(dev, actions, acts, src_pos, dr_pos))
 					goto err;
-			} else if (__flow_hw_act_data_general_append
+			} else if (__flow_hw_act_data_indirect_append
 					(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT,
-					 src_pos, dr_pos)){
+					 masks->type, src_pos, dr_pos)){
 				goto err;
 			}
 			break;
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 6/8] net/mlx5: store modify field action
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (4 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
                     ` (2 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When template table is created, list of unmasked actions is recorded for
future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_MODIFY_FIELD actions
in this list with a copy of the action from the template.
This will be used in follow up commits which add flow rule operation
validation. Specifically, to validate that the
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action passed by the user is correctly
configured.
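
At flow rule creation, the stored copy enables a cheap field-by-field
check along these lines (a sketch of the comparison added later in
this series; tmpl_mf stands for the copy kept in the template):

  const struct rte_flow_action_modify_field *mf = user_action->conf;

  if (mf->operation != tmpl_mf->operation ||
      mf->src.field != tmpl_mf->src.field ||
      mf->dst.field != tmpl_mf->dst.field ||
      mf->width != tmpl_mf->width)
          return -EINVAL; /* diverged from the actions template */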

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 33847e2272..6974d4e075 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1472,6 +1472,8 @@ struct mlx5_action_construct_data {
 			 * PRM actions.
 			 */
 			uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS];
+			/* Copy of action passed to the action template. */
+			struct rte_flow_action_modify_field action;
 		} modify_header;
 		struct {
 			bool symmetric_hash_function; /* Symmetric RSS hash */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 21a885517a..19d6105be8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1112,6 +1112,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     enum rte_flow_action_type type,
 				     uint16_t action_src,
 				     uint16_t action_dst,
+				     const struct rte_flow_action_modify_field *mf,
 				     uint16_t mhdr_cmds_off,
 				     uint16_t mhdr_cmds_end,
 				     bool shared,
@@ -1124,6 +1125,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
 	if (!act_data)
 		return -1;
+	act_data->modify_header.action = *mf;
 	act_data->modify_header.mhdr_cmds_off = mhdr_cmds_off;
 	act_data->modify_header.mhdr_cmds_end = mhdr_cmds_end;
 	act_data->modify_header.shared = shared;
@@ -1601,7 +1603,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 	if (shared)
 		return 0;
 	ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-						   src_pos, mhdr->pos,
+						   src_pos, mhdr->pos, conf,
 						   cmds_start, cmds_end, shared,
 						   field, dcopy, mask);
 	if (ret)
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 7/8] common/mlx5: add debug mode indicator
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (5 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 6/8] net/mlx5: store modify field action Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-12 16:24   ` [PATCH v2 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Add mlx5_fp_debug_enabled() function which:

- returns true if RTE_LIBRTE_MLX5_DEBUG is defined,
- returns false otherwise.

This allows for conditional execution of code meant to be executed only
when mlx5 debug mode is enabled, without adding conditional compilation
guards inside source code.
When mlx5 debug mode is disabled, any code guarded by
mlx5_fp_debug_enabled() will be removed by the optimizing
compiler through dead code elimination.
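
Fast path code can then be written as follows (a sketch;
validate_args() is a placeholder for any validation helper):

  if (mlx5_fp_debug_enabled()) {
          if (validate_args(dev, queue, error))
                  return -rte_errno;
  }

With the macro undefined, the branch folds to a constant false and
both the check and the helper call disappear from the object code.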

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 14c70edbef..1abd1e8239 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -109,6 +109,19 @@ pmd_drv_log_basename(const char *s)
 
 #endif /* RTE_LIBRTE_MLX5_DEBUG */
 
+/**
+ * Returns true if debug mode is enabled for fast path operations.
+ */
+static inline bool
+mlx5_fp_debug_enabled(void)
+{
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	return true;
+#else
+	return false;
+#endif
+}
+
 /* Allocate a buffer on the stack and fill it with a printf format string. */
 #define MKSTR(name, ...) \
 	int mkstr_size_##name = snprintf(NULL, 0, "" __VA_ARGS__); \
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v2 8/8] net/mlx5: add async flow operation validation
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (6 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
@ 2024-06-12 16:24   ` Dariusz Sosnowski
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  8 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-12 16:24 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch adds validation to implementations of the following API
functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()

These validations are enabled if and only if RTE_LIBRTE_MLX5_DEBUG
macro is defined.
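
From the application's point of view, a malformed rule now fails at
enqueue time with a descriptive error (a sketch; the arguments are
illustrative):

  struct rte_flow_op_attr op_attr = { 0 };
  struct rte_flow_error error;
  struct rte_flow *flow;

  flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
                               items, 0, actions, 0, NULL, &error);
  if (flow == NULL)
          /* e.g. "Item type does not match the template" */
          printf("validation failed: %s\n", error.message);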

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 491 ++++++++++++++++++++++++-
 2 files changed, 488 insertions(+), 4 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 7688ed2764..5f37e2283c 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -86,6 +86,7 @@ New Features
   * Added match with Tx queue.
   * Added match with external Tx queue.
   * Added match with E-Switch manager.
+  * Added flow item and action validation to async flow API.
 
 
 Removed Items
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 19d6105be8..1db35c7d16 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -332,6 +332,32 @@ mlx5_flow_ct_init(struct rte_eth_dev *dev,
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
 
+static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_template_table *table,
+					 const struct rte_flow_item items[],
+					 const uint8_t pattern_template_index,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+						  const uint32_t queue,
+						  const struct rte_flow_template_table *table,
+						  const uint32_t rule_index,
+						  const struct rte_flow_action actions[],
+						  const uint8_t action_template_index,
+						  struct rte_flow_error *error);
+static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_hw *flow,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+					  const uint32_t queue,
+					  const struct rte_flow_hw *flow,
+					  struct rte_flow_error *error);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -3856,6 +3882,11 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
+						  actions, action_template_index, error))
+			return NULL;
+	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
@@ -3995,10 +4026,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
-	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Flow rule index exceeds table size");
-		return NULL;
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
+							   actions, action_template_index, error))
+			return NULL;
 	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
@@ -4131,6 +4162,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_update_validate(dev, queue, of, actions, action_template_index,
+						  error))
+			return -rte_errno;
+	}
 	aux = mlx5_flow_hw_aux(dev->data->port_id, of);
 	nf = &aux->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
@@ -4239,6 +4275,10 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 							   &fh->table->cfg.attr);
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_destroy_validate(dev, queue, fh, error))
+			return -rte_errno;
+	}
 	fh->operation_type = !resizable ?
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY :
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY;
@@ -16147,6 +16187,449 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static bool
+flow_hw_is_item_masked(const struct rte_flow_item *item)
+{
+	const uint8_t *byte;
+	int size;
+	int i;
+
+	if (item->mask == NULL)
+		return false;
+
+	switch ((int)item->type) {
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		size = sizeof(struct rte_flow_item_tag);
+		break;
+	case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		size = sizeof(struct mlx5_rte_flow_item_sq);
+		break;
+	default:
+		size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, item, NULL);
+		/*
+		 * Pattern template items are passed to this function.
+		 * These items were already validated, so an error is not expected.
+		 * Also, since the mask is not NULL here, its size is always above 0.
+		 */
+		MLX5_ASSERT(size > 0);
+	}
+
+	byte = (const uint8_t *)item->mask;
+	for (i = 0; i < size; ++i)
+		if (byte[i])
+			return true;
+
+	return false;
+}
+
+static int
+flow_hw_validate_rule_pattern(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t pattern_template_idx,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_pattern_template *pt;
+	const struct rte_flow_item *pt_item;
+
+	if (pattern_template_idx >= table->nb_item_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Pattern template index out of range");
+
+	pt = table->its[pattern_template_idx];
+	pt_item = pt->items;
+
+	/* If any item was prepended, skip it. */
+	if (pt->implicit_port || pt->implicit_tag)
+		pt_item++;
+
+	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++) {
+		if (pt_item->type != items->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+						  items, "Item type does not match the template");
+
+		/*
+		 * Assumptions:
+		 * - Currently mlx5dr layer contains info on which fields in masks are supported.
+		 * - This info is not exposed to PMD directly.
+		 * - Because of that, it is assumed that since pattern template is correct,
+		 *   then, items' masks in pattern template have nonzero values only in
+		 *   supported fields.
+		 *   This is known, because a temporary mlx5dr matcher is created during pattern
+		 *   template creation to validate the template.
+		 * - As a result, it is safe to look for nonzero bytes in mask to determine if
+		 *   item spec is needed in a flow rule.
+		 */
+		if (!flow_hw_is_item_masked(pt_item))
+			continue;
+
+		if (items->spec == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+						  items, "Item spec is required");
+
+		switch (items->type) {
+		const struct rte_flow_item_ethdev *ethdev;
+		const struct rte_flow_item_tx_queue *tx_queue;
+		struct mlx5_txq_ctrl *txq;
+
+		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+			ethdev = items->spec;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid port");
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			tx_queue = items->spec;
+			if (mlx5_is_external_txq(dev, tx_queue->tx_queue))
+				continue;
+			txq = mlx5_txq_get(dev, tx_queue->tx_queue);
+			if (!txq)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid Tx queue");
+			mlx5_txq_release(dev, tx_queue->tx_queue);
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static bool
+flow_hw_valid_indirect_action_type(const struct rte_flow_action *user_action,
+				   const enum rte_flow_action_type expected_type)
+{
+	uint32_t user_indirect_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);
+	uint32_t expected_indirect_type;
+
+	switch ((int)expected_type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_RSS;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+	case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_COUNT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_AGE:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_AGE;
+		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_CT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+	case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
+		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_QUOTA;
+		break;
+	default:
+		return false;
+	}
+
+	return user_indirect_type == expected_indirect_type;
+}
+
+static int
+flow_hw_validate_rule_actions(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t actions_template_idx,
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_actions_template *at;
+	const struct mlx5_hw_actions *hw_acts;
+	const struct mlx5_action_construct_data *act_data;
+	unsigned int idx;
+
+	if (actions_template_idx >= table->nb_action_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Actions template index out of range");
+
+	at = table->ats[actions_template_idx].action_template;
+	hw_acts = &table->ats[actions_template_idx].acts;
+
+	for (idx = 0; actions[idx].type != RTE_FLOW_ACTION_TYPE_END; ++idx) {
+		const struct rte_flow_action *user_action = &actions[idx];
+		const struct rte_flow_action *tmpl_action = &at->orig_actions[idx];
+
+		if (user_action->type != tmpl_action->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action type does not match type specified in "
+						  "actions template");
+	}
+
+	/*
+	 * Only go through unmasked actions and check if configuration is provided.
+	 * Configuration of masked actions is ignored.
+	 */
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		const struct rte_flow_action *user_action;
+
+		user_action = &actions[act_data->action_src];
+
+		/* Skip actions which do not require conf. */
+		switch ((int)user_action->type) {
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			continue;
+		default:
+			break;
+		}
+
+		if (user_action->conf == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action requires configuration");
+
+		switch ((int)user_action->type) {
+		enum rte_flow_action_type expected_type;
+		const struct rte_flow_action_ethdev *ethdev;
+		const struct rte_flow_action_modify_field *mf;
+
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			expected_type = act_data->indirect.expected_type;
+			if (!flow_hw_valid_indirect_action_type(user_action, expected_type))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Indirect action type does not match "
+							  "the type specified in the mask");
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			if (mlx5_flow_validate_target_queue(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (mlx5_validate_action_rss(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			/* TODO: Compare other fields if needed. */
+			mf = user_action->conf;
+			if (mf->operation != act_data->modify_header.action.operation ||
+			    mf->src.field != act_data->modify_header.action.src.field ||
+			    mf->dst.field != act_data->modify_header.action.dst.field ||
+			    mf->width != act_data->modify_header.action.width)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Modify field configuration does not "
+							  "match configuration from actions "
+							  "template");
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ethdev = user_action->conf;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action, "Invalid port");
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+flow_hw_async_op_validate(struct rte_eth_dev *dev,
+			  const uint32_t queue,
+			  const struct rte_flow_template_table *table,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(table != NULL);
+
+	if (table->cfg.external && queue >= priv->hw_attr->nb_queue)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Incorrect queue");
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct is populated.
+ */
+static int
+flow_hw_async_create_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_template_table *table,
+			      const struct rte_flow_item items[],
+			      const uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only pattern insertion is allowed on this table");
+
+	if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create_by_index() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] rule_index
+ *   Rule index in the table.
+ *   Inserting a rule into an already occupied index results in undefined behavior.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+				       const uint32_t queue,
+				       const struct rte_flow_template_table *table,
+				       const uint32_t rule_index,
+				       const struct rte_flow_action actions[],
+				       const uint8_t action_template_index,
+				       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only index insertion is allowed on this table");
+
+	if (rule_index >= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Flow rule index exceeds table size");
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+
+/**
+ * Validate user input for rte_flow_async_actions_update() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to update the flow.
+ * @param[in] flow
+ *   Flow rule to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_update_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_hw *flow,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, flow->table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_destroy() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] flow
+ *   Flow rule to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct is set.
+ */
+static int
+flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+			       const uint32_t queue,
+			       const struct rte_flow_hw *flow,
+			       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	return 0;
+}
+
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
 	.async_create = flow_hw_async_flow_create,
 	.async_create_by_index = flow_hw_async_flow_create_by_index,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2] ethdev: support duplicating only item mask
  2024-06-12 16:18 ` [PATCH v2] ethdev: support duplicating only item mask Dariusz Sosnowski
@ 2024-06-12 22:28   ` Ferruh Yigit
  0 siblings, 0 replies; 31+ messages in thread
From: Ferruh Yigit @ 2024-06-12 22:28 UTC (permalink / raw)
  To: Dariusz Sosnowski, Ori Kam, Thomas Monjalon, Andrew Rybchenko; +Cc: dev

On 6/12/2024 5:18 PM, Dariusz Sosnowski wrote:
> Extend rte_flow_conv() to support working only on item's mask.
> This allows drivers to get only the mask's size when working on pattern
> templates and duplicate items having only the mask in a generic way.
> 
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> 

Applied to dpdk-next-net/main, thanks.


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 0/8] net/mlx5: flow fast path validation
  2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                     ` (7 preceding siblings ...)
  2024-06-12 16:24   ` [PATCH v2 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
@ 2024-06-26 18:14   ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
                       ` (7 more replies)
  8 siblings, 8 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patchset adds validations for flow items and actions
supplied by the user to fast path async flow API functions
implemented in mlx5 PMD.

- Patches 1-7: Prepares mlx5 PMD for adding fast path validations.
  Specifically:

  - Required functionality for validating queue indexes and
    target represented ports are extracted for reuse.
  - Additional info, required for validation, is stored in
    pattern and actions templates.
  - Introduces mlx5_fp_debug_enabled() function for checking
    if mlx5 PMD was compiled in debug mode.

- Patch 8: Introduce validations to:

  - rte_flow_async_create()
  - rte_flow_async_create_by_index()
  - rte_flow_async_actions_update()
  - rte_flow_async_destroy()

  These validations are enabled if and only if
  RTE_LIBRTE_MLX5_DEBUG macro is defined during compilation.

v3:
- Rebased on 24.07-rc1.
- Removed Depends-on tag for the already merged ethdev patch.

v2:
- Split series 32095 into separate ethdev and mlx5 series.
- Rebased on latest commit in next-net-mlx-main.
- Removed Depends-on tags for already merged mlx5 patch series.

Dariusz Sosnowski (8):
  net/mlx5: extract target port validation
  net/mlx5: extract queue index validation
  net/mlx5: store pattern template items
  net/mlx5: store original actions in template
  net/mlx5: store expected type on indirect action
  net/mlx5: store modify field action
  common/mlx5: add debug mode indicator
  net/mlx5: add async flow operation validation

 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/common/mlx5/mlx5_common.h      |  13 +
 drivers/net/mlx5/mlx5_flow.c           |  59 ++-
 drivers/net/mlx5/mlx5_flow.h           |  12 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 620 +++++++++++++++++++++++--
 5 files changed, 657 insertions(+), 48 deletions(-)

--
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 1/8] net/mlx5: extract target port validation
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
                       ` (6 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract code responsible for validation if port specified in
configuration of RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT action
is correct. Allow for reuse of this logic for both
RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT actions and
RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT items.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 63 +++++++++++++++++++++------------
 1 file changed, 40 insertions(+), 23 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b8d71b145d..c0d763198d 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6051,6 +6051,42 @@ flow_hw_validate_action_port_representor(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
+static int
+flow_hw_validate_target_port_id(struct rte_eth_dev *dev,
+				uint16_t target_port_id)
+{
+	struct mlx5_priv *port_priv;
+	struct mlx5_priv *dev_priv;
+
+	if (target_port_id == MLX5_REPRESENTED_PORT_ESW_MGR)
+		return 0;
+
+	port_priv = mlx5_port_to_eswitch_info(target_port_id, false);
+	if (!port_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for port %u",
+			dev->data->port_id, target_port_id);
+		return -rte_errno;
+	}
+
+	dev_priv = mlx5_dev_to_eswitch_info(dev);
+	if (!dev_priv) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	if (port_priv->domain_id != dev_priv->domain_id) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Port %u Failed to obtain E-Switch info for transfer proxy",
+			dev->data->port_id);
+		return -rte_errno;
+	}
+
+	return 0;
+}
+
 static int
 flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					 const struct rte_flow_action *action,
@@ -6067,32 +6103,13 @@ flow_hw_validate_action_represented_port(struct rte_eth_dev *dev,
 					  "cannot use represented_port actions"
 					  " without an E-Switch");
 	if (mask_conf && mask_conf->port_id) {
-		struct mlx5_priv *port_priv;
-		struct mlx5_priv *dev_priv;
-
 		if (!action_conf)
 			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
 						  action, "port index was not provided");
-		port_priv = mlx5_port_to_eswitch_info(action_conf->port_id, false);
-		if (!port_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for port");
-		dev_priv = mlx5_dev_to_eswitch_info(dev);
-		if (!dev_priv)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "failed to obtain E-Switch"
-						  " info for transfer proxy");
-		if (port_priv->domain_id != dev_priv->domain_id)
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_ACTION,
-						  action,
-						  "cannot forward to port from"
-						  " a different E-Switch");
+
+		if (flow_hw_validate_target_port_id(dev, action_conf->port_id))
+			return rte_flow_error_set(error, rte_errno, RTE_FLOW_ERROR_TYPE_ACTION,
+						  action, "port index is invalid");
 	}
 	return 0;
 }
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 2/8] net/mlx5: extract queue index validation
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
                       ` (5 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Extract validation of queue index out of validation of
RTE_FLOW_ACTION_TYPE_QUEUE action to allow reuse in template API
flow rule creation implementation.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 59 +++++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_flow.h |  3 ++
 2 files changed, 44 insertions(+), 18 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3a59fb525a..daf0e41c6f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2020,6 +2020,46 @@ mlx5_flow_validate_action_drop(struct rte_eth_dev *dev,
 	return 0;
 }
 
+/*
+ * Check if a queue specified in the queue action is valid.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] action
+ *   Pointer to the queue action.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				const struct rte_flow_action *action,
+				struct rte_flow_error *error)
+{
+	const struct rte_flow_action_queue *queue = action->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (mlx5_is_external_rxq(dev, queue->index))
+		return 0;
+	if (!priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  NULL, "No Rx queues configured");
+	if (queue->index >= priv->rxqs_n)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue index out of range");
+	if (mlx5_rxq_get(dev, queue->index) == NULL)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+					  &queue->index,
+					  "queue is not configured");
+	return 0;
+}
+
 /*
  * Validate the queue action.
  *
@@ -2044,7 +2084,6 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				const struct rte_flow_attr *attr,
 				struct rte_flow_error *error)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_action_queue *queue = action->conf;
 
 	if (!queue)
@@ -2060,23 +2099,7 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
 					  "queue action not supported for egress.");
-	if (mlx5_is_external_rxq(dev, queue->index))
-		return 0;
-	if (!priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  NULL, "No Rx queues configured");
-	if (queue->index >= priv->rxqs_n)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue index out of range");
-	if (mlx5_rxq_get(dev, queue->index) == NULL)
-		return rte_flow_error_set(error, EINVAL,
-					  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
-					  &queue->index,
-					  "queue is not configured");
-	return 0;
+	return mlx5_flow_validate_target_queue(dev, action, error);
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 92e2ecedb3..da7d06d033 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -3018,6 +3018,9 @@ int mlx5_flow_validate_action_mark(struct rte_eth_dev *dev,
 				   uint64_t action_flags,
 				   const struct rte_flow_attr *attr,
 				   struct rte_flow_error *error);
+int mlx5_flow_validate_target_queue(struct rte_eth_dev *dev,
+				    const struct rte_flow_action *action,
+				    struct rte_flow_error *error);
 int mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
 				    uint64_t action_flags,
 				    struct rte_eth_dev *dev,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 3/8] net/mlx5: store pattern template items
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
                       ` (4 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Store items which define the pattern template on creation.
This allows validation of items, provided by the user during flow rule
creation, against pattern template.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index da7d06d033..5c2cc2b7c1 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1522,6 +1522,8 @@ struct rte_flow_pattern_template {
 	/* Manages all GENEVE TLV options used by this pattern template. */
 	struct mlx5_geneve_tlv_options_mng geneve_opt_mng;
 	uint8_t flex_item; /* flex item index. */
+	/* Items on which this pattern template is based on. */
+	struct rte_flow_item *items;
 };
 
 /* Flow action template struct. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c0d763198d..1ad261d529 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8772,6 +8772,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		.mask = &tag_m,
 		.last = NULL
 	};
+	int it_items_size;
 	unsigned int i = 0;
 	int rc;
 
@@ -8815,6 +8816,31 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 	it->attr = *attr;
 	it->item_flags = item_flags;
 	it->orig_item_nb = orig_item_nb;
+	it_items_size = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, tmpl_items, error);
+	if (it_items_size <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to determine buffer size for pattern");
+		goto error;
+	}
+	it_items_size = RTE_ALIGN(it_items_size, 16);
+	it->items = mlx5_malloc(MLX5_MEM_ZERO, it_items_size, 0, rte_dev_numa_node(dev->device));
+	if (it->items == NULL) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Cannot allocate memory for pattern");
+		goto error;
+	}
+	rc = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, it->items, it_items_size, tmpl_items, error);
+	if (rc <= 0) {
+		rte_flow_error_set(error, ENOMEM,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL,
+				   "Failed to store pattern");
+		goto error;
+	}
 	it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching);
 	if (!it->mt) {
 		rte_flow_error_set(error, rte_errno,
@@ -8829,6 +8855,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		else if (attr->egress)
 			it->implicit_tag = true;
 		mlx5_free(copied_items);
+		copied_items = NULL;
 	}
 	/* Either inner or outer, can't both. */
 	if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
@@ -8889,6 +8916,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 			mlx5_geneve_tlv_options_unregister(priv, &it->geneve_opt_mng);
 		if (it->mt)
 			claim_zero(mlx5dr_match_template_destroy(it->mt));
+		mlx5_free(it->items);
 		mlx5_free(it);
 	}
 	if (copied_items)
@@ -8931,6 +8959,7 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev,
 	flow_hw_flex_item_release(dev, &template->flex_item);
 	mlx5_geneve_tlv_options_unregister(priv, &template->geneve_opt_mng);
 	claim_zero(mlx5dr_match_template_destroy(template->mt));
+	mlx5_free(template->items);
 	mlx5_free(template);
 	return 0;
 }
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 4/8] net/mlx5: store original actions in template
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                       ` (2 preceding siblings ...)
  2024-06-26 18:14     ` [PATCH v3 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
                       ` (3 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When an actions template is created in mlx5 PMD, some actions are
replaced with other actions or additional actions are added to
implement the required template semantics on NVIDIA NICs. For example:

- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID action is replaced with
  a modify field action.
- A modify field action is added to preserve metadata
  between FDB and NIC domains.
- A modify field action is added when the quota action is used,
  to amend the ASO syndrome.

The types and order of actions in flow rules based on a given
actions template must match the original actions list, not the
modified one.

This patch preserves the original actions in the actions template,
to allow easier validation of action ordering and types on the
fast path.
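
As an illustration of the kind of fast-path check this enables (a
minimal sketch; the actual validation is introduced in the last patch
of this series):

#include <errno.h>
#include <rte_flow.h>

/* Sketch: verify that action types and their order in a flow rule
 * match the original, unmodified actions of its template.
 */
static int
check_action_types(const struct rte_flow_action user[],
		   const struct rte_flow_action orig[])
{
	unsigned int i;

	for (i = 0; user[i].type != RTE_FLOW_ACTION_TYPE_END; ++i)
		if (user[i].type != orig[i].type)
			return -EINVAL; /* type or order mismatch */
	return 0;
}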

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_hw.c | 11 +++++++++++
 2 files changed, 12 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5c2cc2b7c1..8e99e76e5d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1532,6 +1532,7 @@ struct rte_flow_actions_template {
 	/* Template attributes. */
 	struct rte_flow_actions_template_attr attr;
 	struct rte_flow_action *actions; /* Cached flow actions. */
+	struct rte_flow_action *orig_actions; /* Original flow actions. */
 	struct rte_flow_action *masks; /* Cached action masks.*/
 	struct mlx5dr_action_template *tmpl; /* mlx5dr action template. */
 	uint64_t action_flags; /* Bit-map of all valid action in template. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1ad261d529..1ccaf844b8 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7750,6 +7750,7 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int len, act_len, mask_len;
+	int orig_act_len;
 	unsigned int act_num;
 	unsigned int i;
 	struct rte_flow_actions_template *at = NULL;
@@ -7867,6 +7868,10 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	len += RTE_ALIGN(mask_len, 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->dr_off), 16);
 	len += RTE_ALIGN(act_num * sizeof(*at->src_off), 16);
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, NULL, 0, actions, error);
+	if (orig_act_len <= 0)
+		return NULL;
+	len += RTE_ALIGN(orig_act_len, 16);
 	at = mlx5_malloc(MLX5_MEM_ZERO, len + sizeof(*at),
 			 RTE_CACHE_LINE_SIZE, rte_socket_id());
 	if (!at) {
@@ -7894,6 +7899,12 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
 	at->src_off = RTE_PTR_ADD(at->dr_off,
 				  RTE_ALIGN(act_num * sizeof(*at->dr_off), 16));
 	memcpy(at->src_off, src_off, act_num * sizeof(at->src_off[0]));
+	at->orig_actions = RTE_PTR_ADD(at->src_off,
+				       RTE_ALIGN(act_num * sizeof(*at->src_off), 16));
+	orig_act_len = rte_flow_conv(RTE_FLOW_CONV_OP_ACTIONS, at->orig_actions, orig_act_len,
+				     actions, error);
+	if (orig_act_len <= 0)
+		goto error;
 	at->actions_num = act_num;
 	for (i = 0; i < at->actions_num; ++i)
 		at->dr_off[i] = UINT16_MAX;
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 5/8] net/mlx5: store expected type on indirect action
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                       ` (3 preceding siblings ...)
  2024-06-26 18:14     ` [PATCH v3 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 6/8] net/mlx5: store modify field action Dariusz Sosnowski
                       ` (2 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When a template table is created, a list of unmasked actions is
recorded for future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_INDIRECT actions in
this list with information about the expected indirect action type.
This will be used in follow-up commits which add flow rule operation
validation, specifically to verify whether an indirect action provided
by the user references an action of the correct type.
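
For illustration, the check this enables boils down to comparing type
tags. A simplified sketch, assuming the mlx5 convention that an
indirect action handle encodes its type in the upper bits of the
handle value (the offset below is illustrative, not the PMD's actual
constant; the real code uses MLX5_INDIRECT_ACTION_TYPE_GET()):

#include <stdbool.h>
#include <stdint.h>

#define INDIRECT_TYPE_OFFSET 29 /* illustrative offset */

/* Sketch: extract the type tag from an indirect action handle and
 * compare it with the type expected by the template.
 */
static bool
indirect_type_matches(const void *handle, uint32_t expected_type)
{
	uint32_t type = (uint32_t)(uintptr_t)handle >> INDIRECT_TYPE_OFFSET;

	return type == expected_type;
}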

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  4 ++++
 drivers/net/mlx5/mlx5_flow_hw.c | 22 ++++++++++++++++++++--
 2 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8e99e76e5d..33847e2272 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1442,6 +1442,10 @@ struct mlx5_action_construct_data {
 	uint16_t action_dst; /* mlx5dr_rule_action dst offset. */
 	indirect_list_callback_t indirect_list_cb;
 	union {
+		struct {
+			/* Expected type of indirection action. */
+			enum rte_flow_action_type expected_type;
+		} indirect;
 		struct {
 			/* encap data len. */
 			uint16_t len;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 1ccaf844b8..f337f76450 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -997,6 +997,24 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+static __rte_always_inline int
+__flow_hw_act_data_indirect_append(struct mlx5_priv *priv,
+				   struct mlx5_hw_actions *acts,
+				   enum rte_flow_action_type type,
+				   enum rte_flow_action_type mask_type,
+				   uint16_t action_src,
+				   uint16_t action_dst)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	act_data->indirect.expected_type = mask_type;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
+}
+
 static __rte_always_inline int
 flow_hw_act_data_indirect_list_append(struct mlx5_priv *priv,
 				      struct mlx5_hw_actions *acts,
@@ -2486,9 +2504,9 @@ __flow_hw_translate_actions_template(struct rte_eth_dev *dev,
 				if (flow_hw_shared_action_translate
 				(dev, actions, acts, src_pos, dr_pos))
 					goto err;
-			} else if (__flow_hw_act_data_general_append
+			} else if (__flow_hw_act_data_indirect_append
 					(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT,
-					 src_pos, dr_pos)){
+					 masks->type, src_pos, dr_pos)){
 				goto err;
 			}
 			break;
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 6/8] net/mlx5: store modify field action
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                       ` (4 preceding siblings ...)
  2024-06-26 18:14     ` [PATCH v3 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

When a template table is created, a list of unmasked actions is
recorded for future flow rule insertions.
This patch expands entries for RTE_FLOW_ACTION_TYPE_MODIFY_FIELD
actions in this list with a copy of the action from the template.
This will be used in follow-up commits which add flow rule operation
validation, specifically to validate that the
RTE_FLOW_ACTION_TYPE_MODIFY_FIELD action passed by the user is
configured consistently with the template.
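
A sketch of the comparison this enables (mirroring the fields checked
by the validation added in the last patch of this series):

#include <stdbool.h>
#include <rte_flow.h>

/* Sketch: compare a user-provided modify field action against the
 * copy stored alongside the template entry; only key fields are
 * compared.
 */
static bool
modify_field_matches(const struct rte_flow_action_modify_field *user,
		     const struct rte_flow_action_modify_field *tmpl)
{
	return user->operation == tmpl->operation &&
	       user->src.field == tmpl->src.field &&
	       user->dst.field == tmpl->dst.field &&
	       user->width == tmpl->width;
}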

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 2 ++
 drivers/net/mlx5/mlx5_flow_hw.c | 4 +++-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 33847e2272..6974d4e075 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1472,6 +1472,8 @@ struct mlx5_action_construct_data {
 			 * PRM actions.
 			 */
 			uint32_t mask[MLX5_ACT_MAX_MOD_FIELDS];
+			/* Copy of action passed to the action template. */
+			struct rte_flow_action_modify_field action;
 		} modify_header;
 		struct {
 			bool symmetric_hash_function; /* Symmetric RSS hash */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f337f76450..bc084af87e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1112,6 +1112,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 				     enum rte_flow_action_type type,
 				     uint16_t action_src,
 				     uint16_t action_dst,
+				     const struct rte_flow_action_modify_field *mf,
 				     uint16_t mhdr_cmds_off,
 				     uint16_t mhdr_cmds_end,
 				     bool shared,
@@ -1124,6 +1125,7 @@ __flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
 	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
 	if (!act_data)
 		return -1;
+	act_data->modify_header.action = *mf;
 	act_data->modify_header.mhdr_cmds_off = mhdr_cmds_off;
 	act_data->modify_header.mhdr_cmds_end = mhdr_cmds_end;
 	act_data->modify_header.shared = shared;
@@ -1605,7 +1607,7 @@ flow_hw_modify_field_compile(struct rte_eth_dev *dev,
 	if (shared)
 		return 0;
 	ret = __flow_hw_act_data_hdr_modify_append(priv, acts, RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
-						   src_pos, mhdr->pos,
+						   src_pos, mhdr->pos, conf,
 						   cmds_start, cmds_end, shared,
 						   field, dcopy, mask);
 	if (ret)
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 7/8] common/mlx5: add debug mode indicator
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                       ` (5 preceding siblings ...)
  2024-06-26 18:14     ` [PATCH v3 6/8] net/mlx5: store modify field action Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  2024-06-26 18:14     ` [PATCH v3 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Add mlx5_fp_debug_enabled() function which:

- returns true if RTE_LIBRTE_MLX5_DEBUG is defined,
- returns false otherwise.

This allows code meant to run only when mlx5 debug mode is enabled to
be executed conditionally, without adding conditional compilation
guards throughout the source code.
When mlx5 debug mode is disabled, any code guarded by
mlx5_fp_debug_enabled() is removed by the optimizing compiler through
dead code elimination.
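
For illustration, the intended usage pattern looks as follows (the
real call sites are added in the next patch; expensive_validation()
is a hypothetical placeholder, and mlx5_fp_debug_enabled() is the
helper added here):

#include <errno.h>
#include <rte_ethdev.h>

static int expensive_validation(struct rte_eth_dev *dev); /* hypothetical */

static inline int
fast_path_op(struct rte_eth_dev *dev)
{
	/* In release builds mlx5_fp_debug_enabled() folds to false and
	 * the compiler removes the entire validation branch.
	 */
	if (mlx5_fp_debug_enabled()) {
		if (expensive_validation(dev))
			return -EINVAL;
	}
	/* ... fast path continues ... */
	return 0;
}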

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/common/mlx5/mlx5_common.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 14c70edbef..1abd1e8239 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -109,6 +109,19 @@ pmd_drv_log_basename(const char *s)
 
 #endif /* RTE_LIBRTE_MLX5_DEBUG */
 
+/**
+ * Returns true if debug mode is enabled for fast path operations.
+ */
+static inline bool
+mlx5_fp_debug_enabled(void)
+{
+#ifdef RTE_LIBRTE_MLX5_DEBUG
+	return true;
+#else
+	return false;
+#endif
+}
+
 /* Allocate a buffer on the stack and fill it with a printf format string. */
 #define MKSTR(name, ...) \
 	int mkstr_size_##name = snprintf(NULL, 0, "" __VA_ARGS__); \
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v3 8/8] net/mlx5: add async flow operation validation
  2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
                       ` (6 preceding siblings ...)
  2024-06-26 18:14     ` [PATCH v3 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
@ 2024-06-26 18:14     ` Dariusz Sosnowski
  7 siblings, 0 replies; 31+ messages in thread
From: Dariusz Sosnowski @ 2024-06-26 18:14 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Bing Zhao, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch adds validation to implementations of the following API
functions:

- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()

These validations are enabled if and only if the RTE_LIBRTE_MLX5_DEBUG
macro is defined during compilation.
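
From the application's perspective, a rejected call now fails with a
populated rte_flow_error instead of undefined behavior. A usage
sketch, assuming port_id, queue_id, op_attr, table, pattern and
actions were set up earlier:

struct rte_flow_error error = { 0 };
struct rte_flow *flow;

flow = rte_flow_async_create(port_id, queue_id, &op_attr, table,
			     pattern, 0 /* pattern template index */,
			     actions, 0 /* actions template index */,
			     NULL /* user data */, &error);
if (flow == NULL)
	printf("flow creation failed: %s\n",
	       error.message != NULL ? error.message : "(no message)");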

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_07.rst |   1 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 491 ++++++++++++++++++++++++-
 2 files changed, 488 insertions(+), 4 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_07.rst b/doc/guides/rel_notes/release_24_07.rst
index 7c88de381b..cccb3ca834 100644
--- a/doc/guides/rel_notes/release_24_07.rst
+++ b/doc/guides/rel_notes/release_24_07.rst
@@ -97,6 +97,7 @@ New Features
   * Added match with Tx queue.
   * Added match with external Tx queue.
   * Added match with E-Switch manager.
+  * Added flow item and actions validation to async flow API.
 
 * **Updated TAP driver.**
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index bc084af87e..f389817858 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -332,6 +332,32 @@ mlx5_flow_ct_init(struct rte_eth_dev *dev,
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
 static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
 
+static int flow_hw_async_create_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_template_table *table,
+					 const struct rte_flow_item items[],
+					 const uint8_t pattern_template_index,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+						  const uint32_t queue,
+						  const struct rte_flow_template_table *table,
+						  const uint32_t rule_index,
+						  const struct rte_flow_action actions[],
+						  const uint8_t action_template_index,
+						  struct rte_flow_error *error);
+static int flow_hw_async_update_validate(struct rte_eth_dev *dev,
+					 const uint32_t queue,
+					 const struct rte_flow_hw *flow,
+					 const struct rte_flow_action actions[],
+					 const uint8_t action_template_index,
+					 struct rte_flow_error *error);
+static int flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+					  const uint32_t queue,
+					  const struct rte_flow_hw *flow,
+					  struct rte_flow_error *error);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -3860,6 +3886,11 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_validate(dev, queue, table, items, pattern_template_index,
+						  actions, action_template_index, error))
+			return NULL;
+	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
@@ -3999,10 +4030,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
-	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Flow rule index exceeds table size");
-		return NULL;
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_create_by_index_validate(dev, queue, table, rule_index,
+							   actions, action_template_index, error))
+			return NULL;
 	}
 	flow = mlx5_ipool_malloc(table->flow, &flow_idx);
 	if (!flow)
@@ -4135,6 +4166,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
 	uint32_t res_idx = 0;
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_update_validate(dev, queue, of, actions, action_template_index,
+						  error))
+			return -rte_errno;
+	}
 	aux = mlx5_flow_hw_aux(dev->data->port_id, of);
 	nf = &aux->upd_flow;
 	memset(nf, 0, sizeof(struct rte_flow_hw));
@@ -4243,6 +4279,10 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 							   &fh->table->cfg.attr);
 	int ret;
 
+	if (mlx5_fp_debug_enabled()) {
+		if (flow_hw_async_destroy_validate(dev, queue, fh, error))
+			return -rte_errno;
+	}
 	fh->operation_type = !resizable ?
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_DESTROY :
 			     MLX5_FLOW_HW_FLOW_OP_TYPE_RSZ_TBL_DESTROY;
@@ -16152,6 +16192,449 @@ mlx5_reformat_action_destroy(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static bool
+flow_hw_is_item_masked(const struct rte_flow_item *item)
+{
+	const uint8_t *byte;
+	int size;
+	int i;
+
+	if (item->mask == NULL)
+		return false;
+
+	switch ((int)item->type) {
+	case MLX5_RTE_FLOW_ITEM_TYPE_TAG:
+		size = sizeof(struct rte_flow_item_tag);
+		break;
+	case MLX5_RTE_FLOW_ITEM_TYPE_SQ:
+		size = sizeof(struct mlx5_rte_flow_item_sq);
+		break;
+	default:
+		size = rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_MASK, NULL, 0, item, NULL);
+		/*
+		 * Pattern template items are passed to this function.
+		 * These items were already validated, so error is not expected.
+		 * Also, since the mask is not NULL here, its size is always bigger than 0.
+		 */
+		MLX5_ASSERT(size > 0);
+	}
+
+	byte = (const uint8_t *)item->mask;
+	for (i = 0; i < size; ++i)
+		if (byte[i])
+			return true;
+
+	return false;
+}
+
+static int
+flow_hw_validate_rule_pattern(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t pattern_template_idx,
+			      const struct rte_flow_item items[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_pattern_template *pt;
+	const struct rte_flow_item *pt_item;
+
+	if (pattern_template_idx >= table->nb_item_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Pattern template index out of range");
+
+	pt = table->its[pattern_template_idx];
+	pt_item = pt->items;
+
+	/* If any item was prepended, skip it. */
+	if (pt->implicit_port || pt->implicit_tag)
+		pt_item++;
+
+	for (; pt_item->type != RTE_FLOW_ITEM_TYPE_END; pt_item++, items++) {
+		if (pt_item->type != items->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+						  items, "Item type does not match the template");
+
+		/*
+		 * Assumptions:
+		 * - Currently mlx5dr layer contains info on which fields in masks are supported.
+		 * - This info is not exposed to PMD directly.
+		 * - Because of that, it is assumed that since pattern template is correct,
+		 *   then, items' masks in pattern template have nonzero values only in
+		 *   supported fields.
+		 *   This is known, because a temporary mlx5dr matcher is created during pattern
+		 *   template creation to validate the template.
+		 * - As a result, it is safe to look for nonzero bytes in mask to determine if
+		 *   item spec is needed in a flow rule.
+		 */
+		if (!flow_hw_is_item_masked(pt_item))
+			continue;
+
+		if (items->spec == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
+						  items, "Item spec is required");
+
+		switch (items->type) {
+		const struct rte_flow_item_ethdev *ethdev;
+		const struct rte_flow_item_tx_queue *tx_queue;
+		struct mlx5_txq_ctrl *txq;
+
+		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
+			ethdev = items->spec;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid port");
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_TX_QUEUE:
+			tx_queue = items->spec;
+			if (mlx5_is_external_txq(dev, tx_queue->tx_queue))
+				continue;
+			txq = mlx5_txq_get(dev, tx_queue->tx_queue);
+			if (!txq)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ITEM_SPEC, items,
+							  "Invalid Tx queue");
+			mlx5_txq_release(dev, tx_queue->tx_queue);
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static bool
+flow_hw_valid_indirect_action_type(const struct rte_flow_action *user_action,
+				   const enum rte_flow_action_type expected_type)
+{
+	uint32_t user_indirect_type = MLX5_INDIRECT_ACTION_TYPE_GET(user_action->conf);
+	uint32_t expected_indirect_type;
+
+	switch ((int)expected_type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case MLX5_RTE_FLOW_ACTION_TYPE_RSS:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_RSS;
+		break;
+	case RTE_FLOW_ACTION_TYPE_COUNT:
+	case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_COUNT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_AGE:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_AGE;
+		break;
+	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_CT;
+		break;
+	case RTE_FLOW_ACTION_TYPE_METER_MARK:
+	case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_METER_MARK;
+		break;
+	case RTE_FLOW_ACTION_TYPE_QUOTA:
+		expected_indirect_type = MLX5_INDIRECT_ACTION_TYPE_QUOTA;
+		break;
+	default:
+		return false;
+	}
+
+	return user_indirect_type == expected_indirect_type;
+}
+
+static int
+flow_hw_validate_rule_actions(struct rte_eth_dev *dev,
+			      const struct rte_flow_template_table *table,
+			      const uint8_t actions_template_idx,
+			      const struct rte_flow_action actions[],
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_actions_template *at;
+	const struct mlx5_hw_actions *hw_acts;
+	const struct mlx5_action_construct_data *act_data;
+	unsigned int idx;
+
+	if (actions_template_idx >= table->nb_action_templates)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Actions template index out of range");
+
+	at = table->ats[actions_template_idx].action_template;
+	hw_acts = &table->ats[actions_template_idx].acts;
+
+	for (idx = 0; actions[idx].type != RTE_FLOW_ACTION_TYPE_END; ++idx) {
+		const struct rte_flow_action *user_action = &actions[idx];
+		const struct rte_flow_action *tmpl_action = &at->orig_actions[idx];
+
+		if (user_action->type != tmpl_action->type)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action type does not match type specified in "
+						  "actions template");
+	}
+
+	/*
+	 * Only go through unmasked actions and check if configuration is provided.
+	 * Configuration of masked actions is ignored.
+	 */
+	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
+		const struct rte_flow_action *user_action;
+
+		user_action = &actions[act_data->action_src];
+
+		/* Skip actions which do not require conf. */
+		switch ((int)user_action->type) {
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_COUNT:
+		case MLX5_RTE_FLOW_ACTION_TYPE_METER_MARK:
+			continue;
+		default:
+			break;
+		}
+
+		if (user_action->conf == NULL)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+						  user_action,
+						  "Action requires configuration");
+
+		switch ((int)user_action->type) {
+		enum rte_flow_action_type expected_type;
+		const struct rte_flow_action_ethdev *ethdev;
+		const struct rte_flow_action_modify_field *mf;
+
+		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+			expected_type = act_data->indirect.expected_type;
+			if (!flow_hw_valid_indirect_action_type(user_action, expected_type))
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Indirect action type does not match "
+							  "the type specified in the mask");
+			break;
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+			if (mlx5_flow_validate_target_queue(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			if (mlx5_validate_action_rss(dev, user_action, error))
+				return -rte_errno;
+			break;
+		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			/* TODO: Compare other fields if needed. */
+			mf = user_action->conf;
+			if (mf->operation != act_data->modify_header.action.operation ||
+			    mf->src.field != act_data->modify_header.action.src.field ||
+			    mf->dst.field != act_data->modify_header.action.dst.field ||
+			    mf->width != act_data->modify_header.action.width)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action,
+							  "Modify field configuration does not "
+							  "match configuration from actions "
+							  "template");
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ethdev = user_action->conf;
+			if (flow_hw_validate_target_port_id(dev, ethdev->port_id)) {
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+							  user_action, "Invalid port");
+			}
+			break;
+		default:
+			break;
+		}
+	}
+
+	return 0;
+}
+
+static int
+flow_hw_async_op_validate(struct rte_eth_dev *dev,
+			  const uint32_t queue,
+			  const struct rte_flow_template_table *table,
+			  struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	MLX5_ASSERT(table != NULL);
+
+	if (table->cfg.external && queue >= priv->hw_attr->nb_queue)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Incorrect queue");
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] items
+ *   Items with flow spec value.
+ * @param[in] pattern_template_index
+ *   The item pattern flow follows from the table.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct are populated.
+ */
+static int
+flow_hw_async_create_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_template_table *table,
+			      const struct rte_flow_item items[],
+			      const uint8_t pattern_template_index,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_PATTERN)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only pattern insertion is allowed on this table");
+
+	if (flow_hw_validate_rule_pattern(dev, table, pattern_template_index, items, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_create_by_index() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to create the flow.
+ * @param[in] table
+ *   Pointer to template table.
+ * @param[in] rule_index
+ *   Rule index in the table.
+ *   Inserting a rule to already occupied index results in undefined behavior.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct are set.
+ */
+static int
+flow_hw_async_create_by_index_validate(struct rte_eth_dev *dev,
+				       const uint32_t queue,
+				       const struct rte_flow_template_table *table,
+				       const uint32_t rule_index,
+				       const struct rte_flow_action actions[],
+				       const uint8_t action_template_index,
+				       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, table, error))
+		return -rte_errno;
+
+	if (table->cfg.attr.insertion_type != RTE_FLOW_TABLE_INSERTION_TYPE_INDEX)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Only index insertion is allowed on this table");
+
+	if (rule_index >= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Flow rule index exceeds table size");
+
+	if (flow_hw_validate_rule_actions(dev, table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_actions_update() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to update the flow.
+ * @param[in] flow
+ *   Flow rule to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct are set.
+ */
+static int
+flow_hw_async_update_validate(struct rte_eth_dev *dev,
+			      const uint32_t queue,
+			      const struct rte_flow_hw *flow,
+			      const struct rte_flow_action actions[],
+			      const uint8_t action_template_index,
+			      struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	if (flow_hw_validate_rule_actions(dev, flow->table, action_template_index, actions, error))
+		return -rte_errno;
+
+	return 0;
+}
+
+/**
+ * Validate user input for rte_flow_async_destroy() implementation.
+ *
+ * If RTE_LIBRTE_MLX5_DEBUG macro is not defined, this function is a no-op.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to destroy the flow.
+ * @param[in] flow
+ *   Flow rule to be destroyed.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 if user input is valid.
+ *    Negative errno otherwise, rte_errno and error struct are set.
+ */
+static int
+flow_hw_async_destroy_validate(struct rte_eth_dev *dev,
+			       const uint32_t queue,
+			       const struct rte_flow_hw *flow,
+			       struct rte_flow_error *error)
+{
+	if (flow_hw_async_op_validate(dev, queue, flow->table, error))
+		return -rte_errno;
+
+	return 0;
+}
+
 static struct rte_flow_fp_ops mlx5_flow_hw_fp_ops = {
 	.async_create = flow_hw_async_flow_create,
 	.async_create_by_index = flow_hw_async_flow_create_by_index,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2024-06-26 18:16 UTC | newest]

Thread overview: 31+ messages
2024-06-05 18:34 [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 1/9] ethdev: support duplicating only item mask Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 2/9] net/mlx5: extract target port validation Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 3/9] net/mlx5: extract queue index validation Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 4/9] net/mlx5: store pattern template items Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 5/9] net/mlx5: store original actions in template Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 6/9] net/mlx5: store expected type on indirect action Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 7/9] net/mlx5: store modify field action Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 8/9] common/mlx5: add debug mode indicator Dariusz Sosnowski
2024-06-05 18:34 ` [PATCH 9/9] net/mlx5: add async flow operation validation Dariusz Sosnowski
2024-06-06  8:50 ` [PATCH 0/9] net/mlx5: flow fast path validation Dariusz Sosnowski
2024-06-12 16:18 ` [PATCH v2] ethdev: support duplicating only item mask Dariusz Sosnowski
2024-06-12 22:28   ` Ferruh Yigit
2024-06-12 16:24 ` [PATCH v2 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 6/8] net/mlx5: store modify field action Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
2024-06-12 16:24   ` [PATCH v2 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
2024-06-26 18:14   ` [PATCH v3 0/8] net/mlx5: flow fast path validation Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 1/8] net/mlx5: extract target port validation Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 2/8] net/mlx5: extract queue index validation Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 3/8] net/mlx5: store pattern template items Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 4/8] net/mlx5: store original actions in template Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 5/8] net/mlx5: store expected type on indirect action Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 6/8] net/mlx5: store modify field action Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 7/8] common/mlx5: add debug mode indicator Dariusz Sosnowski
2024-06-26 18:14     ` [PATCH v3 8/8] net/mlx5: add async flow operation validation Dariusz Sosnowski
