DPDK patches and discussions
* [PATCH 1/9] net/mlx5: update flex parser arc types support
@ 2024-09-11 16:04 Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
                   ` (8 more replies)
  0 siblings, 9 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

Add support for IPv4 as an input arc and ESP as an output arc
of the flex parser.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
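A minimal sketch (illustration only, not part of the patch) of a flex
item configuration using the new arcs: an IPv4 input link keyed by the
protocol field and an ESP output link. The values are hypothetical and
the samples/header-length setup is omitted; note the IPv4 link mask
must set only next_proto_id, and to the full 0xff, to pass the new
check:

	static const struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr.next_proto_id = 253, /* example protocol number */
	};
	static const struct rte_flow_item_ipv4 ipv4_mask = {
		.hdr.next_proto_id = 0xff, /* full mask is required */
	};
	struct rte_flow_item_flex_link in_link = {
		.item = {
			.type = RTE_FLOW_ITEM_TYPE_IPV4,
			.spec = &ipv4_spec,
			.mask = &ipv4_mask,
		},
	};
	struct rte_flow_item_flex_link out_link = {
		.item = { .type = RTE_FLOW_ITEM_TYPE_ESP },
	};
	struct rte_flow_item_flex_conf conf = {
		.input_link = &in_link,
		.nb_inputs = 1,
		.output_link = &out_link,
		.nb_outputs = 1,
	};
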
 drivers/net/mlx5/mlx5_flow_flex.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 8a02247406..5b104d583c 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -1111,6 +1111,8 @@ mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
 		return MLX5_GRAPH_ARC_NODE_GENEVE;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
 		return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		return MLX5_GRAPH_ARC_NODE_IPSEC_ESP;
 	default:
 		return -EINVAL;
 	}
@@ -1148,6 +1150,22 @@ mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
 	return rte_be_to_cpu_16(spec->hdr.dst_port);
 }
 
+static int
+mlx5_flex_arc_in_ipv4(const struct rte_flow_item *item,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 0xff };
+
+	if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv4))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid ipv4 item mask, full mask is desired");
+	}
+	return spec->hdr.next_proto_id;
+}
+
 static int
 mlx5_flex_arc_in_ipv6(const struct rte_flow_item *item,
 		      struct rte_flow_error *error)
@@ -1210,6 +1228,9 @@ mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = mlx5_flex_arc_in_udp(rte_item, error);
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = mlx5_flex_arc_in_ipv4(rte_item, error);
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = mlx5_flex_arc_in_ipv6(rte_item, error);
 			break;
-- 
2.34.1



* [PATCH 2/9] net/mlx5: add flex item query tunnel mode routine
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

When parsing the RTE item array, the PMD needs to know
whether a flex item represents a tunnel header.
Add the appropriate tunnel mode query API.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
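A sketch (illustration only) of the intended caller pattern, which
patch 3 adds for real in HWS; 'items' and 'item_flags' are assumed to
come from the surrounding item-array loop:

	enum rte_flow_item_flex_tunnel_mode tunnel_mode =
					FLEX_TUNNEL_MODE_SINGLE;

	if (items->type == RTE_FLOW_ITEM_TYPE_FLEX &&
	    mlx5_flex_get_tunnel_mode(items, &tunnel_mode) == 0 &&
	    tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
		item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
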
 drivers/net/mlx5/mlx5.h           |  2 ++
 drivers/net/mlx5/mlx5_flow_flex.c | 27 +++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 869aac032b..6d163996e4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2605,6 +2605,8 @@ int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
 int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					    void *flex, uint32_t byte_off,
 					    bool is_mask, bool tunnel, uint32_t *value);
+int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+			      enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
 int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
 			    struct rte_flow_item_flex_handle *handle,
 			    bool acquire);
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 5b104d583c..0c41b956b0 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -291,6 +291,33 @@ mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 	return 0;
 }
 
+/**
+ * Get the flex parser tunnel mode.
+ *
+ * @param[in] item
+ *   RTE Flex item.
+ * @param[in, out] tunnel_mode
+ *   Pointer to return tunnel mode.
+ *
+ * @return
+ *   0 on success, otherwise negative error code.
+ */
+int
+mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+			  enum rte_flow_item_flex_tunnel_mode *tunnel_mode)
+{
+	if (item && item->spec && tunnel_mode) {
+		const struct rte_flow_item_flex *spec = item->spec;
+		struct mlx5_flex_item *flex = (struct mlx5_flex_item *)spec->handle;
+
+		if (flex) {
+			*tunnel_mode = flex->tunnel_mode;
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate item pattern into matcher fields according to translation
  * array.
-- 
2.34.1



* [PATCH 3/9] net/mlx5/hws: fix flex item support as tunnel header
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The RTE flex item can represent a tunnel header and
split the inner and outer layer items. HWS did not
support these flex item specifics.

Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 51a3f7be4b..2dfcc5eba6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -3267,8 +3267,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			break;
 		case RTE_FLOW_ITEM_TYPE_FLEX:
 			ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
-			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
-						  MLX5_FLOW_ITEM_OUTER_FLEX;
+			if (ret == 0) {
+				enum rte_flow_item_flex_tunnel_mode tunnel_mode =
+								FLEX_TUNNEL_MODE_SINGLE;
+
+				ret = mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+				if (tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
+					item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
+				else
+					item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+								  MLX5_FLOW_ITEM_OUTER_FLEX;
+			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_MPLS:
 			ret = mlx5dr_definer_conv_item_mpls(&cd, items, i);
-- 
2.34.1



* [PATCH 4/9] net/mlx5: fix flex item tunnel mode handling
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The RTE flex item can represent the tunnel header itself
and split the inner and outer layer items. This should be
reflected in the item flags while the PMD is processing
the item array.

Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 50888944a5..a275154d4b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -558,6 +558,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 	uint64_t last_item = 0;
 
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+		enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;
 		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		int item_type = items->type;
 
@@ -606,6 +607,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 		case RTE_FLOW_ITEM_TYPE_COMPARE:
 			last_item = MLX5_FLOW_ITEM_COMPARE;
 			break;
+		case RTE_FLOW_ITEM_TYPE_FLEX:
+			mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+			last_item = tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL ?
+					MLX5_FLOW_ITEM_FLEX_TUNNEL :
+					tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+						MLX5_FLOW_ITEM_OUTER_FLEX;
+			break;
 		default:
 			break;
 		}
-- 
2.34.1



* [PATCH 5/9] net/mlx5: fix number of supported flex parsers
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (2 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The hardware supports up to 8 flex parser configurations.
Some of them can be utilized internally by the firmware, depending on
the configured profile ("FLEX_PARSER_PROFILE_ENABLE" NV-setting).
The firmware does not report in its capabilities how many flex parser
configurations remain available (this is a device-wide resource that
can be allocated at runtime by other agents - kernel, other DPDK
applications, etc.), and once no parser is available at parse object
creation time, the firmware just returns an error.

Fixes: db25cadc0887 ("net/mlx5: add flex item operations")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6d163996e4..b1423b6868 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -69,7 +69,7 @@
 #define MLX5_ROOT_TBL_MODIFY_NUM		16
 
 /* Maximal number of flex items created on the port.*/
-#define MLX5_PORT_FLEX_ITEM_NUM			4
+#define MLX5_PORT_FLEX_ITEM_NUM			8
 
 /* Maximal number of field/field parts to map into sample registers .*/
 #define MLX5_FLEX_ITEM_MAPPING_NUM		32
-- 
2.34.1



* [PATCH 6/9] app/testpmd: remove flex item init command leftover
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (3 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

Remove the leftover "flow flex init" command. It was used
for debug purposes and has no useful functionality in the
production code.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d04280eb3e..858f4077bd 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -106,7 +106,6 @@ enum index {
 	HASH,
 
 	/* Flex arguments */
-	FLEX_ITEM_INIT,
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
@@ -1317,7 +1316,6 @@ struct parse_action_priv {
 	})
 
 static const enum index next_flex_item[] = {
-	FLEX_ITEM_INIT,
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 	ZERO,
@@ -4171,15 +4169,6 @@ static const struct token token_list[] = {
 		.next = NEXT(next_flex_item),
 		.call = parse_flex,
 	},
-	[FLEX_ITEM_INIT] = {
-		.name = "init",
-		.help = "flex item init",
-		.args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token),
-			     ARGS_ENTRY(struct buffer, port)),
-		.next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN),
-			     NEXT_ENTRY(COMMON_PORT_ID)),
-		.call = parse_flex
-	},
 	[FLEX_ITEM_CREATE] = {
 		.name = "create",
 		.help = "flex item create",
@@ -11431,7 +11420,6 @@ parse_flex(struct context *ctx, const struct token *token,
 		switch (ctx->curr) {
 		default:
 			break;
-		case FLEX_ITEM_INIT:
 		case FLEX_ITEM_CREATE:
 		case FLEX_ITEM_DESTROY:
 			out->command = ctx->curr;
-- 
2.34.1



* [PATCH 7/9] net/mlx5: fix next protocol validation after flex item
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (4 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

During flow validation some items may check the preceding protocols.
In the case of a flex item the next protocol is opaque (or there can
be multiple ones), so we should set a neutral value and allow
successful validation, for example, for the combination of a flex
item followed by an ESP item.

Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
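A sketch (illustration only) of a pattern this change lets pass
validation: ESP following an opaque flex item. 'flex_handle' is
assumed to have been created beforehand with
rte_flow_flex_item_create():

	struct rte_flow_item_flex flex_spec = { .handle = flex_handle };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_FLEX, .spec = &flex_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_ESP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
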
 drivers/net/mlx5/mlx5_flow_dv.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a51d4dd1a4..b18bb430d7 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8196,6 +8196,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 							 tunnel != 0, error);
 			if (ret < 0)
 				return ret;
+			/* Reset for next proto, it is unknown. */
+			next_protocol = 0xff;
 			break;
 		case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 			ret = flow_dv_validate_item_meter_color(dev, items,
-- 
2.34.1



* [PATCH 8/9] net/mlx5: fix non full word sample fields in flex item
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (5 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-11 16:04 ` [PATCH 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

If the sample field in a flex item did not cover an entire
32-bit word (the width was not a full 32 bits) or was not aligned
on a byte boundary, the match on this sample in flows
happened to be ignored or wrongly missed. The field mask
"def" was built with the wrong endianness, and non-byte-aligned
shifts were wrongly performed on the pattern masks and values.

Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
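A self-contained sketch (illustration only) of the bit numbering the
fix implements: bits are counted in network order, so bit 0 is the
MSB of pattern byte 0. The hypothetical helper below returns the
field right-aligned for clarity, while the driver keeps it positioned
according to the sample shift:

	static uint32_t
	get_bitfield_be(const uint8_t *pattern, uint32_t pos, uint32_t width)
	{
		uint32_t first = pos / CHAR_BIT;
		uint32_t last = (pos + width - 1) / CHAR_BIT;
		uint64_t acc = 0;
		uint32_t i;

		/* Accumulate the covering bytes, most significant first. */
		for (i = first; i <= last; i++)
			acc = (acc << CHAR_BIT) | pattern[i];
		/* Drop the bits following the field, mask to the width. */
		acc >>= (last + 1) * CHAR_BIT - (pos + width);
		return acc & (RTE_BIT64(width) - 1);
	}
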
 drivers/net/mlx5/hws/mlx5dr_definer.c |  4 +--
 drivers/net/mlx5/mlx5.h               |  5 ++-
 drivers/net/mlx5/mlx5_flow_dv.c       |  5 ++-
 drivers/net/mlx5/mlx5_flow_flex.c     | 47 +++++++++++++--------------
 4 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 2dfcc5eba6..10b986d66b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -574,7 +574,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
 	idx = fc->fname - MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
 	byte_off -= idx * sizeof(uint32_t);
 	ret = mlx5_flex_get_parser_value_per_byte_off(flex, flex->handle, byte_off,
-						      false, is_inner, &val);
+						      is_inner, &val);
 	if (ret == -1 || !val)
 		return;
 
@@ -2825,7 +2825,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
 	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
 		byte_off = base_off - i * sizeof(uint32_t);
 		ret = mlx5_flex_get_parser_value_per_byte_off(m, v->handle, byte_off,
-							      true, is_inner, &mask);
+							      is_inner, &mask);
 		if (ret == -1) {
 			rte_errno = EINVAL;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b1423b6868..0fb18f7fb1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2600,11 +2600,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
 				   void *key, const struct rte_flow_item *item,
 				   bool is_inner);
 int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
-			    uint32_t idx, uint32_t *pos,
-			    bool is_inner, uint32_t *def);
+			    uint32_t idx, uint32_t *pos, bool is_inner);
 int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					    void *flex, uint32_t byte_off,
-					    bool is_mask, bool tunnel, uint32_t *value);
+					    bool tunnel, uint32_t *value);
 int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
 			      enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
 int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b18bb430d7..d2a3f829d5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1526,7 +1526,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 	const struct mlx5_flex_pattern_field *map;
 	uint32_t offset = data->offset;
 	uint32_t width_left = width;
-	uint32_t def;
 	uint32_t cur_width = 0;
 	uint32_t tmp_ofs;
 	uint32_t idx = 0;
@@ -1551,7 +1550,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 	tmp_ofs = pos < data->offset ? data->offset - pos : 0;
 	for (j = i; i < flex->mapnum && width_left > 0; ) {
 		map = flex->map + i;
-		id = mlx5_flex_get_sample_id(flex, i, &pos, false, &def);
+		id = mlx5_flex_get_sample_id(flex, i, &pos, false);
 		if (id == -1) {
 			i++;
 			/* All left length is dummy */
@@ -1570,7 +1569,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 			 * 2. Width has been covered.
 			 */
 			for (j = i + 1; j < flex->mapnum; j++) {
-				tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false, &def);
+				tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false);
 				if (tmp_id == -1) {
 					i = j;
 					pos -= flex->map[j].width;
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 0c41b956b0..bf38643a23 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -118,28 +118,32 @@ mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
 		       uint32_t pos, uint32_t width, uint32_t shift)
 {
 	const uint8_t *ptr = item->pattern + pos / CHAR_BIT;
-	uint32_t val, vbits;
+	uint32_t val, vbits, skip = pos % CHAR_BIT;
 
 	/* Proceed the bitfield start byte. */
 	MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT && width);
 	MLX5_ASSERT(width + shift <= sizeof(uint32_t) * CHAR_BIT);
 	if (item->length <= pos / CHAR_BIT)
 		return 0;
-	val = *ptr++ >> (pos % CHAR_BIT);
+	/* Bits are enumerated in byte in network order: 01234567 */
+	val = *ptr++;
 	vbits = CHAR_BIT - pos % CHAR_BIT;
-	pos = (pos + vbits) / CHAR_BIT;
+	pos = RTE_ALIGN_CEIL(pos, CHAR_BIT) / CHAR_BIT;
 	vbits = RTE_MIN(vbits, width);
-	val &= RTE_BIT32(vbits) - 1;
+	/* Load bytes to cover the field width, checking pattern boundary */
 	while (vbits < width && pos < item->length) {
 		uint32_t part = RTE_MIN(width - vbits, (uint32_t)CHAR_BIT);
 		uint32_t tmp = *ptr++;
 
-		pos++;
-		tmp &= RTE_BIT32(part) - 1;
-		val |= tmp << vbits;
+		val |= tmp << RTE_ALIGN_CEIL(vbits, CHAR_BIT);
 		vbits += part;
+		pos++;
 	}
-	return rte_bswap32(val <<= shift);
+	val = rte_cpu_to_be_32(val);
+	val <<= skip;
+	val >>= shift;
+	val &= (RTE_BIT64(width) - 1) << (sizeof(uint32_t) * CHAR_BIT - shift - width);
+	return val;
 }
 
 #define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
@@ -211,21 +215,17 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
  *   Where to search the value and mask.
  * @param[in] is_inner
  *   For inner matching or not.
- * @param[in, def] def
- *   Mask generated by mapping shift and width.
  *
  * @return
  *   0 on success, -1 to ignore.
  */
 int
 mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
-			uint32_t idx, uint32_t *pos,
-			bool is_inner, uint32_t *def)
+			uint32_t idx, uint32_t *pos, bool is_inner)
 {
 	const struct mlx5_flex_pattern_field *map = tp->map + idx;
 	uint32_t id = map->reg_id;
 
-	*def = (RTE_BIT64(map->width) - 1) << map->shift;
 	/* Skip placeholders for DUMMY fields. */
 	if (id == MLX5_INVALID_SAMPLE_REG_ID) {
 		*pos += map->width;
@@ -252,8 +252,6 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
  *   Mlx5 flex item sample mapping handle.
  * @param[in] byte_off
  *   Mlx5 flex item format_select_dw.
- * @param[in] is_mask
- *   Spec or mask.
  * @param[in] tunnel
  *   Tunnel mode or not.
  * @param[in, def] value
@@ -265,25 +263,23 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
 int
 mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					void *flex, uint32_t byte_off,
-					bool is_mask, bool tunnel, uint32_t *value)
+					bool tunnel, uint32_t *value)
 {
 	struct mlx5_flex_pattern_field *map;
 	struct mlx5_flex_item *tp = flex;
-	uint32_t def, i, pos, val;
+	uint32_t i, pos, val;
 	int id;
 
 	*value = 0;
 	for (i = 0, pos = 0; i < tp->mapnum && pos < item->length * CHAR_BIT; i++) {
 		map = tp->map + i;
-		id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel, &def);
+		id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel);
 		if (id == -1)
 			continue;
 		if (id >= (int)tp->devx_fp->num_samples || id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
 			return -1;
 		if (byte_off == tp->devx_fp->sample_info[id].sample_dw_data * sizeof(uint32_t)) {
 			val = mlx5_flex_get_bitfield(item, pos, map->width, map->shift);
-			if (is_mask)
-				val &= RTE_BE32(def);
 			*value |= val;
 		}
 		pos += map->width;
@@ -355,10 +351,10 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 	spec = item->spec;
 	mask = item->mask;
 	tp = (struct mlx5_flex_item *)spec->handle;
-	for (i = 0; i < tp->mapnum; i++) {
+	for (i = 0; i < tp->mapnum && pos < (spec->length * CHAR_BIT); i++) {
 		struct mlx5_flex_pattern_field *map = tp->map + i;
 		uint32_t val, msk, def;
-		int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner, &def);
+		int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner);
 
 		if (id == -1)
 			continue;
@@ -366,11 +362,14 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 		if (id >= (int)tp->devx_fp->num_samples ||
 		    id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
 			return;
+		def = (uint32_t)(RTE_BIT64(map->width) - 1);
+		def <<= (sizeof(uint32_t) * CHAR_BIT - map->shift - map->width);
 		val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
-		msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
+		msk = pos < (mask->length * CHAR_BIT) ?
+		      mlx5_flex_get_bitfield(mask, pos, map->width, map->shift) : def;
 		sample_id = tp->devx_fp->sample_ids[id];
 		mlx5_flex_set_match_sample(misc4_m, misc4_v,
-					   def, msk & def, val & msk & def,
+					   def, msk, val & msk,
 					   sample_id, id);
 		pos += map->width;
 	}
-- 
2.34.1



* [PATCH 9/9] net/mlx5: fix flex item header length field translation
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (6 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
@ 2024-09-11 16:04 ` Viacheslav Ovsiienko
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
  8 siblings, 0 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-11 16:04 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

There are hardware-imposed limitations on the header length
field description for the mask and shift combinations in
FIELD_MODE_OFFSET mode.

The patch updates:
  - the parameter check for the FIELD_MODE_OFFSET header length
    field
  - the check whether the length field crosses dword boundaries
    in the header
  - the mask extension to the hardware-required 6-bit width
  - the adjustment of the mask left margin offset, preventing
    a dword offset

Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
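A sketch (illustration only) of the updated mask helper's arithmetic,
assuming the usual 6-bit base mask reported by firmware and
MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD == 2, matching the table in
the patch (shift 0 -> 0xFC, shift 2 -> 0x3F, shift 5 -> 0x07):

	static uint8_t
	hdr_len_mask(uint8_t shift)
	{
		const uint8_t base_mask = 0x3F; /* 6 bits wide */
		int diff = (int)shift - 2;      /* dword shift */

		return diff < 0 ? base_mask << -diff : base_mask >> diff;
	}
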
 drivers/net/mlx5/mlx5_flow_flex.c | 120 ++++++++++++++++--------------
 1 file changed, 66 insertions(+), 54 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index bf38643a23..afed16985a 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -449,12 +449,14 @@ mlx5_flex_release_index(struct rte_eth_dev *dev,
  *
  *   shift      mask
  * ------- ---------------
- *    0     b111100  0x3C
- *    1     b111110  0x3E
- *    2     b111111  0x3F
- *    3     b011111  0x1F
- *    4     b001111  0x0F
- *    5     b000111  0x07
+ *    0     b11111100  0xFC
+ *    1     b01111110  0x7E
+ *    2     b00111111  0x3F
+ *    3     b00011111  0x1F
+ *    4     b00001111  0x0F
+ *    5     b00000111  0x07
+ *    6     b00000011  0x03
+ *    7     b00000001  0x01
  */
 static uint8_t
 mlx5_flex_hdr_len_mask(uint8_t shift,
@@ -464,8 +466,7 @@ mlx5_flex_hdr_len_mask(uint8_t shift,
 	int diff = shift - MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
 
 	base_mask = mlx5_hca_parse_graph_node_base_hdr_len_mask(attr);
-	return diff == 0 ? base_mask :
-	       diff < 0 ? (base_mask << -diff) & base_mask : base_mask >> diff;
+	return diff < 0 ? base_mask << -diff : base_mask >> diff;
 }
 
 static int
@@ -476,7 +477,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 {
 	const struct rte_flow_item_flex_field *field = &conf->next_header;
 	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
-	uint32_t len_width, mask;
 
 	if (field->field_base % CHAR_BIT)
 		return rte_flow_error_set
@@ -504,7 +504,14 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 				 "negative header length field base (FIXED)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
 		break;
-	case FIELD_MODE_OFFSET:
+	case FIELD_MODE_OFFSET: {
+		uint32_t msb, lsb;
+		int32_t shift = field->offset_shift;
+		uint32_t offset = field->offset_base;
+		uint32_t mask = field->offset_mask;
+		uint32_t wmax = attr->header_length_mask_width +
+				MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
+
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
 			return rte_flow_error_set
@@ -514,47 +521,73 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "field size is a must for offset mode");
-		if (field->field_size + field->offset_base < attr->header_length_mask_width)
+		if ((offset ^ (field->field_size + offset)) >> 5)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "field size plus offset_base is too small");
-		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
-		if (field->offset_mask == 0 ||
-		    !rte_is_power_of_2(field->offset_mask + 1))
+				 "field crosses the 32-bit word boundary");
+		/* Hardware counts in dwords, all shifts done by offset within mask */
+		if (shift < 0 || (uint32_t)shift >= wmax)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field shift exceeds limits (OFFSET)");
+		if (!mask)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "zero length field offset mask (OFFSET)");
+		msb = rte_fls_u32(mask) - 1;
+		lsb = rte_bsf32(mask);
+		if (!rte_is_power_of_2((mask >> lsb) + 1))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "invalid length field offset mask (OFFSET)");
-		len_width = rte_fls_u32(field->offset_mask);
-		if (len_width > attr->header_length_mask_width)
+				 "length field offset mask not contiguous (OFFSET)");
+		if (msb >= field->field_size)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field offset mask too wide (OFFSET)");
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
+				 "length field offset mask exceeds field size (OFFSET)");
+		if (msb >= wmax)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (OFFSET)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+				 "length field offset mask exceeds supported width (OFFSET)");
+		if (mask & ~mlx5_flex_hdr_len_mask(shift, attr))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "mask and shift combination not supported (OFFSET)");
+		msb++;
+		offset += field->field_size - msb;
+		if (msb < attr->header_length_mask_width) {
+			if (attr->header_length_mask_width - msb > offset)
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					 "field size plus offset_base is too small");
+			offset += msb;
+			/*
+			 * Here we can move to preceding dword. Hardware does
+			 * cyclic left shift so we should avoid this and stay
+			 * at current dword offset.
+			 */
+			offset = (offset & ~0x1Fu) |
+				 ((offset - attr->header_length_mask_width) & 0x1F);
+		}
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+		node->header_length_field_mask = mask;
+		node->header_length_field_shift = shift;
+		node->header_length_field_offset = offset;
 		break;
+	}
 	case FIELD_MODE_BITMASK:
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "unsupported header length field mode (BITMASK)");
-		if (attr->header_length_mask_width < field->field_size)
+		if (field->offset_shift > 15 || field->offset_shift < 0)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field width exceeds limit");
+				 "header length field shift exceeds limit (BITMASK)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (BITMASK)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+		node->header_length_field_mask = field->offset_mask;
+		node->header_length_field_shift = field->offset_shift;
+		node->header_length_field_offset = field->offset_base;
 		break;
 	default:
 		return rte_flow_error_set
@@ -567,27 +600,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 			 "header length field base exceeds limit");
 	node->header_length_base_value = field->field_base / CHAR_BIT;
-	if (field->field_mode == FIELD_MODE_OFFSET ||
-	    field->field_mode == FIELD_MODE_BITMASK) {
-		if (field->offset_shift > 15 || field->offset_shift < 0)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field shift exceeds limit");
-		node->header_length_field_shift = field->offset_shift;
-		node->header_length_field_offset = field->offset_base;
-	}
-	if (field->field_mode == FIELD_MODE_OFFSET) {
-		if (field->field_size > attr->header_length_mask_width) {
-			node->header_length_field_offset +=
-				field->field_size - attr->header_length_mask_width;
-		} else if (field->field_size < attr->header_length_mask_width) {
-			node->header_length_field_offset -=
-				attr->header_length_mask_width - field->field_size;
-			node->header_length_field_mask =
-					RTE_MIN(node->header_length_field_mask,
-						(1u << field->field_size) - 1);
-		}
-	}
 	return 0;
 }
 
-- 
2.34.1



* [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
  2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                   ` (7 preceding siblings ...)
  2024-09-11 16:04 ` [PATCH 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
@ 2024-09-18 13:46 ` Viacheslav Ovsiienko
  2024-09-18 13:46   ` [PATCH v2 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
                     ` (10 more replies)
  8 siblings, 11 replies; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

This is a series of independent patches related to the flex item.
There is no direct dependency between the patches besides the
merging dependency inferred by git; the latter is the reason the
patches are sent as a series. For more details, please see the
individual patch commit messages.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Viacheslav Ovsiienko (9):
  net/mlx5: update flex parser arc types support
  net/mlx5: add flex item query tunnel mode routine
  net/mlx5/hws: fix flex item support as tunnel header
  net/mlx5: fix flex item tunnel mode handling
  net/mlx5: fix number of supported flex parsers
  app/testpmd: remove flex item init command leftover
  net/mlx5: fix next protocol validation after flex item
  net/mlx5: fix non full word sample fields in flex item
  net/mlx5: fix flex item header length field translation

 app/test-pmd/cmdline_flow.c           |  12 --
 drivers/net/mlx5/hws/mlx5dr_definer.c |  17 +-
 drivers/net/mlx5/mlx5.h               |   9 +-
 drivers/net/mlx5/mlx5_flow_dv.c       |   7 +-
 drivers/net/mlx5/mlx5_flow_flex.c     | 215 ++++++++++++++++----------
 drivers/net/mlx5/mlx5_flow_hw.c       |   8 +
 6 files changed, 167 insertions(+), 101 deletions(-)

-- 
2.34.1



* [PATCH v2 1/9] net/mlx5: update flex parser arc types support
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:57     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
                     ` (9 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

Add support for IPv4 as an input arc and ESP as an output arc
of the flex parser.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_flex.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 8a02247406..5b104d583c 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -1111,6 +1111,8 @@ mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
 		return MLX5_GRAPH_ARC_NODE_GENEVE;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
 		return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		return MLX5_GRAPH_ARC_NODE_IPSEC_ESP;
 	default:
 		return -EINVAL;
 	}
@@ -1148,6 +1150,22 @@ mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
 	return rte_be_to_cpu_16(spec->hdr.dst_port);
 }
 
+static int
+mlx5_flex_arc_in_ipv4(const struct rte_flow_item *item,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 0xff };
+
+	if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv4))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid ipv4 item mask, full mask is desired");
+	}
+	return spec->hdr.next_proto_id;
+}
+
 static int
 mlx5_flex_arc_in_ipv6(const struct rte_flow_item *item,
 		      struct rte_flow_error *error)
@@ -1210,6 +1228,9 @@ mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = mlx5_flex_arc_in_udp(rte_item, error);
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = mlx5_flex_arc_in_ipv4(rte_item, error);
+			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = mlx5_flex_arc_in_ipv6(rte_item, error);
 			break;
-- 
2.34.1



* [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
  2024-09-18 13:46   ` [PATCH v2 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:57     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
                     ` (8 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

When parsing the RTE item array, the PMD needs to know
whether a flex item represents a tunnel header.
Add the appropriate tunnel mode query API.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5.h           |  2 ++
 drivers/net/mlx5/mlx5_flow_flex.c | 27 +++++++++++++++++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 869aac032b..6d163996e4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2605,6 +2605,8 @@ int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
 int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					    void *flex, uint32_t byte_off,
 					    bool is_mask, bool tunnel, uint32_t *value);
+int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+			      enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
 int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
 			    struct rte_flow_item_flex_handle *handle,
 			    bool acquire);
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 5b104d583c..0c41b956b0 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -291,6 +291,33 @@ mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 	return 0;
 }
 
+/**
+ * Get the flex parser tunnel mode.
+ *
+ * @param[in] item
+ *   RTE Flex item.
+ * @param[in, out] tunnel_mode
+ *   Pointer to return tunnel mode.
+ *
+ * @return
+ *   0 on success, otherwise negative error code.
+ */
+int
+mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+			  enum rte_flow_item_flex_tunnel_mode *tunnel_mode)
+{
+	if (item && item->spec && tunnel_mode) {
+		const struct rte_flow_item_flex *spec = item->spec;
+		struct mlx5_flex_item *flex = (struct mlx5_flex_item *)spec->handle;
+
+		if (flex) {
+			*tunnel_mode = flex->tunnel_mode;
+			return 0;
+		}
+	}
+	return -EINVAL;
+}
+
 /**
  * Translate item pattern into matcher fields according to translation
  * array.
-- 
2.34.1



* [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
  2024-09-18 13:46   ` [PATCH v2 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
  2024-09-18 13:46   ` [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:57     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
                     ` (7 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The RTE flex item can represent a tunnel header and
split the inner and outer layer items. HWS did not
support these flex item specifics.

Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 51a3f7be4b..2dfcc5eba6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -3267,8 +3267,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
 			break;
 		case RTE_FLOW_ITEM_TYPE_FLEX:
 			ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
-			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
-						  MLX5_FLOW_ITEM_OUTER_FLEX;
+			if (ret == 0) {
+				enum rte_flow_item_flex_tunnel_mode tunnel_mode =
+								FLEX_TUNNEL_MODE_SINGLE;
+
+				ret = mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+				if (tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
+					item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
+				else
+					item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+								  MLX5_FLOW_ITEM_OUTER_FLEX;
+			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_MPLS:
 			ret = mlx5dr_definer_conv_item_mpls(&cd, items, i);
-- 
2.34.1



* [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (2 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:57     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
                     ` (6 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The RTE flex item can represent the tunnel header itself
and split the inner and outer layer items. This should be
reflected in the item flags while the PMD is processing
the item array.

Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 50888944a5..a275154d4b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -558,6 +558,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 	uint64_t last_item = 0;
 
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+		enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;
 		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
 		int item_type = items->type;
 
@@ -606,6 +607,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
 		case RTE_FLOW_ITEM_TYPE_COMPARE:
 			last_item = MLX5_FLOW_ITEM_COMPARE;
 			break;
+		case RTE_FLOW_ITEM_TYPE_FLEX:
+			mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+			last_item = tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL ?
+					MLX5_FLOW_ITEM_FLEX_TUNNEL :
+					tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+						MLX5_FLOW_ITEM_OUTER_FLEX;
+			break;
 		default:
 			break;
 		}
-- 
2.34.1



* [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (3 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:57     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
                     ` (5 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

The hardware supports up to 8 flex parser configurations.
Some of them can be utilized internally by the firmware, depending on
the configured profile ("FLEX_PARSER_PROFILE_ENABLE" NV-setting).
The firmware does not report in its capabilities how many flex parser
configurations remain available (this is a device-wide resource that
can be allocated at runtime by other agents - kernel, other DPDK
applications, etc.), and once no parser is available at parse object
creation time, the firmware just returns an error.

Fixes: db25cadc0887 ("net/mlx5: add flex item operations")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6d163996e4..b1423b6868 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -69,7 +69,7 @@
 #define MLX5_ROOT_TBL_MODIFY_NUM		16
 
 /* Maximal number of flex items created on the port.*/
-#define MLX5_PORT_FLEX_ITEM_NUM			4
+#define MLX5_PORT_FLEX_ITEM_NUM			8
 
 /* Maximal number of field/field parts to map into sample registers .*/
 #define MLX5_FLEX_ITEM_MAPPING_NUM		32
-- 
2.34.1



* [PATCH v2 6/9] app/testpmd: remove flex item init command leftover
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (4 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:58     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
                     ` (4 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski

Remove the leftover "flow flex init" command. It was used
for debug purposes and has no useful functionality in the
production code.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index d04280eb3e..858f4077bd 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -106,7 +106,6 @@ enum index {
 	HASH,
 
 	/* Flex arguments */
-	FLEX_ITEM_INIT,
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 
@@ -1317,7 +1316,6 @@ struct parse_action_priv {
 	})
 
 static const enum index next_flex_item[] = {
-	FLEX_ITEM_INIT,
 	FLEX_ITEM_CREATE,
 	FLEX_ITEM_DESTROY,
 	ZERO,
@@ -4171,15 +4169,6 @@ static const struct token token_list[] = {
 		.next = NEXT(next_flex_item),
 		.call = parse_flex,
 	},
-	[FLEX_ITEM_INIT] = {
-		.name = "init",
-		.help = "flex item init",
-		.args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token),
-			     ARGS_ENTRY(struct buffer, port)),
-		.next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN),
-			     NEXT_ENTRY(COMMON_PORT_ID)),
-		.call = parse_flex
-	},
 	[FLEX_ITEM_CREATE] = {
 		.name = "create",
 		.help = "flex item create",
@@ -11431,7 +11420,6 @@ parse_flex(struct context *ctx, const struct token *token,
 		switch (ctx->curr) {
 		default:
 			break;
-		case FLEX_ITEM_INIT:
 		case FLEX_ITEM_CREATE:
 		case FLEX_ITEM_DESTROY:
 			out->command = ctx->curr;
-- 
2.34.1



* [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (5 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:58     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
                     ` (3 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

During flow validation some items may check the preceding protocols.
In the case of a flex item the next protocol is opaque (or there can
be multiple ones), so we should set a neutral value and allow
successful validation, for example, for the combination of a flex
item followed by an ESP item.

Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a51d4dd1a4..b18bb430d7 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8196,6 +8196,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 							 tunnel != 0, error);
 			if (ret < 0)
 				return ret;
+			/* Reset for next proto, it is unknown. */
+			next_protocol = 0xff;
 			break;
 		case RTE_FLOW_ITEM_TYPE_METER_COLOR:
 			ret = flow_dv_validate_item_meter_color(dev, items,
-- 
2.34.1



* [PATCH v2 8/9] net/mlx5: fix non full word sample fields in flex item
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (6 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:58     ` Dariusz Sosnowski
  2024-09-18 13:46   ` [PATCH v2 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
                     ` (2 subsequent siblings)
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

If the sample field in a flex item did not cover an entire
32-bit word (the width was not a full 32 bits) or was not aligned
on a byte boundary, the match on this sample in flows
happened to be ignored or wrongly missed. The field mask
"def" was built with the wrong endianness, and non-byte-aligned
shifts were wrongly performed on the pattern masks and values.

Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c |  4 +--
 drivers/net/mlx5/mlx5.h               |  5 ++-
 drivers/net/mlx5/mlx5_flow_dv.c       |  5 ++-
 drivers/net/mlx5/mlx5_flow_flex.c     | 47 +++++++++++++--------------
 4 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 2dfcc5eba6..10b986d66b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -574,7 +574,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
 	idx = fc->fname - MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
 	byte_off -= idx * sizeof(uint32_t);
 	ret = mlx5_flex_get_parser_value_per_byte_off(flex, flex->handle, byte_off,
-						      false, is_inner, &val);
+						      is_inner, &val);
 	if (ret == -1 || !val)
 		return;
 
@@ -2825,7 +2825,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
 	for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
 		byte_off = base_off - i * sizeof(uint32_t);
 		ret = mlx5_flex_get_parser_value_per_byte_off(m, v->handle, byte_off,
-							      true, is_inner, &mask);
+							      is_inner, &mask);
 		if (ret == -1) {
 			rte_errno = EINVAL;
 			return rte_errno;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b1423b6868..0fb18f7fb1 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2600,11 +2600,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
 				   void *key, const struct rte_flow_item *item,
 				   bool is_inner);
 int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
-			    uint32_t idx, uint32_t *pos,
-			    bool is_inner, uint32_t *def);
+			    uint32_t idx, uint32_t *pos, bool is_inner);
 int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					    void *flex, uint32_t byte_off,
-					    bool is_mask, bool tunnel, uint32_t *value);
+					    bool tunnel, uint32_t *value);
 int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
 			      enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
 int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b18bb430d7..d2a3f829d5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1526,7 +1526,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 	const struct mlx5_flex_pattern_field *map;
 	uint32_t offset = data->offset;
 	uint32_t width_left = width;
-	uint32_t def;
 	uint32_t cur_width = 0;
 	uint32_t tmp_ofs;
 	uint32_t idx = 0;
@@ -1551,7 +1550,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 	tmp_ofs = pos < data->offset ? data->offset - pos : 0;
 	for (j = i; i < flex->mapnum && width_left > 0; ) {
 		map = flex->map + i;
-		id = mlx5_flex_get_sample_id(flex, i, &pos, false, &def);
+		id = mlx5_flex_get_sample_id(flex, i, &pos, false);
 		if (id == -1) {
 			i++;
 			/* All left length is dummy */
@@ -1570,7 +1569,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
 			 * 2. Width has been covered.
 			 */
 			for (j = i + 1; j < flex->mapnum; j++) {
-				tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false, &def);
+				tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false);
 				if (tmp_id == -1) {
 					i = j;
 					pos -= flex->map[j].width;
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 0c41b956b0..bf38643a23 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -118,28 +118,32 @@ mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
 		       uint32_t pos, uint32_t width, uint32_t shift)
 {
 	const uint8_t *ptr = item->pattern + pos / CHAR_BIT;
-	uint32_t val, vbits;
+	uint32_t val, vbits, skip = pos % CHAR_BIT;
 
 	/* Proceed the bitfield start byte. */
 	MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT && width);
 	MLX5_ASSERT(width + shift <= sizeof(uint32_t) * CHAR_BIT);
 	if (item->length <= pos / CHAR_BIT)
 		return 0;
-	val = *ptr++ >> (pos % CHAR_BIT);
+	/* Bits are enumerated in byte in network order: 01234567 */
+	val = *ptr++;
 	vbits = CHAR_BIT - pos % CHAR_BIT;
-	pos = (pos + vbits) / CHAR_BIT;
+	pos = RTE_ALIGN_CEIL(pos, CHAR_BIT) / CHAR_BIT;
 	vbits = RTE_MIN(vbits, width);
-	val &= RTE_BIT32(vbits) - 1;
+	/* Load bytes to cover the field width, checking pattern boundary */
 	while (vbits < width && pos < item->length) {
 		uint32_t part = RTE_MIN(width - vbits, (uint32_t)CHAR_BIT);
 		uint32_t tmp = *ptr++;
 
-		pos++;
-		tmp &= RTE_BIT32(part) - 1;
-		val |= tmp << vbits;
+		val |= tmp << RTE_ALIGN_CEIL(vbits, CHAR_BIT);
 		vbits += part;
+		pos++;
 	}
-	return rte_bswap32(val <<= shift);
+	val = rte_cpu_to_be_32(val);
+	val <<= skip;
+	val >>= shift;
+	val &= (RTE_BIT64(width) - 1) << (sizeof(uint32_t) * CHAR_BIT - shift - width);
+	return val;
 }
 
 #define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
@@ -211,21 +215,17 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
  *   Where to search the value and mask.
  * @param[in] is_inner
  *   For inner matching or not.
- * @param[in, def] def
- *   Mask generated by mapping shift and width.
  *
  * @return
  *   0 on success, -1 to ignore.
  */
 int
 mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
-			uint32_t idx, uint32_t *pos,
-			bool is_inner, uint32_t *def)
+			uint32_t idx, uint32_t *pos, bool is_inner)
 {
 	const struct mlx5_flex_pattern_field *map = tp->map + idx;
 	uint32_t id = map->reg_id;
 
-	*def = (RTE_BIT64(map->width) - 1) << map->shift;
 	/* Skip placeholders for DUMMY fields. */
 	if (id == MLX5_INVALID_SAMPLE_REG_ID) {
 		*pos += map->width;
@@ -252,8 +252,6 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
  *   Mlx5 flex item sample mapping handle.
  * @param[in] byte_off
  *   Mlx5 flex item format_select_dw.
- * @param[in] is_mask
- *   Spec or mask.
  * @param[in] tunnel
  *   Tunnel mode or not.
  * @param[in, def] value
@@ -265,25 +263,23 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
 int
 mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
 					void *flex, uint32_t byte_off,
-					bool is_mask, bool tunnel, uint32_t *value)
+					bool tunnel, uint32_t *value)
 {
 	struct mlx5_flex_pattern_field *map;
 	struct mlx5_flex_item *tp = flex;
-	uint32_t def, i, pos, val;
+	uint32_t i, pos, val;
 	int id;
 
 	*value = 0;
 	for (i = 0, pos = 0; i < tp->mapnum && pos < item->length * CHAR_BIT; i++) {
 		map = tp->map + i;
-		id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel, &def);
+		id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel);
 		if (id == -1)
 			continue;
 		if (id >= (int)tp->devx_fp->num_samples || id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
 			return -1;
 		if (byte_off == tp->devx_fp->sample_info[id].sample_dw_data * sizeof(uint32_t)) {
 			val = mlx5_flex_get_bitfield(item, pos, map->width, map->shift);
-			if (is_mask)
-				val &= RTE_BE32(def);
 			*value |= val;
 		}
 		pos += map->width;
@@ -355,10 +351,10 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 	spec = item->spec;
 	mask = item->mask;
 	tp = (struct mlx5_flex_item *)spec->handle;
-	for (i = 0; i < tp->mapnum; i++) {
+	for (i = 0; i < tp->mapnum && pos < (spec->length * CHAR_BIT); i++) {
 		struct mlx5_flex_pattern_field *map = tp->map + i;
 		uint32_t val, msk, def;
-		int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner, &def);
+		int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner);
 
 		if (id == -1)
 			continue;
@@ -366,11 +362,14 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 		if (id >= (int)tp->devx_fp->num_samples ||
 		    id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
 			return;
+		def = (uint32_t)(RTE_BIT64(map->width) - 1);
+		def <<= (sizeof(uint32_t) * CHAR_BIT - map->shift - map->width);
 		val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
-		msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
+		msk = pos < (mask->length * CHAR_BIT) ?
+		      mlx5_flex_get_bitfield(mask, pos, map->width, map->shift) : def;
 		sample_id = tp->devx_fp->sample_ids[id];
 		mlx5_flex_set_match_sample(misc4_m, misc4_v,
-					   def, msk & def, val & msk & def,
+					   def, msk, val & msk,
 					   sample_id, id);
 		pos += map->width;
 	}
-- 
2.34.1
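
As a reading aid, a minimal self-contained model of the reworked bitfield
extraction above (illustrative code, not the driver routine): bits are
numbered in network order, and the field is returned MSB-aligned in 32 bits
and right-shifted by the mapping shift, assuming width + shift <= 32.

#include <stdint.h>

/*
 * Extract `width` bits starting at bit `pos` from `pattern` of
 * `length` bytes. Bits are numbered in network order: bit 0 is
 * the MSB of byte 0.
 */
static uint32_t
get_bitfield_be(const uint8_t *pattern, uint32_t length,
		uint32_t pos, uint32_t width, uint32_t shift)
{
	uint64_t window = 0;
	uint32_t byte = pos / 8, skip = pos % 8, i;
	uint32_t val;

	for (i = 0; i < 8; i++)	/* load a 64-bit big-endian window */
		window = (window << 8) |
			 (byte + i < length ? pattern[byte + i] : 0);
	window <<= skip;			/* field starts at bit 63 */
	val = (uint32_t)(window >> 32);		/* keep the top 32 bits */
	val >>= shift;
	val &= (uint32_t)(((1ULL << width) - 1) << (32 - shift - width));
	return val;
}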


^ permalink raw reply	[flat|nested] 30+ messages in thread

* [PATCH v2 9/9] net/mlx5: fix flex item header length field translation
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (7 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
@ 2024-09-18 13:46   ` Viacheslav Ovsiienko
  2024-09-18 13:58     ` Dariusz Sosnowski
  2024-09-18 13:51   ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Dariusz Sosnowski
  2024-09-22 13:32   ` Raslan Darawsheh
  10 siblings, 1 reply; 30+ messages in thread
From: Viacheslav Ovsiienko @ 2024-09-18 13:46 UTC (permalink / raw)
  To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

There are hardware-imposed limitations on the header length
field description for the mask and shift combinations in the
FIELD_MODE_OFFSET mode.

The patch updates:
  - the parameter check for the FIELD_MODE_OFFSET header length
    field
  - the check whether the length field crosses a dword boundary
    in the header (see the sketch below)
  - the mask extension to the hardware-required 6-bit width
  - the adjustment of the mask left margin offset, preventing
    a dword offset
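
Below is a minimal standalone sketch of the added dword-boundary check for
the FIELD_MODE_OFFSET header length field; the helper name is illustrative,
the expression mirrors the patch:

#include <stdbool.h>
#include <stdint.h>

/*
 * A header length field of `field_size` bits starting at bit
 * `offset_base` must not cross a 32-bit word boundary. The check
 * is conservative: a field ending exactly on a word boundary is
 * also rejected, as in the patch.
 */
static bool
hdr_len_field_crosses_dword(uint32_t offset_base, uint32_t field_size)
{
	return ((offset_base ^ (offset_base + field_size)) >> 5) != 0;
}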

Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_flex.c | 120 ++++++++++++++++--------------
 1 file changed, 66 insertions(+), 54 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index bf38643a23..afed16985a 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -449,12 +449,14 @@ mlx5_flex_release_index(struct rte_eth_dev *dev,
  *
  *   shift      mask
  * ------- ---------------
- *    0     b111100  0x3C
- *    1     b111110  0x3E
- *    2     b111111  0x3F
- *    3     b011111  0x1F
- *    4     b001111  0x0F
- *    5     b000111  0x07
+ *    0     b11111100  0xFC
+ *    1     b01111110  0x7E
+ *    2     b00111111  0x3F
+ *    3     b00011111  0x1F
+ *    4     b00001111  0x0F
+ *    5     b00000111  0x07
+ *    6     b00000011  0x03
+ *    7     b00000001  0x01
  */
 static uint8_t
 mlx5_flex_hdr_len_mask(uint8_t shift,
@@ -464,8 +466,7 @@ mlx5_flex_hdr_len_mask(uint8_t shift,
 	int diff = shift - MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
 
 	base_mask = mlx5_hca_parse_graph_node_base_hdr_len_mask(attr);
-	return diff == 0 ? base_mask :
-	       diff < 0 ? (base_mask << -diff) & base_mask : base_mask >> diff;
+	return diff < 0 ? base_mask << -diff : base_mask >> diff;
 }
 
 static int
@@ -476,7 +477,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 {
 	const struct rte_flow_item_flex_field *field = &conf->next_header;
 	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
-	uint32_t len_width, mask;
 
 	if (field->field_base % CHAR_BIT)
 		return rte_flow_error_set
@@ -504,7 +504,14 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 				 "negative header length field base (FIXED)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
 		break;
-	case FIELD_MODE_OFFSET:
+	case FIELD_MODE_OFFSET: {
+		uint32_t msb, lsb;
+		int32_t shift = field->offset_shift;
+		uint32_t offset = field->offset_base;
+		uint32_t mask = field->offset_mask;
+		uint32_t wmax = attr->header_length_mask_width +
+				MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
+
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
 			return rte_flow_error_set
@@ -514,47 +521,73 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "field size is a must for offset mode");
-		if (field->field_size + field->offset_base < attr->header_length_mask_width)
+		if ((offset ^ (field->field_size + offset)) >> 5)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "field size plus offset_base is too small");
-		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
-		if (field->offset_mask == 0 ||
-		    !rte_is_power_of_2(field->offset_mask + 1))
+				 "field crosses the 32-bit word boundary");
+		/* Hardware counts in dwords, all shifts done by offset within mask */
+		if (shift < 0 || (uint32_t)shift >= wmax)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field shift exceeds limits (OFFSET)");
+		if (!mask)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "zero length field offset mask (OFFSET)");
+		msb = rte_fls_u32(mask) - 1;
+		lsb = rte_bsf32(mask);
+		if (!rte_is_power_of_2((mask >> lsb) + 1))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "invalid length field offset mask (OFFSET)");
-		len_width = rte_fls_u32(field->offset_mask);
-		if (len_width > attr->header_length_mask_width)
+				 "length field offset mask not contiguous (OFFSET)");
+		if (msb >= field->field_size)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field offset mask too wide (OFFSET)");
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
+				 "length field offset mask exceeds field size (OFFSET)");
+		if (msb >= wmax)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (OFFSET)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+				 "length field offset mask exceeds supported width (OFFSET)");
+		if (mask & ~mlx5_flex_hdr_len_mask(shift, attr))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "mask and shift combination not supported (OFFSET)");
+		msb++;
+		offset += field->field_size - msb;
+		if (msb < attr->header_length_mask_width) {
+			if (attr->header_length_mask_width - msb > offset)
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					 "field size plus offset_base is too small");
+			offset += msb;
+			/*
+			 * Here we can move to preceding dword. Hardware does
+			 * cyclic left shift so we should avoid this and stay
+			 * at current dword offset.
+			 */
+			offset = (offset & ~0x1Fu) |
+				 ((offset - attr->header_length_mask_width) & 0x1F);
+		}
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+		node->header_length_field_mask = mask;
+		node->header_length_field_shift = shift;
+		node->header_length_field_offset = offset;
 		break;
+	}
 	case FIELD_MODE_BITMASK:
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "unsupported header length field mode (BITMASK)");
-		if (attr->header_length_mask_width < field->field_size)
+		if (field->offset_shift > 15 || field->offset_shift < 0)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field width exceeds limit");
+				 "header length field shift exceeds limit (BITMASK)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (BITMASK)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+		node->header_length_field_mask = field->offset_mask;
+		node->header_length_field_shift = field->offset_shift;
+		node->header_length_field_offset = field->offset_base;
 		break;
 	default:
 		return rte_flow_error_set
@@ -567,27 +600,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 			 "header length field base exceeds limit");
 	node->header_length_base_value = field->field_base / CHAR_BIT;
-	if (field->field_mode == FIELD_MODE_OFFSET ||
-	    field->field_mode == FIELD_MODE_BITMASK) {
-		if (field->offset_shift > 15 || field->offset_shift < 0)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field shift exceeds limit");
-		node->header_length_field_shift = field->offset_shift;
-		node->header_length_field_offset = field->offset_base;
-	}
-	if (field->field_mode == FIELD_MODE_OFFSET) {
-		if (field->field_size > attr->header_length_mask_width) {
-			node->header_length_field_offset +=
-				field->field_size - attr->header_length_mask_width;
-		} else if (field->field_size < attr->header_length_mask_width) {
-			node->header_length_field_offset -=
-				attr->header_length_mask_width - field->field_size;
-			node->header_length_field_mask =
-					RTE_MIN(node->header_length_field_mask,
-						(1u << field->field_size) - 1);
-		}
-	}
 	return 0;
 }
 
-- 
2.34.1
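
For reference, a small model of the supported shift/mask computation
validated above, assuming the device reports the 6-bit base mask 0x3F and
that MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD is 2, which matches the
table in the patch:

#include <stdint.h>

static uint8_t
hdr_len_mask(uint8_t shift)
{
	const uint8_t base_mask = 0x3F;	/* 6-bit hardware base mask */
	const int diff = (int)shift - 2;	/* anchor at the dword shift */

	/* shift 0 -> 0xFC, 2 -> 0x3F, 7 -> 0x01, as in the table above */
	return diff < 0 ? base_mask << -diff : base_mask >> diff;
}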


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (8 preceding siblings ...)
  2024-09-18 13:46   ` [PATCH v2 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
@ 2024-09-18 13:51   ` Dariusz Sosnowski
  2024-09-22 13:32   ` Raslan Darawsheh
  10 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:51 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>
> Subject: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
> 
> There is a series of independent patches related to the flex item.
> There is no direct dependency between the patches besides the merging dependency
> inferred by git; the latter is the reason the patches are sent as a series. For more
> details, please see the individual patch commit messages.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> 
> Viacheslav Ovsiienko (9):
>   net/mlx5: update flex parser arc types support
>   net/mlx5: add flex item query tunnel mode routine
>   net/mlx5/hws: fix flex item support as tunnel header
>   net/mlx5: fix flex item tunnel mode handling
>   net/mlx5: fix number of supported flex parsers
>   app/testpmd: remove flex item init command leftover
>   net/mlx5: fix next protocol validation after flex item
>   net/mlx5: fix non full word sample fields in flex item
>   net/mlx5: fix flex item header length field translation
> 
>  app/test-pmd/cmdline_flow.c           |  12 --
>  drivers/net/mlx5/hws/mlx5dr_definer.c |  17 +-
>  drivers/net/mlx5/mlx5.h               |   9 +-
>  drivers/net/mlx5/mlx5_flow_dv.c       |   7 +-
>  drivers/net/mlx5/mlx5_flow_flex.c     | 215 ++++++++++++++++----------
>  drivers/net/mlx5/mlx5_flow_hw.c       |   8 +
>  6 files changed, 167 insertions(+), 101 deletions(-)
> 
> --
> 2.34.1

Series-acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 1/9] net/mlx5: update flex parser arc types support
  2024-09-18 13:46   ` [PATCH v2 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
@ 2024-09-18 13:57     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:57 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>
> Subject: [PATCH v2 1/9] net/mlx5: update flex parser arc types support
> 
> Add support for input IPv4 and for ESP output flex parser arcs.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_flex.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
> index 8a02247406..5b104d583c 100644
> --- a/drivers/net/mlx5/mlx5_flow_flex.c
> +++ b/drivers/net/mlx5/mlx5_flow_flex.c
> @@ -1111,6 +1111,8 @@ mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
>  		return MLX5_GRAPH_ARC_NODE_GENEVE;
>  	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
>  		return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
> +	case RTE_FLOW_ITEM_TYPE_ESP:
> +		return MLX5_GRAPH_ARC_NODE_IPSEC_ESP;
>  	default:
>  		return -EINVAL;
>  	}
> @@ -1148,6 +1150,22 @@ mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
>  	return rte_be_to_cpu_16(spec->hdr.dst_port);
>  }
> 
> +static int
> +mlx5_flex_arc_in_ipv4(const struct rte_flow_item *item,
> +		      struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_ipv4 *spec = item->spec;
> +	const struct rte_flow_item_ipv4 *mask = item->mask;
> +	struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 0xff };
> +
> +	if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv4))) {
> +		return rte_flow_error_set
> +			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			 "invalid ipv4 item mask, full mask is desired");
> +	}
> +	return spec->hdr.next_proto_id;
> +}
> +
>  static int
>  mlx5_flex_arc_in_ipv6(const struct rte_flow_item *item,
>  		      struct rte_flow_error *error)
> @@ -1210,6 +1228,9 @@ mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
>  		case RTE_FLOW_ITEM_TYPE_UDP:
>  			ret = mlx5_flex_arc_in_udp(rte_item, error);
>  			break;
> +		case RTE_FLOW_ITEM_TYPE_IPV4:
> +			ret = mlx5_flex_arc_in_ipv4(rte_item, error);
> +			break;
>  		case RTE_FLOW_ITEM_TYPE_IPV6:
>  			ret = mlx5_flex_arc_in_ipv6(rte_item, error);
>  			break;
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine
  2024-09-18 13:46   ` [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
@ 2024-09-18 13:57     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:57 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>
> Subject: [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine
> 
> While parsing the RTE item array, the PMD needs to know whether the flex
> item represents a tunnel header.
> The appropriate tunnel mode query API is added.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5.h           |  2 ++
>  drivers/net/mlx5/mlx5_flow_flex.c | 27 +++++++++++++++++++++++++++
>  2 files changed, 29 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 869aac032b..6d163996e4 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -2605,6 +2605,8 @@ int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
>  int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
>  					    void *flex, uint32_t byte_off,
>  					    bool is_mask, bool tunnel, uint32_t *value);
> +int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
> +			      enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
>  int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
>  			    struct rte_flow_item_flex_handle *handle,
>  			    bool acquire);
> diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
> index 5b104d583c..0c41b956b0 100644
> --- a/drivers/net/mlx5/mlx5_flow_flex.c
> +++ b/drivers/net/mlx5/mlx5_flow_flex.c
> @@ -291,6 +291,33 @@ mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
>  	return 0;
>  }
> 
> +/**
> + * Get the flex parser tunnel mode.
> + *
> + * @param[in] item
> + *   RTE Flex item.
> + * @param[in, out] tunnel_mode
> + *   Pointer to return tunnel mode.
> + *
> + * @return
> + *   0 on success, otherwise negative error code.
> + */
> +int
> +mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
> +			  enum rte_flow_item_flex_tunnel_mode *tunnel_mode)
> +{
> +	if (item && item->spec && tunnel_mode) {
> +		const struct rte_flow_item_flex *spec = item->spec;
> +		struct mlx5_flex_item *flex = (struct mlx5_flex_item *)spec->handle;
> +
> +		if (flex) {
> +			*tunnel_mode = flex->tunnel_mode;
> +			return 0;
> +		}
> +	}
> +	return -EINVAL;
> +}
> +
>  /**
>   * Translate item pattern into matcher fields according to translation
>   * array.
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski
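
For context, a short usage sketch of the new query, patterned after the HWS
translation code in patch 3/9 of this series (cd, items, and item_flags come
from that caller's context):

	enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;

	/* A flex item may be a tunnel header itself, or an inner/outer item. */
	if (mlx5_flex_get_tunnel_mode(items, &tunnel_mode) == 0 &&
	    tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
		item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
	else
		item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
					  MLX5_FLOW_ITEM_OUTER_FLEX;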


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header
  2024-09-18 13:46   ` [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
@ 2024-09-18 13:57     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:57 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header
> 
> The RTE flex item can represent the tunnel header and split the inner and
> outer layer items. HWS did not support these flex item specifics.
> 
> Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
> index 51a3f7be4b..2dfcc5eba6 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_definer.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
> @@ -3267,8 +3267,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_FLEX:
>  			ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
> -			item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
> -						  MLX5_FLOW_ITEM_OUTER_FLEX;
> +			if (ret == 0) {
> +				enum rte_flow_item_flex_tunnel_mode tunnel_mode =
> +							FLEX_TUNNEL_MODE_SINGLE;
> +
> +				ret = mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
> +				if (tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
> +					item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
> +				else
> +					item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
> +								  MLX5_FLOW_ITEM_OUTER_FLEX;
> +			}
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_MPLS:
>  			ret = mlx5dr_definer_conv_item_mpls(&cd, items, i);
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling
  2024-09-18 13:46   ` [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
@ 2024-09-18 13:57     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:57 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling
> 
> The RTE flex item can represent the tunnel header itself and split the inner
> and outer items; this should be reflected in the item flags while the PMD is
> processing the item array.
> 
> Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_hw.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
> index 50888944a5..a275154d4b 100644
> --- a/drivers/net/mlx5/mlx5_flow_hw.c
> +++ b/drivers/net/mlx5/mlx5_flow_hw.c
> @@ -558,6 +558,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
>  	uint64_t last_item = 0;
> 
>  	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> +		enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;
>  		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
>  		int item_type = items->type;
> 
> @@ -606,6 +607,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
>  		case RTE_FLOW_ITEM_TYPE_COMPARE:
>  			last_item = MLX5_FLOW_ITEM_COMPARE;
>  			break;
> +		case RTE_FLOW_ITEM_TYPE_FLEX:
> +			mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
> +			last_item = tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL ?
> +					MLX5_FLOW_ITEM_FLEX_TUNNEL :
> +					tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
> +						 MLX5_FLOW_ITEM_OUTER_FLEX;
> +			break;
>  		default:
>  			break;
>  		}
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers
  2024-09-18 13:46   ` [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
@ 2024-09-18 13:57     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:57 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers
> 
> The hardware supports up to 8 flex parser configurations.
> Some of them can be utilized internally by the firmware, depending on the
> configured profile ("FLEX_PARSER_PROFILE_ENABLE" in the NV settings).
> The firmware does not report in its capabilities how many flex parser
> configurations remain available (this is a device-wide resource that can be
> allocated at runtime by other agents - kernel, DPDK applications, etc.), and
> once no more parsers are available at parse object creation time, the
> firmware just returns an error.
> 
> Fixes: db25cadc0887 ("net/mlx5: add flex item operations")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index 6d163996e4..b1423b6868 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -69,7 +69,7 @@
>  #define MLX5_ROOT_TBL_MODIFY_NUM		16
> 
>  /* Maximal number of flex items created on the port.*/
> -#define MLX5_PORT_FLEX_ITEM_NUM			4
> +#define MLX5_PORT_FLEX_ITEM_NUM			8
> 
>  /* Maximal number of field/field parts to map into sample registers .*/
>  #define MLX5_FLEX_ITEM_MAPPING_NUM		32
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 6/9] app/testpmd: remove flex item init command leftover
  2024-09-18 13:46   ` [PATCH v2 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
@ 2024-09-18 13:58     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:58 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>
> Subject: [PATCH v2 6/9] app/testpmd: remove flex item init command leftover
> 
> There was a leftover "flow flex init" command, used for debug purposes,
> which had no useful functionality in the production code.
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  app/test-pmd/cmdline_flow.c | 12 ------------
>  1 file changed, 12 deletions(-)
> 
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index d04280eb3e..858f4077bd 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -106,7 +106,6 @@ enum index {
>  	HASH,
> 
>  	/* Flex arguments */
> -	FLEX_ITEM_INIT,
>  	FLEX_ITEM_CREATE,
>  	FLEX_ITEM_DESTROY,
> 
> @@ -1317,7 +1316,6 @@ struct parse_action_priv {
>  	})
> 
>  static const enum index next_flex_item[] = {
> -	FLEX_ITEM_INIT,
>  	FLEX_ITEM_CREATE,
>  	FLEX_ITEM_DESTROY,
>  	ZERO,
> @@ -4171,15 +4169,6 @@ static const struct token token_list[] = {
>  		.next = NEXT(next_flex_item),
>  		.call = parse_flex,
>  	},
> -	[FLEX_ITEM_INIT] = {
> -		.name = "init",
> -		.help = "flex item init",
> -		.args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token),
> -			     ARGS_ENTRY(struct buffer, port)),
> -		.next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN),
> -			     NEXT_ENTRY(COMMON_PORT_ID)),
> -		.call = parse_flex
> -	},
>  	[FLEX_ITEM_CREATE] = {
>  		.name = "create",
>  		.help = "flex item create",
> @@ -11431,7 +11420,6 @@ parse_flex(struct context *ctx, const struct token *token,
>  		switch (ctx->curr) {
>  		default:
>  			break;
> -		case FLEX_ITEM_INIT:
>  		case FLEX_ITEM_CREATE:
>  		case FLEX_ITEM_DESTROY:
>  			out->command = ctx->curr;
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item
  2024-09-18 13:46   ` [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
@ 2024-09-18 13:58     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:58 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item
> 
> On flow validation some items may check the preceding protocols.
> In the case of a flex item the next protocol is opaque (or there can be
> multiple ones), so we should set a neutral value and allow successful
> validation, for example, for the combination of flex and following ESP items.
> 
> Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_dv.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index a51d4dd1a4..b18bb430d7 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -8196,6 +8196,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
>  							 tunnel != 0, error);
>  			if (ret < 0)
>  				return ret;
> +			/* Reset for next proto, it is unknown. */
> +			next_protocol = 0xff;
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_METER_COLOR:
>  			ret = flow_dv_validate_item_meter_color(dev, items,
> --
> 2.34.1

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 8/9] net/mlx5: fix non full word sample fields in flex item
  2024-09-18 13:46   ` [PATCH v2 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
@ 2024-09-18 13:58     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:58 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 8/9] net/mlx5: fix non full word sample fields in flex item
> 
> If the sample field in a flex item did not cover an entire 32-bit word (the
> width was not a full 32 bits) or was not aligned on a byte boundary, the
> match on this sample in flows happened to be ignored or wrongly missed.
> The field mask "def" was built in the wrong endianness, and non-byte-aligned
> shifts were wrongly performed for the pattern masks and values.
> 
> Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/hws/mlx5dr_definer.c |  4 +--
>  drivers/net/mlx5/mlx5.h               |  5 ++-
>  drivers/net/mlx5/mlx5_flow_dv.c       |  5 ++-
>  drivers/net/mlx5/mlx5_flow_flex.c     | 47 +++++++++++++--------------
>  4 files changed, 29 insertions(+), 32 deletions(-)
> 
> [diff snipped; the same patch body appears in full earlier in the thread]

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH v2 9/9] net/mlx5: fix flex item header length field translation
  2024-09-18 13:46   ` [PATCH v2 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
@ 2024-09-18 13:58     ` Dariusz Sosnowski
  0 siblings, 0 replies; 30+ messages in thread
From: Dariusz Sosnowski @ 2024-09-18 13:58 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable



> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, September 18, 2024 15:46
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; stable@dpdk.org
> Subject: [PATCH v2 9/9] net/mlx5: fix flex item header length field translation
> 
> There are hardware-imposed limitations on the header length field
> description for the mask and shift combinations in the FIELD_MODE_OFFSET
> mode.
> 
> The patch updates:
>   - the parameter check for the FIELD_MODE_OFFSET header length
>     field
>   - the check whether the length field crosses a dword boundary
>     in the header
>   - the mask extension to the hardware-required 6-bit width
>   - the adjustment of the mask left margin offset, preventing
>     a dword offset
> 
> Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_flex.c | 120 ++++++++++++++++--------------
>  1 file changed, 66 insertions(+), 54 deletions(-)
> 
> [diff snipped; the patch is quoted in full earlier in the thread]

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Resending the Ack for each patch separately, because patchwork assigned my Ack for the series to v1, not v2.

Best regards,
Dariusz Sosnowski


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item
  2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
                     ` (9 preceding siblings ...)
  2024-09-18 13:51   ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Dariusz Sosnowski
@ 2024-09-22 13:32   ` Raslan Darawsheh
  10 siblings, 0 replies; 30+ messages in thread
From: Raslan Darawsheh @ 2024-09-22 13:32 UTC (permalink / raw)
  To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Ori Kam, Dariusz Sosnowski


Hi,

From: Slava Ovsiienko <viacheslavo@nvidia.com>
Sent: Wednesday, September 18, 2024 4:46 PM
To: dev@dpdk.org
Cc: Matan Azrad; Raslan Darawsheh; Ori Kam; Dariusz Sosnowski
Subject: [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item

There is a series of independent patches related to the flex item.
There is no direct dependency between the patches besides the merging
dependency inferred by git; the latter is the reason the patches are
sent as a series. For more details, please see the individual patch
commit messages.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Viacheslav Ovsiienko (9):
  net/mlx5: update flex parser arc types support
  net/mlx5: add flex item query tunnel mode routine
  net/mlx5/hws: fix flex item support as tunnel header
  net/mlx5: fix flex item tunnel mode handling
  net/mlx5: fix number of supported flex parsers
  app/testpmd: remove flex item init command leftover
  net/mlx5: fix next protocol validation after flex item
  net/mlx5: fix non full word sample fields in flex item
  net/mlx5: fix flex item header length field translation

 app/test-pmd/cmdline_flow.c           |  12 --
 drivers/net/mlx5/hws/mlx5dr_definer.c |  17 +-
 drivers/net/mlx5/mlx5.h               |   9 +-
 drivers/net/mlx5/mlx5_flow_dv.c       |   7 +-
 drivers/net/mlx5/mlx5_flow_flex.c     | 215 ++++++++++++++++----------
 drivers/net/mlx5/mlx5_flow_hw.c       |   8 +
 6 files changed, 167 insertions(+), 101 deletions(-)

--
2.34.1

Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2024-09-22 13:32 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-09-11 16:04 [PATCH 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
2024-09-11 16:04 ` [PATCH 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
2024-09-18 13:46 ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Viacheslav Ovsiienko
2024-09-18 13:46   ` [PATCH v2 1/9] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
2024-09-18 13:57     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 2/9] net/mlx5: add flex item query tunnel mode routine Viacheslav Ovsiienko
2024-09-18 13:57     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 3/9] net/mlx5/hws: fix flex item support as tunnel header Viacheslav Ovsiienko
2024-09-18 13:57     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 4/9] net/mlx5: fix flex item tunnel mode handling Viacheslav Ovsiienko
2024-09-18 13:57     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 5/9] net/mlx5: fix number of supported flex parsers Viacheslav Ovsiienko
2024-09-18 13:57     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 6/9] app/testpmd: remove flex item init command leftover Viacheslav Ovsiienko
2024-09-18 13:58     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 7/9] net/mlx5: fix next protocol validation after flex item Viacheslav Ovsiienko
2024-09-18 13:58     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 8/9] net/mlx5: fix non full word sample fields in " Viacheslav Ovsiienko
2024-09-18 13:58     ` Dariusz Sosnowski
2024-09-18 13:46   ` [PATCH v2 9/9] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
2024-09-18 13:58     ` Dariusz Sosnowski
2024-09-18 13:51   ` [PATCH v2 0/9] net/mlx5: cumulative fix series for flex item Dariusz Sosnowski
2024-09-22 13:32   ` Raslan Darawsheh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).