patches for DPDK stable branches
* [PATCH 22.11 1/3] net/mlx5: update flex parser arc types support
@ 2024-10-31 12:44 Viacheslav Ovsiienko
  2024-10-31 12:44 ` [PATCH 22.11 2/3] net/mlx5: fix non full word sample fields in flex item Viacheslav Ovsiienko
  2024-10-31 12:44 ` [PATCH 22.11 3/3] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
  0 siblings, 2 replies; 4+ messages in thread
From: Viacheslav Ovsiienko @ 2024-10-31 12:44 UTC (permalink / raw)
  To: stable; +Cc: bluca, Dariusz Sosnowski

[ upstream commit 6dfb83f13f7a6d259e4ecd3d53d40b9ed87e2fe1 ]

Add support for the IPv4 input and ESP output flex parser arcs.

Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_flex.c | 40 +++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index fb08910ddb..b63441b199 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -1006,6 +1006,8 @@ mlx5_flex_arc_type(enum rte_flow_item_type type, int in)
 		return MLX5_GRAPH_ARC_NODE_GENEVE;
 	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
 		return MLX5_GRAPH_ARC_NODE_VXLAN_GPE;
+	case RTE_FLOW_ITEM_TYPE_ESP:
+		return MLX5_GRAPH_ARC_NODE_IPSEC_ESP;
 	default:
 		return -EINVAL;
 	}
@@ -1043,6 +1045,38 @@ mlx5_flex_arc_in_udp(const struct rte_flow_item *item,
 	return rte_be_to_cpu_16(spec->hdr.dst_port);
 }
 
+static int
+mlx5_flex_arc_in_ipv4(const struct rte_flow_item *item,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	struct rte_flow_item_ipv4 ip = { .hdr.next_proto_id = 0xff };
+
+	if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv4))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid ipv4 item mask, full mask is desired");
+	}
+	return spec->hdr.next_proto_id;
+}
+
+static int
+mlx5_flex_arc_in_ipv6(const struct rte_flow_item *item,
+		      struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec = item->spec;
+	const struct rte_flow_item_ipv6 *mask = item->mask;
+	struct rte_flow_item_ipv6 ip = { .hdr.proto = 0xff };
+
+	if (memcmp(mask, &ip, sizeof(struct rte_flow_item_ipv6))) {
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, item,
+			 "invalid ipv6 item mask, full mask is desired");
+	}
+	return spec->hdr.proto;
+}
+
 static int
 mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
 			   const struct rte_flow_item_flex_conf *conf,
@@ -1089,6 +1123,12 @@ mlx5_flex_translate_arc_in(struct mlx5_hca_flex_attr *attr,
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = mlx5_flex_arc_in_udp(rte_item, error);
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = mlx5_flex_arc_in_ipv4(rte_item, error);
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			ret = mlx5_flex_arc_in_ipv6(rte_item, error);
+			break;
 		default:
 			MLX5_ASSERT(false);
 			return rte_flow_error_set
-- 
2.34.1
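
To illustrate the new check: mlx5_flex_arc_in_ipv4() accepts an IPv4
input link item only when its mask selects exactly the full
next-protocol field. A minimal sketch of a passing input link item,
with illustrative values that are not part of the patch:

	#include <rte_flow.h>

	/* IPv4 input link for a flex item: the mask must cover the
	 * whole next_proto_id field and leave every other field zero.
	 */
	static const struct rte_flow_item_ipv4 link_spec = {
		.hdr.next_proto_id = 253,	/* example protocol number */
	};
	static const struct rte_flow_item_ipv4 link_mask = {
		.hdr.next_proto_id = 0xff,	/* full mask, as required */
	};
	static const struct rte_flow_item link_item = {
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &link_spec,
		.mask = &link_mask,
	};

Any other non-zero byte in the mask makes the memcmp() against the
reference mask fail, and the item is rejected with EINVAL.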



* [PATCH 22.11 2/3] net/mlx5: fix non full word sample fields in flex item
  2024-10-31 12:44 [PATCH 22.11 1/3] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
@ 2024-10-31 12:44 ` Viacheslav Ovsiienko
  2024-10-31 12:44 ` [PATCH 22.11 3/3] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
  1 sibling, 0 replies; 4+ messages in thread
From: Viacheslav Ovsiienko @ 2024-10-31 12:44 UTC (permalink / raw)
  To: stable; +Cc: bluca, Dariusz Sosnowski

[ upstream commit 97e19f0762e5235d6914845a59823d4ea36925bb ]

If the sample field in a flex item did not cover the entire
32-bit word (the field width was less than 32 bits) or was not
aligned on a byte boundary, the match on this sample in flows
could be ignored or wrongly missed. The field mask "def" was
built in the wrong endianness, and non-byte-aligned shifts were
performed incorrectly for the pattern masks and values.

Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_flex.c | 32 ++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index b63441b199..e4321941a6 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -118,28 +118,32 @@ mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
 		       uint32_t pos, uint32_t width, uint32_t shift)
 {
 	const uint8_t *ptr = item->pattern + pos / CHAR_BIT;
-	uint32_t val, vbits;
+	uint32_t val, vbits, skip = pos % CHAR_BIT;
 
 	/* Proceed the bitfield start byte. */
 	MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT && width);
 	MLX5_ASSERT(width + shift <= sizeof(uint32_t) * CHAR_BIT);
 	if (item->length <= pos / CHAR_BIT)
 		return 0;
-	val = *ptr++ >> (pos % CHAR_BIT);
+	/* Bits are enumerated in byte in network order: 01234567 */
+	val = *ptr++;
 	vbits = CHAR_BIT - pos % CHAR_BIT;
-	pos = (pos + vbits) / CHAR_BIT;
+	pos = RTE_ALIGN_CEIL(pos, CHAR_BIT) / CHAR_BIT;
 	vbits = RTE_MIN(vbits, width);
-	val &= RTE_BIT32(vbits) - 1;
+	/* Load bytes to cover the field width, checking pattern boundary */
 	while (vbits < width && pos < item->length) {
 		uint32_t part = RTE_MIN(width - vbits, (uint32_t)CHAR_BIT);
 		uint32_t tmp = *ptr++;
 
-		pos++;
-		tmp &= RTE_BIT32(part) - 1;
-		val |= tmp << vbits;
+		val |= tmp << RTE_ALIGN_CEIL(vbits, CHAR_BIT);
 		vbits += part;
+		pos++;
 	}
-	return rte_bswap32(val <<= shift);
+	val = rte_cpu_to_be_32(val);
+	val <<= skip;
+	val >>= shift;
+	val &= (RTE_BIT64(width) - 1) << (sizeof(uint32_t) * CHAR_BIT - shift - width);
+	return val;
 }
 
 #define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
@@ -235,19 +239,21 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 	mask = item->mask;
 	tp = (struct mlx5_flex_item *)spec->handle;
 	MLX5_ASSERT(mlx5_flex_index(dev->data->dev_private, tp) >= 0);
-	for (i = 0; i < tp->mapnum; i++) {
+	for (i = 0; i < tp->mapnum && pos < (spec->length * CHAR_BIT); i++) {
 		struct mlx5_flex_pattern_field *map = tp->map + i;
 		uint32_t id = map->reg_id;
-		uint32_t def = (RTE_BIT64(map->width) - 1) << map->shift;
-		uint32_t val, msk;
+		uint32_t val, msk, def;
 
 		/* Skip placeholders for DUMMY fields. */
 		if (id == MLX5_INVALID_SAMPLE_REG_ID) {
 			pos += map->width;
 			continue;
 		}
+		def = (uint32_t)(RTE_BIT64(map->width) - 1);
+		def <<= (sizeof(uint32_t) * CHAR_BIT - map->shift - map->width);
 		val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
-		msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
+		msk = pos < (mask->length * CHAR_BIT) ?
+		      mlx5_flex_get_bitfield(mask, pos, map->width, map->shift) : def;
 		MLX5_ASSERT(map->width);
 		MLX5_ASSERT(id < tp->devx_fp->num_samples);
 		if (tp->tunnel_mode == FLEX_TUNNEL_MODE_MULTI && is_inner) {
@@ -258,7 +264,7 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
 			id += num_samples;
 		}
 		mlx5_flex_set_match_sample(misc4_m, misc4_v,
-					   def, msk & def, val & msk & def,
+					   def, msk, val & msk,
 					   tp->devx_fp->sample_ids[id], id);
 		pos += map->width;
 	}
-- 
2.34.1
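
The key behavioral change above is the bit numbering: bits are now
enumerated in network order, i.e. bit 0 is the most significant bit
of the first pattern byte. A standalone reference loop showing that
extraction (a simplified sketch, not driver code: it returns the
field right-aligned, whereas mlx5_flex_get_bitfield() also positions
the result inside a big-endian 32-bit word according to the map
shift):

	#include <stdint.h>
	#include <stdio.h>

	/* Extract `width` bits starting at bit `pos`; bits are numbered
	 * in network order within each byte (bit 0 = MSB of byte 0).
	 */
	static uint32_t
	get_bits_network_order(const uint8_t *pattern, uint32_t len,
			       uint32_t pos, uint32_t width)
	{
		uint32_t val = 0, i;

		for (i = 0; i < width; i++) {
			uint32_t byte = (pos + i) / 8;
			uint32_t bit = (pos + i) % 8;

			val <<= 1;
			if (byte < len)	/* pattern boundary check */
				val |= (pattern[byte] >> (7 - bit)) & 1;
		}
		return val;
	}

	int main(void)
	{
		/* A 4-bit field at bit position 6 crosses a byte boundary:
		 * 0x01 0x80 = 00000001 10000000, so bits 6..9 are 0110.
		 */
		const uint8_t pattern[] = { 0x01, 0x80 };

		printf("0x%x\n",
		       get_bits_network_order(pattern, 2, 6, 4)); /* 0x6 */
		return 0;
	}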



* [PATCH 22.11 3/3] net/mlx5: fix flex item header length field translation
  2024-10-31 12:44 [PATCH 22.11 1/3] net/mlx5: update flex parser arc types support Viacheslav Ovsiienko
  2024-10-31 12:44 ` [PATCH 22.11 2/3] net/mlx5: fix non full word sample fields in flex item Viacheslav Ovsiienko
@ 2024-10-31 12:44 ` Viacheslav Ovsiienko
  2024-10-31 14:34   ` Luca Boccassi
  1 sibling, 1 reply; 4+ messages in thread
From: Viacheslav Ovsiienko @ 2024-10-31 12:44 UTC (permalink / raw)
  To: stable; +Cc: bluca, Dariusz Sosnowski

[ upstream commit net/mlx5: fix flex item header length field translation ]

There are hardware-imposed limitations on the header length
field description for the mask and shift combinations in
FIELD_MODE_OFFSET mode.

The patch updates:
  - the parameter check for the header length field in
    FIELD_MODE_OFFSET mode
  - the check that the length field does not cross a dword
    boundary in the header
  - the mask extension to the hardware-required width of 6 bits
  - the adjustment of the mask left margin offset, preventing
    a dword offset

Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_flex.c | 112 +++++++++++++++++++-----------
 1 file changed, 72 insertions(+), 40 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index e4321941a6..32ab45b7e0 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -344,12 +344,14 @@ mlx5_flex_release_index(struct rte_eth_dev *dev,
  *
  *   shift      mask
  * ------- ---------------
- *    0     b111100  0x3C
- *    1     b111110  0x3E
- *    2     b111111  0x3F
- *    3     b011111  0x1F
- *    4     b001111  0x0F
- *    5     b000111  0x07
+ *    0     b11111100  0xFC
+ *    1     b01111110  0x7E
+ *    2     b00111111  0x3F
+ *    3     b00011111  0x1F
+ *    4     b00001111  0x0F
+ *    5     b00000111  0x07
+ *    6     b00000011  0x03
+ *    7     b00000001  0x01
  */
 static uint8_t
 mlx5_flex_hdr_len_mask(uint8_t shift,
@@ -359,8 +361,7 @@ mlx5_flex_hdr_len_mask(uint8_t shift,
 	int diff = shift - MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
 
 	base_mask = mlx5_hca_parse_graph_node_base_hdr_len_mask(attr);
-	return diff == 0 ? base_mask :
-	       diff < 0 ? (base_mask << -diff) & base_mask : base_mask >> diff;
+	return diff < 0 ? base_mask << -diff : base_mask >> diff;
 }
 
 static int
@@ -371,7 +372,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 {
 	const struct rte_flow_item_flex_field *field = &conf->next_header;
 	struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
-	uint32_t len_width, mask;
 
 	if (field->field_base % CHAR_BIT)
 		return rte_flow_error_set
@@ -399,49 +399,90 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 				 "negative header length field base (FIXED)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
 		break;
-	case FIELD_MODE_OFFSET:
+	case FIELD_MODE_OFFSET: {
+		uint32_t msb, lsb;
+		int32_t shift = field->offset_shift;
+		uint32_t offset = field->offset_base;
+		uint32_t mask = field->offset_mask;
+		uint32_t wmax = attr->header_length_mask_width +
+				MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
+
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "unsupported header length field mode (OFFSET)");
-		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
-		if (field->offset_mask == 0 ||
-		    !rte_is_power_of_2(field->offset_mask + 1))
+		if (!field->field_size)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "field size is a must for offset mode");
+		if ((offset ^ (field->field_size + offset)) >> 5)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "field crosses the 32-bit word boundary");
+		/* Hardware counts in dwords, all shifts done by offset within mask */
+		if (shift < 0 || (uint32_t)shift >= wmax)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "header length field shift exceeds limits (OFFSET)");
+		if (!mask)
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "zero length field offset mask (OFFSET)");
+		msb = rte_fls_u32(mask) - 1;
+		lsb = rte_bsf32(mask);
+		if (!rte_is_power_of_2((mask >> lsb) + 1))
+			return rte_flow_error_set
+				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+				 "length field offset mask not contiguous (OFFSET)");
+		if (msb >= field->field_size)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "invalid length field offset mask (OFFSET)");
-		len_width = rte_fls_u32(field->offset_mask);
-		if (len_width > attr->header_length_mask_width)
+				 "length field offset mask exceeds field size (OFFSET)");
+		if (msb >= wmax)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field offset mask too wide (OFFSET)");
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
+				 "length field offset mask exceeds supported width (OFFSET)");
+		if (mask & ~mlx5_flex_hdr_len_mask(shift, attr))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (OFFSET)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+				 "mask and shift combination not supported (OFFSET)");
+		msb++;
+		offset += field->field_size - msb;
+		if (msb < attr->header_length_mask_width) {
+			if (attr->header_length_mask_width - msb > offset)
+				return rte_flow_error_set
+					(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+					 "field size plus offset_base is too small");
+			offset += msb;
+			/*
+			 * Here we can move to preceding dword. Hardware does
+			 * cyclic left shift so we should avoid this and stay
+			 * at current dword offset.
+			 */
+			offset = (offset & ~0x1Fu) |
+				 ((offset - attr->header_length_mask_width) & 0x1F);
+		}
+		node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+		node->header_length_field_mask = mask;
+		node->header_length_field_shift = shift;
+		node->header_length_field_offset = offset;
 		break;
+	}
 	case FIELD_MODE_BITMASK:
 		if (!(attr->header_length_mode &
 		    RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 				 "unsupported header length field mode (BITMASK)");
-		if (attr->header_length_mask_width < field->field_size)
+		if (field->offset_shift > 15 || field->offset_shift < 0)
 			return rte_flow_error_set
 				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field width exceeds limit");
+				 "header length field shift exceeds limit (BITMASK)");
 		node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
-		mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
-		if (mask < field->offset_mask)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "length field shift too big (BITMASK)");
-		node->header_length_field_mask = RTE_MIN(mask,
-							 field->offset_mask);
+		node->header_length_field_mask = field->offset_mask;
+		node->header_length_field_shift = field->offset_shift;
+		node->header_length_field_offset = field->offset_base;
 		break;
 	default:
 		return rte_flow_error_set
@@ -454,15 +495,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
 			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 			 "header length field base exceeds limit");
 	node->header_length_base_value = field->field_base / CHAR_BIT;
-	if (field->field_mode == FIELD_MODE_OFFSET ||
-	    field->field_mode == FIELD_MODE_BITMASK) {
-		if (field->offset_shift > 15 || field->offset_shift < 0)
-			return rte_flow_error_set
-				(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				 "header length field shift exceeds limit");
-		node->header_length_field_shift	= field->offset_shift;
-		node->header_length_field_offset = field->offset_base;
-	}
 	return 0;
 }
 
-- 
2.34.1
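
Among the new OFFSET-mode checks above, the contiguity requirement on
offset_mask is the least obvious: the mask must be a single run of
consecutive set bits. A small sketch of that test, using POSIX ffs()
where the driver uses rte_bsf32()/rte_fls_u32() (the helper name is
illustrative):

	#include <stdbool.h>
	#include <stdint.h>
	#include <strings.h>	/* ffs() */

	/* True when `mask` is one contiguous run of set bits -- the
	 * property enforced on offset_mask in FIELD_MODE_OFFSET.
	 */
	static bool
	offset_mask_is_contiguous(uint32_t mask)
	{
		if (mask == 0)
			return false;	/* zero mask is rejected separately */
		mask >>= ffs(mask) - 1;	/* drop trailing zero bits */
		/* A contiguous mask plus one is a power of two. */
		return (mask & (mask + 1)) == 0;
	}

For example, 0x0ff0 passes, while 0x0f0f is rejected as a
non-contiguous length field offset mask.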



* Re: [PATCH 22.11 3/3] net/mlx5: fix flex item header length field translation
  2024-10-31 12:44 ` [PATCH 22.11 3/3] net/mlx5: fix flex item header length field translation Viacheslav Ovsiienko
@ 2024-10-31 14:34   ` Luca Boccassi
  0 siblings, 0 replies; 4+ messages in thread
From: Luca Boccassi @ 2024-10-31 14:34 UTC (permalink / raw)
  To: Viacheslav Ovsiienko; +Cc: stable, Dariusz Sosnowski

On Thu, 31 Oct 2024 at 12:45, Viacheslav Ovsiienko
<viacheslavo@nvidia.com> wrote:
>
> [ upstream commit net/mlx5: fix flex item header length field translation ]
>
> There are hardware-imposed limitations on the header length
> field description for the mask and shift combinations in
> FIELD_MODE_OFFSET mode.
>
> The patch updates:
>   - the parameter check for the header length field in
>     FIELD_MODE_OFFSET mode
>   - the check that the length field does not cross a dword
>     boundary in the header
>   - the mask extension to the hardware-required width of 6 bits
>   - the adjustment of the mask left margin offset, preventing
>     a dword offset
>
> Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
> Cc: stable@dpdk.org
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_flow_flex.c | 112 +++++++++++++++++++-----------
>  1 file changed, 72 insertions(+), 40 deletions(-)

Thanks, all applied

