DPDK patches and discussions
* [dpdk-dev] [PATCH 0/5] net/mlx5: support match on L3 fragmented packets
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

Patch series [1] adds to ethdev and testpmd support for matching on
fragmented/non-fragmented packets.
Following [1], this patch series adds to the MLX5 PMD support for
matching on IPv4 fragmented packets, IPv6 fragmented packets, and the
IPv6 fragment extension header item.

[1] https://mails.dpdk.org/archives/dev/2020-October/186177.html

Dekel Peled (5):
  net/mlx5: remove handling of ICMP fragmented packets
  net/mlx5: support match on IPv4 fragment packets
  net/mlx5: support match on IPv6 fragment packets
  net/mlx5: support match on IPv6 fragment ext. item
  net/mlx5: enforce limitation on IPv6 next proto

 doc/guides/nics/mlx5.rst               |   7 +
 doc/guides/rel_notes/release_20_11.rst |   5 +
 drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
 drivers/net/mlx5/mlx5_flow.h           |  14 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 382 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 6 files changed, 420 insertions(+), 59 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH 1/5] net/mlx5: remove handling of ICMP fragmented packets
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

Commit [1] forced setting a match on the 'frag' bit, with mask 1 and
value 0.
Subsequent patches in this series add support for matching on fragmented
and non-fragmented packets on L3 items, so this setting becomes
redundant.

This patch removes the changes done in [1].

[1] commit 85407db9f60d ("net/mlx5: fix matching for ICMP fragments")

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2bbfcea..c0fb311 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7345,12 +7345,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp6_m)
 		icmp6_m = &rte_flow_item_icmp6_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv6 ICMPv6 packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type,
 		 icmp6_v->type & icmp6_m->type);
@@ -7400,12 +7394,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp_m)
 		icmp_m = &rte_flow_item_icmp_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv4 ICMP packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type,
 		 icmp_m->hdr.icmp_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type,
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/5] net/mlx5: support match on IPv4 fragment packets
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch adds to the MLX5 PMD support for matching on IPv4 fragmented
and non-fragmented packets, using the IPv4 header fragment_offset field.
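
For illustration only (not part of the patch), a minimal rte_flow pattern
using the accepted spec/last range to match any fragmented IPv4 packet
could look like the sketch below. The values mirror the range the new
validation accepts (spec 0x0001, last 0x3fff, full 0x3fff mask); variable
names are hypothetical:

  #include <rte_flow.h>
  #include <rte_ip.h>

  /* Match IPv4 packets with MF set and/or a non-zero fragment offset. */
  static const struct rte_flow_item_ipv4 ipv4_spec = {
          .hdr.fragment_offset = RTE_BE16(1),
  };
  static const struct rte_flow_item_ipv4 ipv4_last = {
          .hdr.fragment_offset =
                  RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG),
  };
  static const struct rte_flow_item_ipv4 ipv4_mask = {
          .hdr.fragment_offset =
                  RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG),
  };
  static const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &ipv4_spec, .last = &ipv4_last, .mask = &ipv4_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };

Matching only non-fragmented packets would instead use spec 0 with the
same full mask and no 'last'.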

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst |   5 ++
 drivers/net/mlx5/mlx5_flow.c           |  48 ++++++----
 drivers/net/mlx5/mlx5_flow.h           |  10 +++
 drivers/net/mlx5/mlx5_flow_dv.c        | 156 ++++++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 5 files changed, 183 insertions(+), 45 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index a01552c..792d547 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -148,6 +148,11 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4 packets.
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 0a54818..38cfd0f 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -800,6 +800,8 @@ struct mlx5_flow_tunnel_info {
  *   Bit-masks covering supported fields by the NIC to compare with user mask.
  * @param[in] size
  *   Bit-masks size in bytes.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -811,6 +813,7 @@ struct mlx5_flow_tunnel_info {
 			  const uint8_t *mask,
 			  const uint8_t *nic_mask,
 			  unsigned int size,
+			  bool range_accepted,
 			  struct rte_flow_error *error)
 {
 	unsigned int i;
@@ -828,7 +831,7 @@ struct mlx5_flow_tunnel_info {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "mask/last without a spec is not"
 					  " supported");
-	if (item->spec && item->last) {
+	if (item->spec && item->last && !range_accepted) {
 		uint8_t spec[size];
 		uint8_t last[size];
 		unsigned int i;
@@ -1603,7 +1606,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_icmp6_mask,
-		 sizeof(struct rte_flow_item_icmp6), error);
+		 sizeof(struct rte_flow_item_icmp6),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1661,7 +1665,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_icmp), error);
+		 sizeof(struct rte_flow_item_icmp),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1716,7 +1721,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_eth),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -1770,7 +1775,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_vlan),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
@@ -1822,6 +1827,8 @@ struct mlx5_flow_tunnel_info {
  * @param[in] acc_mask
  *   Acceptable mask, if NULL default internal default mask
  *   will be used to check whether item fields are supported.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -1834,6 +1841,7 @@ struct mlx5_flow_tunnel_info {
 			     uint64_t last_item,
 			     uint16_t ether_type,
 			     const struct rte_flow_item_ipv4 *acc_mask,
+			     bool range_accepted,
 			     struct rte_flow_error *error)
 {
 	const struct rte_flow_item_ipv4 *mask = item->mask;
@@ -1904,7 +1912,7 @@ struct mlx5_flow_tunnel_info {
 					acc_mask ? (const uint8_t *)acc_mask
 						 : (const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_ipv4),
-					error);
+					range_accepted, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2003,7 +2011,7 @@ struct mlx5_flow_tunnel_info {
 					acc_mask ? (const uint8_t *)acc_mask
 						 : (const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_ipv6),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2058,7 +2066,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_udp_mask,
-		 sizeof(struct rte_flow_item_udp), error);
+		 sizeof(struct rte_flow_item_udp), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2113,7 +2122,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)flow_mask,
-		 sizeof(struct rte_flow_item_tcp), error);
+		 sizeof(struct rte_flow_item_tcp), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2167,7 +2177,7 @@ struct mlx5_flow_tunnel_info {
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_vxlan_mask,
 		 sizeof(struct rte_flow_item_vxlan),
-		 error);
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (spec) {
@@ -2238,7 +2248,7 @@ struct mlx5_flow_tunnel_info {
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_vxlan_gpe_mask,
 		 sizeof(struct rte_flow_item_vxlan_gpe),
-		 error);
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (spec) {
@@ -2312,7 +2322,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&gre_key_default_mask,
-		 sizeof(rte_be32_t), error);
+		 sizeof(rte_be32_t), MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -2364,7 +2374,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_gre), error);
+		 sizeof(struct rte_flow_item_gre), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 #ifndef HAVE_MLX5DV_DR
@@ -2439,7 +2450,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 				  (item, (const uint8_t *)mask,
 				   (const uint8_t *)&nic_mask,
-				   sizeof(struct rte_flow_item_geneve), error);
+				   sizeof(struct rte_flow_item_geneve),
+				   MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (spec) {
@@ -2522,7 +2534,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_mpls_mask,
-		 sizeof(struct rte_flow_item_mpls), error);
+		 sizeof(struct rte_flow_item_mpls),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2577,7 +2590,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_nvgre_mask,
-		 sizeof(struct rte_flow_item_nvgre), error);
+		 sizeof(struct rte_flow_item_nvgre),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2671,7 +2685,7 @@ struct mlx5_flow_tunnel_info {
 					 acc_mask ? (const uint8_t *)acc_mask
 						  : (const uint8_t *)&nic_mask,
 					 sizeof(struct rte_flow_item_ecpri),
-					 error);
+					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 }
 
 /* Allocate unique ID for the split Q/RSS subflows. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 279daf2..1e30c93 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -330,6 +330,14 @@ enum mlx5_feature_name {
 #define MLX5_ENCAPSULATION_DECISION_SIZE (sizeof(struct rte_flow_item_eth) + \
 					  sizeof(struct rte_flow_item_ipv4))
 
+/* IPv4 fragment_offset field contains relevant data in bits 2 to 15. */
+#define MLX5_IPV4_FRAG_OFFSET_MASK \
+		(RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG)
+
+/* Specific item's fields can accept a range of values (using spec and last). */
+#define MLX5_ITEM_RANGE_NOT_ACCEPTED	false
+#define MLX5_ITEM_RANGE_ACCEPTED	true
+
 /* Software header modify action numbers of a flow. */
 #define MLX5_ACT_NUM_MDF_IPV4		1
 #define MLX5_ACT_NUM_MDF_IPV6		4
@@ -985,6 +993,7 @@ int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 			      const uint8_t *mask,
 			      const uint8_t *nic_mask,
 			      unsigned int size,
+			      bool range_accepted,
 			      struct rte_flow_error *error);
 int mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 				uint64_t item_flags,
@@ -1002,6 +1011,7 @@ int mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
 				 uint64_t last_item,
 				 uint16_t ether_type,
 				 const struct rte_flow_item_ipv4 *acc_mask,
+				 bool range_accepted,
 				 struct rte_flow_error *error);
 int mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
 				 uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c0fb311..08e6f74 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1418,7 +1418,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_mark),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1494,7 +1494,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_meta),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -1547,7 +1547,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_tag),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (mask->index != 0xff)
@@ -1618,7 +1618,7 @@ struct field_modify_info modify_tcp[] = {
 				(item, (const uint8_t *)mask,
 				 (const uint8_t *)&rte_flow_item_port_id_mask,
 				 sizeof(struct rte_flow_item_port_id),
-				 error);
+				 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!spec)
@@ -1691,7 +1691,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_vlan),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
@@ -1778,11 +1778,126 @@ struct field_modify_info modify_tcp[] = {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
 					  " flags only");
-	return mlx5_flow_item_acceptable
-		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_gtp),
-		 error);
+	return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+					 (const uint8_t *)&nic_mask,
+					 sizeof(struct rte_flow_item_gtp),
+					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
+}
+
+/**
+ * Validate IPV4 item.
+ * Use existing validation function mlx5_flow_validate_item_ipv4(), and
+ * add specific validation of fragment_offset field,
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that hold the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv4(const struct rte_flow_item *item,
+			   uint64_t item_flags,
+			   uint64_t last_item,
+			   uint16_t ether_type,
+			   struct rte_flow_error *error)
+{
+	int ret;
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *last = item->last;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	rte_be16_t fragment_offset_spec = 0;
+	rte_be16_t fragment_offset_last = 0;
+	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
+		.hdr = {
+			.src_addr = RTE_BE32(0xffffffff),
+			.dst_addr = RTE_BE32(0xffffffff),
+			.type_of_service = 0xff,
+			.fragment_offset = RTE_BE16(0xffff),
+			.next_proto_id = 0xff,
+			.time_to_live = 0xff,
+		},
+	};
+
+	ret = mlx5_flow_validate_item_ipv4(item, item_flags, last_item,
+					   ether_type, &nic_ipv4_mask,
+					   MLX5_ITEM_RANGE_ACCEPTED, error);
+	if (ret < 0)
+		return ret;
+	if (spec && mask)
+		fragment_offset_spec = spec->hdr.fragment_offset &
+				       mask->hdr.fragment_offset;
+	if (!fragment_offset_spec)
+		return 0;
+	/*
+	 * spec and mask are valid, enforce using full mask to make sure the
+	 * complete value is used correctly.
+	 */
+	if ((mask->hdr.fragment_offset & RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+			!= RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+					  item, "must use full mask for"
+					  " fragment_offset");
+	/*
+	 * Match on fragment_offset 0x2000 means MF is 1 and frag-offset is 0,
+	 * indicating this is 1st fragment of fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "match on first fragment not "
+					  "supported");
+	if (fragment_offset_spec && !last)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "specified value not supported");
+	/* spec and last are valid, validate the specified range. */
+	fragment_offset_last = last->hdr.fragment_offset &
+			       mask->hdr.fragment_offset;
+	/*
+	 * Match on fragment_offset spec 0x2001 and last 0x3fff
+	 * means MF is 1 and frag-offset is > 0.
+	 * This packet is fragment 2nd and onward, excluding last.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG + 1) &&
+	    fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on following "
+					  "fragments not supported");
+	/*
+	 * Match on fragment_offset spec 0x0001 and last 0x1fff
+	 * means MF is 0 and frag-offset is > 0.
+	 * This packet is last fragment of fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(1) &&
+	    fragment_offset_last == RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on last "
+					  "fragment not supported");
+	/*
+	 * Match on fragment_offset spec 0x0001 and last 0x3fff
+	 * means MF and/or frag-offset is not 0.
+	 * This is a fragmented packet.
+	 * Other range values are invalid and rejected.
+	 */
+	if (!(fragment_offset_spec == RTE_BE16(1) &&
+	      fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+					  "specified range not supported");
+	return 0;
 }
 
 /**
@@ -5084,15 +5199,6 @@ struct field_modify_info modify_tcp[] = {
 			.dst_port = RTE_BE16(UINT16_MAX),
 		}
 	};
-	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
-		.hdr = {
-			.src_addr = RTE_BE32(0xffffffff),
-			.dst_addr = RTE_BE32(0xffffffff),
-			.type_of_service = 0xff,
-			.next_proto_id = 0xff,
-			.time_to_live = 0xff,
-		},
-	};
 	const struct rte_flow_item_ipv6 nic_ipv6_mask = {
 		.hdr = {
 			.src_addr =
@@ -5192,11 +5298,9 @@ struct field_modify_info modify_tcp[] = {
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
 						  &item_flags, &tunnel);
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type,
-							   &nic_ipv4_mask,
-							   error);
+			ret = flow_dv_validate_item_ipv4(items, item_flags,
+							 last_item, ether_type,
+							 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
@@ -6296,6 +6400,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv4_m->hdr.time_to_live);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv4_m->hdr.fragment_offset));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 62c18b8..276bcb5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1312,10 +1312,11 @@
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type, NULL,
-							   error);
+			ret = mlx5_flow_validate_item_ipv4
+						(items, item_flags,
+						 last_item, ether_type, NULL,
+						 MLX5_ITEM_RANGE_NOT_ACCEPTED,
+						 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
-- 
1.8.3.1



* [dpdk-dev] [PATCH 3/5] net/mlx5: support match on IPv6 fragment packets
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

This patch adds to the MLX5 PMD support for matching on IPv6 fragmented
and non-fragmented packets, using the new rte_flow field has_frag_ext,
added following RFC [1].

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html
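
For illustration only (not part of the patch), matching all fragmented
IPv6 packets with the new field could look like the hypothetical sketch
below; spec 0 with the same mask would match non-fragmented packets:

  #include <rte_flow.h>

  /* Match IPv6 packets that carry a fragment extension header. */
  static const struct rte_flow_item_ipv6 ipv6_spec = { .has_frag_ext = 1 };
  static const struct rte_flow_item_ipv6 ipv6_mask = { .has_frag_ext = 1 };
  static const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV6,
            .spec = &ipv6_spec, .mask = &ipv6_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };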

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 +-
 drivers/net/mlx5/mlx5_flow_dv.c        | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 792d547..ceb4770 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -152,7 +152,7 @@ New Features
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
 
-  * Added support for matching on fragmented/non-fragmented IPv4 packets.
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 08e6f74..6f7377b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5211,6 +5211,7 @@ struct field_modify_info modify_tcp[] = {
 			.proto = 0xff,
 			.hop_limits = 0xff,
 		},
+		.has_frag_ext = 1,
 	};
 	const struct rte_flow_item_ecpri nic_ecpri_mask = {
 		.hdr = {
@@ -6519,6 +6520,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv6_m->hdr.hop_limits);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv6_m->has_frag_ext));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext));
 }
 
 /**
-- 
1.8.3.1



* [dpdk-dev] [PATCH 4/5] net/mlx5: support match on IPv6 fragment ext. item
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

An rte_flow update, following RFC [1], added the ipv6_frag_ext item to
ethdev.
This patch adds to the MLX5 PMD the option to match on this item type.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html
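
As a hypothetical usage sketch (not part of the patch): the validation
added here rejects matching on specific frag_data values and ranges, so
the typical use of the new item is to match on the presence of the
fragment extension header, optionally qualifying its 'next header' field:

  #include <netinet/in.h>
  #include <rte_flow.h>

  /* Match IPv6 fragments whose fragment extension header is followed
   * by UDP (next_header == IPPROTO_UDP). */
  static const struct rte_flow_item_ipv6_frag_ext fext_spec = {
          .hdr.next_header = IPPROTO_UDP,
  };
  static const struct rte_flow_item_ipv6_frag_ext fext_mask = {
          .hdr.next_header = 0xff,
  };
  static const struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
          { .type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
            .spec = &fext_spec, .mask = &fext_mask },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };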

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |   4 +
 drivers/net/mlx5/mlx5_flow_dv.c | 209 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 213 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1e30c93..376519f 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -122,6 +122,10 @@ enum mlx5_feature_name {
 /* Pattern eCPRI Layer bit. */
 #define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
 
+/* IPv6 Fragment Extension Header bit. */
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT (1u << 30)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT (1u << 31)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6f7377b..d5feee6 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1901,6 +1901,120 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Validate IPV6 fragment extension item.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that hold the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv6_frag_ext(const struct rte_flow_item *item,
+				    uint64_t item_flags,
+				    struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6_frag_ext *spec = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext *last = item->last;
+	const struct rte_flow_item_ipv6_frag_ext *mask = item->mask;
+	rte_be16_t frag_data_spec = 0;
+	rte_be16_t frag_data_last = 0;
+	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+	const uint64_t l4m = tunnel ? MLX5_FLOW_LAYER_INNER_L4 :
+				      MLX5_FLOW_LAYER_OUTER_L4;
+	int ret = 0;
+	struct rte_flow_item_ipv6_frag_ext nic_mask = {
+		.hdr = {
+			.next_header = 0xff,
+			.frag_data = RTE_BE16(0xffff),
+		},
+	};
+
+	if (item_flags & l4m)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "ipv6 fragment extension item cannot "
+					  "follow L4 item.");
+	if ((tunnel && !(item_flags & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
+	    (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV6)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "ipv6 fragment extension item must "
+					  "follow ipv6 item");
+	if (spec && mask)
+		frag_data_spec = spec->hdr.frag_data & mask->hdr.frag_data;
+	if (!frag_data_spec)
+		return 0;
+	/*
+	 * spec and mask are valid, enforce using full mask to make sure the
+	 * complete value is used correctly.
+	 */
+	if ((mask->hdr.frag_data & RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) !=
+				RTE_BE16(RTE_IPV6_FRAG_USED_MASK))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+					  item, "must use full mask for"
+					  " frag_data");
+	/*
+	 * Match on frag_data 0x0001 means M is 1 and frag-offset is 0.
+	 * This is 1st fragment of fragmented packet.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_MF_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "match on first fragment not "
+					  "supported");
+	if (frag_data_spec && !last)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "specified value not supported");
+	ret = mlx5_flow_item_acceptable
+				(item, (const uint8_t *)mask,
+				 (const uint8_t *)&nic_mask,
+				 sizeof(struct rte_flow_item_ipv6_frag_ext),
+				 MLX5_ITEM_RANGE_ACCEPTED, error);
+	if (ret)
+		return ret;
+	/* spec and last are valid, validate the specified range. */
+	frag_data_last = last->hdr.frag_data & mask->hdr.frag_data;
+	/*
+	 * Match on frag_data spec 0x0009 and last 0xfff9
+	 * means M is 1 and frag-offset is > 0.
+	 * This packet is fragment 2nd and onward, excluding last.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN |
+				       RTE_IPV6_EHDR_MF_MASK) &&
+	    frag_data_last == RTE_BE16(RTE_IPV6_FRAG_USED_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on following "
+					  "fragments not supported");
+	/*
+	 * Match on frag_data spec 0x0008 and last 0xfff8
+	 * means M is 0 and frag-offset is > 0.
+	 * This packet is last fragment of fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN) &&
+	    frag_data_last == RTE_BE16(RTE_IPV6_EHDR_FO_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on last "
+					  "fragment not supported");
+	/* Other range values are invalid and rejected. */
+	return rte_flow_error_set(error, EINVAL,
+				  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+				  "specified range not supported");
+}
+
+/**
  * Validate the pop VLAN action.
  *
  * @param[in] dev
@@ -5349,6 +5463,29 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			ret = flow_dv_validate_item_ipv6_frag_ext(items,
+								  item_flags,
+								  error);
+			if (ret < 0)
+				return ret;
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->spec)->hdr.next_header;
+				next_protocol &=
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp
 						(items, item_flags,
@@ -6527,6 +6664,57 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Add IPV6 fragment extension item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key,
+				     const struct rte_flow_item *item,
+				     int inner)
+{
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext nic_mask = {
+		.hdr = {
+			.next_header = 0xff,
+			.frag_data = RTE_BE16(0xffff),
+		},
+	};
+	void *headers_m;
+	void *headers_v;
+
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	/* IPv6 fragment extension item exists, so packet is IP fragment. */
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1);
+	if (!ipv6_frag_ext_v)
+		return;
+	if (!ipv6_frag_ext_m)
+		ipv6_frag_ext_m = &nic_mask;
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol,
+		 ipv6_frag_ext_m->hdr.next_header);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+		 ipv6_frag_ext_v->hdr.next_header &
+		 ipv6_frag_ext_m->hdr.next_header);
+}
+
+/**
  * Add TCP item to matcher and to the value.
  *
  * @param[in, out] matcher
@@ -8881,6 +9069,27 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			flow_dv_translate_item_ipv6_frag_ext(match_mask,
+							     match_value,
+							     items, tunnel);
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->spec)->hdr.next_header;
+				next_protocol &=
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			flow_dv_translate_item_tcp(match_mask, match_value,
 						   items, tunnel);
-- 
1.8.3.1



* [dpdk-dev] [PATCH 5/5] net/mlx5: enforce limitation on IPv6 next proto
From: Dekel Peled @ 2020-10-13 14:16 UTC
  To: orika, thomas, ferruh.yigit, arybchenko, konstantin.ananyev,
	olivier.matz, wenzhuo.lu, beilei.xing, bernard.iremonger, matan,
	shahafs, viacheslavo
  Cc: dev

Due to a PRM requirement, the IPv6 header item 'proto' field, which
indicates the next header protocol, must not be set to an IPv6 extension
header type.
This patch adds the relevant validation and documents the limitation.
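
As a hypothetical illustration (not part of the patch), a pattern such as
the following is now rejected by validation, because protocol 44
(IPPROTO_FRAGMENT) is an extension header type:

  #include <netinet/in.h>
  #include <rte_flow.h>

  /* Rejected: 'proto' set to an extension header protocol. */
  static const struct rte_flow_item_ipv6 bad_spec = {
          .hdr.proto = IPPROTO_FRAGMENT,
  };
  static const struct rte_flow_item_ipv6 bad_mask = {
          .hdr.proto = 0xff,
  };

Instead, the IPv6 item should leave 'proto' unmasked; the following
protocol, if needed, is given in the last extension header item, e.g. the
ipv6_frag_ext item's 'next header' field.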

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst     |  7 +++++++
 drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a174cdd..b7a4dce 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -311,6 +311,13 @@ Limitations
     for some NICs (such as ConnectX-6 Dx and BlueField 2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
+- The IPv6 header item 'proto' field, indicating the next header protocol,
+  should not be set as an extension header.
+  If the next header is an extension header, it should not be specified in
+  the IPv6 header item 'proto' field.
+  The last extension header item 'next header' field can specify the following
+  header protocol type.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 38cfd0f..84931a3 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1977,9 +1977,9 @@ struct mlx5_flow_tunnel_info {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "IPv6 cannot follow L2/VLAN layer "
 					  "which ether type is not IPv6");
+	if (mask && spec)
+		next_proto = mask->hdr.proto & spec->hdr.proto;
 	if (item_flags & MLX5_FLOW_LAYER_IPV6_ENCAP) {
-		if (mask && spec)
-			next_proto = mask->hdr.proto & spec->hdr.proto;
 		if (next_proto == IPPROTO_IPIP || next_proto == IPPROTO_IPV6)
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1987,6 +1987,16 @@ struct mlx5_flow_tunnel_info {
 						  "multiple tunnel "
 						  "not supported");
 	}
+	if (next_proto == IPPROTO_HOPOPTS  ||
+	    next_proto == IPPROTO_ROUTING  ||
+	    next_proto == IPPROTO_FRAGMENT ||
+	    next_proto == IPPROTO_ESP	   ||
+	    next_proto == IPPROTO_AH	   ||
+	    next_proto == IPPROTO_DSTOPTS)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "IPv6 proto (next header) should "
+					  "not be set as extension header");
 	if (item_flags & MLX5_FLOW_LAYER_IPIP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets
From: Dekel Peled @ 2020-10-15 14:05 UTC
  To: matan, shahafs, viacheslavo; +Cc: dev

Patch series [1] adds to ethdev and testpmd support for matching on
fragmented/non-fragmented packets.
Following [1], this patch series adds to the MLX5 PMD support for
matching on IPv4 fragmented packets, IPv6 fragmented packets, and the
IPv6 fragment extension header item.

[1] https://mails.dpdk.org/archives/dev/2020-October/186177.html

---
v2: rebase on top of latest commits.
---

Dekel Peled (5):
  net/mlx5: remove handling of ICMP fragmented packets
  net/mlx5: support match on IPv4 fragment packets
  net/mlx5: support match on IPv6 fragment packets
  net/mlx5: support match on IPv6 fragment ext. item
  net/mlx5: enforce limitation on IPv6 next proto

 doc/guides/nics/mlx5.rst               |   7 +
 doc/guides/rel_notes/release_20_11.rst |   5 +
 drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
 drivers/net/mlx5/mlx5_flow.h           |  14 ++
 drivers/net/mlx5/mlx5_flow_dv.c        | 382 +++++++++++++++++++++++++++++----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 6 files changed, 420 insertions(+), 59 deletions(-)

-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 1/5] net/mlx5: remove handling of ICMP fragmented packets
From: Dekel Peled @ 2020-10-15 14:05 UTC
  To: matan, shahafs, viacheslavo; +Cc: dev

Commit [1] forced setting a match on the 'frag' bit, with mask 1 and
value 0.
Subsequent patches in this series add support for matching on fragmented
and non-fragmented packets on L3 items, so this setting becomes
redundant.

This patch removes the changes done in [1].

[1] commit 85407db9f60d ("net/mlx5: fix matching for ICMP fragments")

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 361c32d..3fa51c3 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7560,12 +7560,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp6_m)
 		icmp6_m = &rte_flow_item_icmp6_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv6 ICMPv6 packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type,
 		 icmp6_v->type & icmp6_m->type);
@@ -7615,12 +7609,6 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	if (!icmp_m)
 		icmp_m = &rte_flow_item_icmp_mask;
-	/*
-	 * Force flow only to match the non-fragmented IPv4 ICMP packets.
-	 * If only the protocol is specified, no need to match the frag.
-	 */
-	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
-	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 0);
 	MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type,
 		 icmp_m->hdr.icmp_type);
 	MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type,
-- 
1.8.3.1



* [dpdk-dev] [PATCH v2 2/5] net/mlx5: support match on IPv4 fragment packets
From: Dekel Peled @ 2020-10-15 14:05 UTC
  To: matan, shahafs, viacheslavo; +Cc: dev

This patch adds to the MLX5 PMD support for matching on IPv4 fragmented
and non-fragmented packets, using the IPv4 header fragment_offset field.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst |   5 ++
 drivers/net/mlx5/mlx5_flow.c           |  48 ++++++----
 drivers/net/mlx5/mlx5_flow.h           |  10 +++
 drivers/net/mlx5/mlx5_flow_dv.c        | 156 ++++++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
 5 files changed, 183 insertions(+), 45 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index f8686a5..18e2c5e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -214,6 +214,11 @@ New Features
   * Added new ``RTE_ACL_CLASSIFY_AVX512X32`` vector implementation,
     which can process up to 32 flows in parallel. Requires AVX512 support.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on fragmented/non-fragmented IPv4 packets.
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3d38e11..1116ebb 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -800,6 +800,8 @@ struct mlx5_flow_tunnel_info {
  *   Bit-masks covering supported fields by the NIC to compare with user mask.
  * @param[in] size
  *   Bit-masks size in bytes.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -811,6 +813,7 @@ struct mlx5_flow_tunnel_info {
 			  const uint8_t *mask,
 			  const uint8_t *nic_mask,
 			  unsigned int size,
+			  bool range_accepted,
 			  struct rte_flow_error *error)
 {
 	unsigned int i;
@@ -828,7 +831,7 @@ struct mlx5_flow_tunnel_info {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "mask/last without a spec is not"
 					  " supported");
-	if (item->spec && item->last) {
+	if (item->spec && item->last && !range_accepted) {
 		uint8_t spec[size];
 		uint8_t last[size];
 		unsigned int i;
@@ -1603,7 +1606,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_icmp6_mask,
-		 sizeof(struct rte_flow_item_icmp6), error);
+		 sizeof(struct rte_flow_item_icmp6),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1661,7 +1665,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_icmp), error);
+		 sizeof(struct rte_flow_item_icmp),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1716,7 +1721,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_eth),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -1770,7 +1775,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_vlan),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
@@ -1822,6 +1827,8 @@ struct mlx5_flow_tunnel_info {
  * @param[in] acc_mask
  *   Acceptable mask, if NULL default internal default mask
  *   will be used to check whether item fields are supported.
+ * @param[in] range_accepted
+ *   True if range of values is accepted for specific fields, false otherwise.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -1834,6 +1841,7 @@ struct mlx5_flow_tunnel_info {
 			     uint64_t last_item,
 			     uint16_t ether_type,
 			     const struct rte_flow_item_ipv4 *acc_mask,
+			     bool range_accepted,
 			     struct rte_flow_error *error)
 {
 	const struct rte_flow_item_ipv4 *mask = item->mask;
@@ -1904,7 +1912,7 @@ struct mlx5_flow_tunnel_info {
 					acc_mask ? (const uint8_t *)acc_mask
 						 : (const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_ipv4),
-					error);
+					range_accepted, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2003,7 +2011,7 @@ struct mlx5_flow_tunnel_info {
 					acc_mask ? (const uint8_t *)acc_mask
 						 : (const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_ipv6),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2058,7 +2066,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_udp_mask,
-		 sizeof(struct rte_flow_item_udp), error);
+		 sizeof(struct rte_flow_item_udp), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2113,7 +2122,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)flow_mask,
-		 sizeof(struct rte_flow_item_tcp), error);
+		 sizeof(struct rte_flow_item_tcp), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2167,7 +2177,7 @@ struct mlx5_flow_tunnel_info {
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_vxlan_mask,
 		 sizeof(struct rte_flow_item_vxlan),
-		 error);
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (spec) {
@@ -2238,7 +2248,7 @@ struct mlx5_flow_tunnel_info {
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_vxlan_gpe_mask,
 		 sizeof(struct rte_flow_item_vxlan_gpe),
-		 error);
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (spec) {
@@ -2312,7 +2322,7 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&gre_key_default_mask,
-		 sizeof(rte_be32_t), error);
+		 sizeof(rte_be32_t), MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -2364,7 +2374,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_gre), error);
+		 sizeof(struct rte_flow_item_gre), MLX5_ITEM_RANGE_NOT_ACCEPTED,
+		 error);
 	if (ret < 0)
 		return ret;
 #ifndef HAVE_MLX5DV_DR
@@ -2439,7 +2450,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 				  (item, (const uint8_t *)mask,
 				   (const uint8_t *)&nic_mask,
-				   sizeof(struct rte_flow_item_geneve), error);
+				   sizeof(struct rte_flow_item_geneve),
+				   MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (spec) {
@@ -2522,7 +2534,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_mpls_mask,
-		 sizeof(struct rte_flow_item_mpls), error);
+		 sizeof(struct rte_flow_item_mpls),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2577,7 +2590,8 @@ struct mlx5_flow_tunnel_info {
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
 		 (const uint8_t *)&rte_flow_item_nvgre_mask,
-		 sizeof(struct rte_flow_item_nvgre), error);
+		 sizeof(struct rte_flow_item_nvgre),
+		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -2671,7 +2685,7 @@ struct mlx5_flow_tunnel_info {
 					 acc_mask ? (const uint8_t *)acc_mask
 						  : (const uint8_t *)&nic_mask,
 					 sizeof(struct rte_flow_item_ecpri),
-					 error);
+					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 }
 
 /* Allocate unique ID for the split Q/RSS subflows. */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 0dc76c4..b6efbff 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -331,6 +331,14 @@ enum mlx5_feature_name {
 #define MLX5_ENCAPSULATION_DECISION_SIZE (sizeof(struct rte_flow_item_eth) + \
 					  sizeof(struct rte_flow_item_ipv4))
 
+/* IPv4 fragment_offset field contains relevant data in bits 2 to 15. */
+#define MLX5_IPV4_FRAG_OFFSET_MASK \
+		(RTE_IPV4_HDR_OFFSET_MASK | RTE_IPV4_HDR_MF_FLAG)
+
+/* Specific item's fields can accept a range of values (using spec and last). */
+#define MLX5_ITEM_RANGE_NOT_ACCEPTED	false
+#define MLX5_ITEM_RANGE_ACCEPTED	true
+
 /* Software header modify action numbers of a flow. */
 #define MLX5_ACT_NUM_MDF_IPV4		1
 #define MLX5_ACT_NUM_MDF_IPV6		4
@@ -1046,6 +1054,7 @@ int mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 			      const uint8_t *mask,
 			      const uint8_t *nic_mask,
 			      unsigned int size,
+			      bool range_accepted,
 			      struct rte_flow_error *error);
 int mlx5_flow_validate_item_eth(const struct rte_flow_item *item,
 				uint64_t item_flags,
@@ -1063,6 +1072,7 @@ int mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
 				 uint64_t last_item,
 				 uint16_t ether_type,
 				 const struct rte_flow_item_ipv4 *acc_mask,
+				 bool range_accepted,
 				 struct rte_flow_error *error);
 int mlx5_flow_validate_item_ipv6(const struct rte_flow_item *item,
 				 uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3fa51c3..ff97f78 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1426,7 +1426,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_mark),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	return 0;
@@ -1502,7 +1502,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_meta),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	return ret;
 }
 
@@ -1555,7 +1555,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_tag),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
 		return ret;
 	if (mask->index != 0xff)
@@ -1626,7 +1626,7 @@ struct field_modify_info modify_tcp[] = {
 				(item, (const uint8_t *)mask,
 				 (const uint8_t *)&rte_flow_item_port_id_mask,
 				 sizeof(struct rte_flow_item_port_id),
-				 error);
+				 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!spec)
@@ -1699,7 +1699,7 @@ struct field_modify_info modify_tcp[] = {
 	ret = mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
 					(const uint8_t *)&nic_mask,
 					sizeof(struct rte_flow_item_vlan),
-					error);
+					MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret)
 		return ret;
 	if (!tunnel && mask->tci != RTE_BE16(0x0fff)) {
@@ -1786,11 +1786,126 @@ struct field_modify_info modify_tcp[] = {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "Match is supported for GTP"
 					  " flags only");
-	return mlx5_flow_item_acceptable
-		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&nic_mask,
-		 sizeof(struct rte_flow_item_gtp),
-		 error);
+	return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+					 (const uint8_t *)&nic_mask,
+					 sizeof(struct rte_flow_item_gtp),
+					 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
+}
+
+/**
+ * Validate IPV4 item.
+ * Use existing validation function mlx5_flow_validate_item_ipv4(), and
+ * add specific validation of fragment_offset field.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that hold the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv4(const struct rte_flow_item *item,
+			   uint64_t item_flags,
+			   uint64_t last_item,
+			   uint16_t ether_type,
+			   struct rte_flow_error *error)
+{
+	int ret;
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *last = item->last;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+	rte_be16_t fragment_offset_spec = 0;
+	rte_be16_t fragment_offset_last = 0;
+	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
+		.hdr = {
+			.src_addr = RTE_BE32(0xffffffff),
+			.dst_addr = RTE_BE32(0xffffffff),
+			.type_of_service = 0xff,
+			.fragment_offset = RTE_BE16(0xffff),
+			.next_proto_id = 0xff,
+			.time_to_live = 0xff,
+		},
+	};
+
+	ret = mlx5_flow_validate_item_ipv4(item, item_flags, last_item,
+					   ether_type, &nic_ipv4_mask,
+					   MLX5_ITEM_RANGE_ACCEPTED, error);
+	if (ret < 0)
+		return ret;
+	if (spec && mask)
+		fragment_offset_spec = spec->hdr.fragment_offset &
+				       mask->hdr.fragment_offset;
+	if (!fragment_offset_spec)
+		return 0;
+	/*
+	 * spec and mask are valid, enforce using full mask to make sure the
+	 * complete value is used correctly.
+	 */
+	if ((mask->hdr.fragment_offset & RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+			!= RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+					  item, "must use full mask for"
+					  " fragment_offset");
+	/*
+	 * Match on fragment_offset 0x2000 means MF is 1 and frag-offset is 0,
+	 * indicating this is 1st fragment of fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "match on first fragment not "
+					  "supported");
+	if (fragment_offset_spec && !last)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "specified value not supported");
+	/* spec and last are valid, validate the specified range. */
+	fragment_offset_last = last->hdr.fragment_offset &
+			       mask->hdr.fragment_offset;
+	/*
+	 * Match on fragment_offset spec 0x2001 and last 0x3fff
+	 * means MF is 1 and frag-offset is > 0.
+	 * This packet is a 2nd or later fragment, excluding the last one.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(RTE_IPV4_HDR_MF_FLAG + 1) &&
+	    fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on following "
+					  "fragments not supported");
+	/*
+	 * Match on fragment_offset spec 0x0001 and last 0x1fff
+	 * means MF is 0 and frag-offset is > 0.
+	 * This packet is the last fragment of a fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (fragment_offset_spec == RTE_BE16(1) &&
+	    fragment_offset_last == RTE_BE16(RTE_IPV4_HDR_OFFSET_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on last "
+					  "fragment not supported");
+	/*
+	 * Match on fragment_offset spec 0x0001 and last 0x3fff
+	 * means MF and/or frag-offset is not 0.
+	 * This is a fragmented packet.
+	 * Other range values are invalid and rejected.
+	 */
+	if (!(fragment_offset_spec == RTE_BE16(1) &&
+	      fragment_offset_last == RTE_BE16(MLX5_IPV4_FRAG_OFFSET_MASK)))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+					  "specified range not supported");
+	return 0;
 }
 
 /**
@@ -5290,15 +5405,6 @@ struct field_modify_info modify_tcp[] = {
 			.dst_port = RTE_BE16(UINT16_MAX),
 		}
 	};
-	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
-		.hdr = {
-			.src_addr = RTE_BE32(0xffffffff),
-			.dst_addr = RTE_BE32(0xffffffff),
-			.type_of_service = 0xff,
-			.next_proto_id = 0xff,
-			.time_to_live = 0xff,
-		},
-	};
 	const struct rte_flow_item_ipv6 nic_ipv6_mask = {
 		.hdr = {
 			.src_addr =
@@ -5398,11 +5504,9 @@ struct field_modify_info modify_tcp[] = {
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
 						  &item_flags, &tunnel);
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type,
-							   &nic_ipv4_mask,
-							   error);
+			ret = flow_dv_validate_item_ipv4(items, item_flags,
+							 last_item, ether_type,
+							 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
@@ -6511,6 +6615,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv4_m->hdr.time_to_live);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv4_m->hdr.fragment_offset));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset));
 }
 
 /**
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 62c18b8..276bcb5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1312,10 +1312,11 @@
 			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
-			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
-							   last_item,
-							   ether_type, NULL,
-							   error);
+			ret = mlx5_flow_validate_item_ipv4
+						(items, item_flags,
+						 last_item, ether_type, NULL,
+						 MLX5_ITEM_RANGE_NOT_ACCEPTED,
+						 error);
 			if (ret < 0)
 				return ret;
 			last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread
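
For reference, here is a minimal application-side sketch (illustrative
only, not part of the patch) of the one fragment_offset range the
validation above accepts: spec 0x0001 with last 0x3fff under the full
0x3fff mask, i.e. "any fragment". Pattern and variable names are
hypothetical:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match any fragmented IPv4 packet (MF set and/or offset > 0). */
static const struct rte_flow_item_ipv4 ipv4_spec = {
	.hdr = { .fragment_offset = RTE_BE16(0x0001) },
};
static const struct rte_flow_item_ipv4 ipv4_last = {
	.hdr = { .fragment_offset = RTE_BE16(0x3fff) },
};
static const struct rte_flow_item_ipv4 ipv4_mask = {
	.hdr = { .fragment_offset = RTE_BE16(0x3fff) },
};

static const struct rte_flow_item frag_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{
		.type = RTE_FLOW_ITEM_TYPE_IPV4,
		.spec = &ipv4_spec,
		.last = &ipv4_last,
		.mask = &ipv4_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

Matching non-fragmented packets needs no range: a spec fragment_offset
of 0 under the same full mask makes the translation above set the frag
bit mask to 1 and its value to 0.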

* [dpdk-dev] [PATCH v2 3/5] net/mlx5: support match on IPv6 fragment packets
  2020-10-15 14:05 ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Dekel Peled
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 1/5] net/mlx5: remove handling of ICMP " Dekel Peled
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 2/5] net/mlx5: support match on IPv4 fragment packets Dekel Peled
@ 2020-10-15 14:05   ` Dekel Peled
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 4/5] net/mlx5: support match on IPv6 fragment ext. item Dekel Peled
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Dekel Peled @ 2020-10-15 14:05 UTC (permalink / raw)
  To: matan, shahafs, viacheslavo; +Cc: dev

This patch adds to MLX5 PMD the support of matching on IPv6
fragmented and non-fragmented packets, using the new field
has_frag_ext, added to rte_flow following RFC [1].

[1] https://mails.dpdk.org/archives/dev/2020-August/177257.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_20_11.rst | 2 +-
 drivers/net/mlx5/mlx5_flow_dv.c        | 5 +++++
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 18e2c5e..097117e 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -218,7 +218,7 @@ New Features
 
   Updated Mellanox mlx5 driver with new features and improvements, including:
 
-  * Added support for matching on fragmented/non-fragmented IPv4 packets.
+  * Added support for matching on fragmented/non-fragmented IPv4/IPv6 packets.
 
 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index ff97f78..b61a361 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -5417,6 +5417,7 @@ struct field_modify_info modify_tcp[] = {
 			.proto = 0xff,
 			.hop_limits = 0xff,
 		},
+		.has_frag_ext = 1,
 	};
 	const struct rte_flow_item_ecpri nic_ecpri_mask = {
 		.hdr = {
@@ -6734,6 +6735,10 @@ struct field_modify_info modify_tcp[] = {
 		 ipv6_m->hdr.hop_limits);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit,
 		 ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag,
+		 !!(ipv6_m->has_frag_ext));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag,
+		 !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext));
 }
 
 /**
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread
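
For reference, a minimal application-side sketch (illustrative only, not
part of the patch) of a pattern using the new has_frag_ext field; names
are hypothetical:

#include <rte_flow.h>

/* Match IPv6 packets that carry a fragment extension header. */
static const struct rte_flow_item_ipv6 ipv6_spec = {
	.has_frag_ext = 1,
};
static const struct rte_flow_item_ipv6 ipv6_mask = {
	.has_frag_ext = 1,	/* Match on this bit only. */
};

static const struct rte_flow_item ipv6_frag_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{
		.type = RTE_FLOW_ITEM_TYPE_IPV6,
		.spec = &ipv6_spec,
		.mask = &ipv6_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

Setting has_frag_ext to 0 under the same mask matches non-fragmented
IPv6 packets instead, mirroring the frag bit translation added above.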

* [dpdk-dev] [PATCH v2 4/5] net/mlx5: support match on IPv6 fragment ext. item
  2020-10-15 14:05 ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Dekel Peled
                     ` (2 preceding siblings ...)
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 3/5] net/mlx5: support match on IPv6 " Dekel Peled
@ 2020-10-15 14:05   ` Dekel Peled
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: enforce limitation on IPv6 next proto Dekel Peled
  2020-10-18 15:14   ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Raslan Darawsheh
  5 siblings, 0 replies; 13+ messages in thread
From: Dekel Peled @ 2020-10-15 14:05 UTC (permalink / raw)
  To: matan, shahafs, viacheslavo; +Cc: dev

An rte_flow update, following RFC [1], added the rte_flow item
ipv6_frag_ext to ethdev.
This patch adds to MLX5 PMD the option to match on this item type.

[1] http://mails.dpdk.org/archives/dev/2020-March/160255.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |   4 +
 drivers/net/mlx5/mlx5_flow_dv.c | 209 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 213 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b6efbff..ec6aa19 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -122,6 +122,10 @@ enum mlx5_feature_name {
 /* Pattern eCPRI Layer bit. */
 #define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
 
+/* IPv6 Fragment Extension Header bit. */
+#define MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT (1u << 30)
+#define MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT (1u << 31)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b61a361..06c8b08 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1909,6 +1909,120 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Validate IPv6 fragment extension item.
+ *
+ * @param[in] item
+ *   Item specification.
+ * @param[in] item_flags
+ *   Bit-fields that hold the items detected until now.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_validate_item_ipv6_frag_ext(const struct rte_flow_item *item,
+				    uint64_t item_flags,
+				    struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6_frag_ext *spec = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext *last = item->last;
+	const struct rte_flow_item_ipv6_frag_ext *mask = item->mask;
+	rte_be16_t frag_data_spec = 0;
+	rte_be16_t frag_data_last = 0;
+	const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+	const uint64_t l4m = tunnel ? MLX5_FLOW_LAYER_INNER_L4 :
+				      MLX5_FLOW_LAYER_OUTER_L4;
+	int ret = 0;
+	struct rte_flow_item_ipv6_frag_ext nic_mask = {
+		.hdr = {
+			.next_header = 0xff,
+			.frag_data = RTE_BE16(0xffff),
+		},
+	};
+
+	if (item_flags & l4m)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "ipv6 fragment extension item cannot "
+					  "follow L4 item.");
+	if ((tunnel && !(item_flags & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
+	    (!tunnel && !(item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV6)))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "ipv6 fragment extension item must "
+					  "follow ipv6 item");
+	if (spec && mask)
+		frag_data_spec = spec->hdr.frag_data & mask->hdr.frag_data;
+	if (!frag_data_spec)
+		return 0;
+	/*
+	 * spec and mask are valid, enforce using full mask to make sure the
+	 * complete value is used correctly.
+	 */
+	if ((mask->hdr.frag_data & RTE_BE16(RTE_IPV6_FRAG_USED_MASK)) !=
+				RTE_BE16(RTE_IPV6_FRAG_USED_MASK))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+					  item, "must use full mask for"
+					  " frag_data");
+	/*
+	 * Match on frag_data 0x0001 means M is 1 and frag-offset is 0.
+	 * This is the first fragment of a fragmented packet.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_MF_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "match on first fragment not "
+					  "supported");
+	if (frag_data_spec && !last)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "specified value not supported");
+	ret = mlx5_flow_item_acceptable
+				(item, (const uint8_t *)mask,
+				 (const uint8_t *)&nic_mask,
+				 sizeof(struct rte_flow_item_ipv6_frag_ext),
+				 MLX5_ITEM_RANGE_ACCEPTED, error);
+	if (ret)
+		return ret;
+	/* spec and last are valid, validate the specified range. */
+	frag_data_last = last->hdr.frag_data & mask->hdr.frag_data;
+	/*
+	 * Match on frag_data spec 0x0009 and last 0xfff9
+	 * means M is 1 and frag-offset is > 0.
+	 * This packet is a 2nd or later fragment, excluding the last one.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN |
+				       RTE_IPV6_EHDR_MF_MASK) &&
+	    frag_data_last == RTE_BE16(RTE_IPV6_FRAG_USED_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on following "
+					  "fragments not supported");
+	/*
+	 * Match on frag_data spec 0x0008 and last 0xfff8
+	 * means M is 0 and frag-offset is > 0.
+	 * This packet is the last fragment of a fragmented packet.
+	 * This is not yet supported in MLX5, return appropriate
+	 * error message.
+	 */
+	if (frag_data_spec == RTE_BE16(RTE_IPV6_EHDR_FO_ALIGN) &&
+	    frag_data_last == RTE_BE16(RTE_IPV6_EHDR_FO_MASK))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM_LAST,
+					  last, "match on last "
+					  "fragment not supported");
+	/* Other range values are invalid and rejected. */
+	return rte_flow_error_set(error, EINVAL,
+				  RTE_FLOW_ERROR_TYPE_ITEM_LAST, last,
+				  "specified range not supported");
+}
+
+/**
  * Validate the pop VLAN action.
  *
  * @param[in] dev
@@ -5555,6 +5669,29 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			ret = flow_dv_validate_item_ipv6_frag_ext(items,
+								  item_flags,
+								  error);
+			if (ret < 0)
+				return ret;
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->spec)->hdr.next_header;
+				next_protocol &=
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp
 						(items, item_flags,
@@ -6742,6 +6879,57 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Add IPV6 fragment extension item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key,
+				     const struct rte_flow_item *item,
+				     int inner)
+{
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask;
+	const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec;
+	const struct rte_flow_item_ipv6_frag_ext nic_mask = {
+		.hdr = {
+			.next_header = 0xff,
+			.frag_data = RTE_BE16(0xffff),
+		},
+	};
+	void *headers_m;
+	void *headers_v;
+
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	/* IPv6 fragment extension item exists, so packet is IP fragment. */
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1);
+	if (!ipv6_frag_ext_v)
+		return;
+	if (!ipv6_frag_ext_m)
+		ipv6_frag_ext_m = &nic_mask;
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol,
+		 ipv6_frag_ext_m->hdr.next_header);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+		 ipv6_frag_ext_v->hdr.next_header &
+		 ipv6_frag_ext_m->hdr.next_header);
+}
+
+/**
  * Add TCP item to matcher and to the value.
  *
  * @param[in, out] matcher
@@ -9844,6 +10032,27 @@ struct field_modify_info modify_tcp[] = {
 				next_protocol = 0xff;
 			}
 			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT:
+			flow_dv_translate_item_ipv6_frag_ext(match_mask,
+							     match_value,
+							     items, tunnel);
+			last_item = tunnel ?
+					MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT :
+					MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT;
+			if (items->mask != NULL &&
+			    ((const struct rte_flow_item_ipv6_frag_ext *)
+			     items->mask)->hdr.next_header) {
+				next_protocol =
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->spec)->hdr.next_header;
+				next_protocol &=
+				((const struct rte_flow_item_ipv6_frag_ext *)
+				 items->mask)->hdr.next_header;
+			} else {
+				/* Reset for inner layer. */
+				next_protocol = 0xff;
+			}
+			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			flow_dv_translate_item_tcp(match_mask, match_value,
 						   items, tunnel);
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread
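
For reference, a minimal application-side sketch (illustrative only, not
part of the patch) of the ipv6_frag_ext item as translated above: the
item's presence matches fragments, and next_header narrows the match to
a given payload protocol. Names are hypothetical:

#include <netinet/in.h>
#include <rte_flow.h>

/* Match IPv6 fragments whose fragment header precedes UDP. */
static const struct rte_flow_item_ipv6_frag_ext frag_ext_spec = {
	.hdr = { .next_header = IPPROTO_UDP },
};
static const struct rte_flow_item_ipv6_frag_ext frag_ext_mask = {
	.hdr = { .next_header = 0xff },
};

static const struct rte_flow_item frag_ext_pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH },
	{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },	/* Required preceding item. */
	{
		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
		.spec = &frag_ext_spec,
		.mask = &frag_ext_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};

Leaving frag_data unset keeps the pattern inside what the validation
above accepts; the first/last/following-fragment frag_data ranges are
rejected with ENOTSUP, as the validation comments explain.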

* [dpdk-dev] [PATCH v2 5/5] net/mlx5: enforce limitation on IPv6 next proto
  2020-10-15 14:05 ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Dekel Peled
                     ` (3 preceding siblings ...)
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 4/5] net/mlx5: support match on IPv6 fragment ext. item Dekel Peled
@ 2020-10-15 14:05   ` Dekel Peled
  2020-10-18 15:14   ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Raslan Darawsheh
  5 siblings, 0 replies; 13+ messages in thread
From: Dekel Peled @ 2020-10-15 14:05 UTC (permalink / raw)
  To: matan, shahafs, viacheslavo; +Cc: dev

Due to a PRM requirement, the IPv6 header item 'proto' field, indicating
the next header protocol, should not be set as an extension header.
This patch adds the relevant validation and documents the limitation.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst     |  7 +++++++
 drivers/net/mlx5/mlx5_flow.c | 14 ++++++++++++--
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a071db2..c2bd737 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -317,6 +317,13 @@ Limitations
   - The E-Switch Sample flow must have the eswitch_manager VPORT destination (PF or ECPF) and no additional actions.
   - For ConnectX-5, the ``RTE_FLOW_ACTION_TYPE_SAMPLE`` is typically used as first action in the E-Switch egress flow if with header modify or encapsulation actions.
 
+- IPv6 header item 'proto' field, indicating the next header protocol, should
+  not be set as an extension header.
+  If the next header is an extension header, it should not be specified in the
+  IPv6 header item 'proto' field.
+  The last extension header item 'next header' field can specify the following
+  header protocol type.
+
 Statistics
 ----------
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 1116ebb..2922cae 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1977,9 +1977,9 @@ struct mlx5_flow_tunnel_info {
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "IPv6 cannot follow L2/VLAN layer "
 					  "which ether type is not IPv6");
+	if (mask && spec)
+		next_proto = mask->hdr.proto & spec->hdr.proto;
 	if (item_flags & MLX5_FLOW_LAYER_IPV6_ENCAP) {
-		if (mask && spec)
-			next_proto = mask->hdr.proto & spec->hdr.proto;
 		if (next_proto == IPPROTO_IPIP || next_proto == IPPROTO_IPV6)
 			return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -1987,6 +1987,16 @@ struct mlx5_flow_tunnel_info {
 						  "multiple tunnel "
 						  "not supported");
 	}
+	if (next_proto == IPPROTO_HOPOPTS  ||
+	    next_proto == IPPROTO_ROUTING  ||
+	    next_proto == IPPROTO_FRAGMENT ||
+	    next_proto == IPPROTO_ESP	   ||
+	    next_proto == IPPROTO_AH	   ||
+	    next_proto == IPPROTO_DSTOPTS)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM, item,
+					  "IPv6 proto (next header) should "
+					  "not be set as extension header");
 	if (item_flags & MLX5_FLOW_LAYER_IPIP)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 13+ messages in thread
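
For reference, a short sketch (illustrative only, not part of the patch)
of what this validation rejects versus allows; names are hypothetical:

#include <netinet/in.h>
#include <rte_flow.h>

/* Rejected: 'proto' set to an extension header protocol. */
static const struct rte_flow_item_ipv6 bad_ipv6_spec = {
	.hdr = { .proto = IPPROTO_FRAGMENT },	/* EINVAL per this patch. */
};

/* Allowed: give the next protocol in the last extension header item
 * instead, e.g. an ipv6_frag_ext item with next_header set to UDP.
 */
static const struct rte_flow_item_ipv6_frag_ext good_frag_ext_spec = {
	.hdr = { .next_header = IPPROTO_UDP },
};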

* Re: [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets
  2020-10-15 14:05 ` [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3 fragmented packets Dekel Peled
                     ` (4 preceding siblings ...)
  2020-10-15 14:05   ` [dpdk-dev] [PATCH v2 5/5] net/mlx5: enforce limitation on IPv6 next proto Dekel Peled
@ 2020-10-18 15:14   ` Raslan Darawsheh
  5 siblings, 0 replies; 13+ messages in thread
From: Raslan Darawsheh @ 2020-10-18 15:14 UTC (permalink / raw)
  To: Dekel Peled, Matan Azrad, Shahaf Shuler, Slava Ovsiienko; +Cc: dev

Hi,

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Dekel Peled
> Sent: Thursday, October 15, 2020 5:06 PM
> To: Matan Azrad <matan@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>
> Cc: dev@dpdk.org
> Subject: [dpdk-dev] [PATCH v2 0/5] net/mlx5: support match on L3
> fragmented packets
> 
> Patch series [1] adds to ethdev and testpmd the support of matching on
> fragmented/non-fragmented packets.
> Following [1], this patch series adds to MLX5 PMD the support of matching
> on IPv4 fragmented packets, IPv6 fragmented packets, and IPv6 fragment
> extension header item.
> 
> [1]
> https://mails.dpdk.org/archives/dev/2020-October/186177.html
> 
> ---
> v2: rebase on top of latest commits.
> ---
> 
> Dekel Peled (5):
>   net/mlx5: remove handling of ICMP fragmented packets
>   net/mlx5: support match on IPv4 fragment packets
>   net/mlx5: support match on IPv6 fragment packets
>   net/mlx5: support match on IPv6 fragment ext. item
>   net/mlx5: enforce limitation on IPv6 next proto
> 
>  doc/guides/nics/mlx5.rst               |   7 +
>  doc/guides/rel_notes/release_20_11.rst |   5 +
>  drivers/net/mlx5/mlx5_flow.c           |  62 ++++--
>  drivers/net/mlx5/mlx5_flow.h           |  14 ++
>  drivers/net/mlx5/mlx5_flow_dv.c        | 382
> +++++++++++++++++++++++++++++----
>  drivers/net/mlx5/mlx5_flow_verbs.c     |   9 +-
>  6 files changed, 420 insertions(+), 59 deletions(-)
> 
> --
> 1.8.3.1

Series applied to next-net-mlx,


Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 13+ messages in thread
