DPDK patches and discussions
* [dpdk-dev] [PATCH v2 0/2] support VXLAN header last 8-bits reserved field matching
@ 2021-07-05  9:50 rongwei liu
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support rongwei liu
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching rongwei liu
  0 siblings, 2 replies; 34+ messages in thread
From: rongwei liu @ 2021-07-05  9:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas; +Cc: dev, rasland

This update adds support for matching on the last reserved 8 bits
of the VXLAN header when creating SW steering rules.

Add a new testpmd pattern field 'last_rsvd' that supports matching
on the last 8 bits of the VXLAN header.

rongwei liu (2):
  drivers: add VXLAN header the last 8-bits matching support
  app/testpmd: support VXLAN last 8-bits field matching

 app/test-pmd/cmdline_flow.c                 |   9 ++
 app/test-pmd/util.c                         |   5 +-
 doc/guides/nics/mlx5.rst                    |  11 +-
 doc/guides/rel_notes/release_21_08.rst      |   7 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
 drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
 drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
 drivers/net/mlx5/mlx5.h                     |   2 +
 drivers/net/mlx5/mlx5_flow.c                |  26 +++-
 drivers/net/mlx5/mlx5_flow.h                |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
 15 files changed, 294 insertions(+), 67 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support
  2021-07-05  9:50 [dpdk-dev] [PATCH v2 0/2] support VXLAN header last 8-bits reserved field matching rongwei liu
@ 2021-07-05  9:50 ` rongwei liu
  2021-07-06 12:35   ` Thomas Monjalon
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching rongwei liu
  1 sibling, 1 reply; 34+ messages in thread
From: rongwei liu @ 2021-07-05  9:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Shahaf Shuler; +Cc: dev, rasland

This update adds support for matching on the VXLAN alert bits when
creating steering rules. At the PCIe probe stage, we create a
dummy VXLAN matcher using misc5 to check the rdma-core library's
capability.

The logic is: group 0 relies on HCA_CAP to enable misc or misc5
for VXLAN matching, while non-zero groups rely on the rdma-core
capability.

Signed-off-by: rongwei liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst             |  11 +-
 drivers/common/mlx5/mlx5_devx_cmds.c |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h |   6 +
 drivers/common/mlx5/mlx5_prm.h       |  41 +++++--
 drivers/net/mlx5/linux/mlx5_os.c     |  77 +++++++++++++
 drivers/net/mlx5/mlx5.h              |   2 +
 drivers/net/mlx5/mlx5_flow.c         |  26 ++++-
 drivers/net/mlx5/mlx5_flow.h         |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c      | 160 +++++++++++++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c   |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c  |   6 +-
 11 files changed, 274 insertions(+), 65 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index eb44a070b1..88401226d8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -189,8 +189,15 @@ Limitations
   size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
   inline settings) to 58.
 
-- Flows with a VXLAN Network Identifier equal (or ends to be equal)
-  to 0 are not supported.
+- Match on VXLAN supports the following fields only:
+
+     - VNI
+     - Last reserved 8-bits
+
+  Last reserved 8-bits matching is only supported when using DV flow
+  engine (``dv_flow_en`` = 1).
+  Group zero's behavior may differ, depending on FW.
+  Matching on a value of 0 (value & mask) is not supported.
 
 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index f5914bce32..63ae95832d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->log_max_ft_sampler_num = MLX5_GET
 		(flow_table_nic_cap, hcattr,
 		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->flow.tunnel_header_0_1 = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 ft_field_support_2_nic_receive.tunnel_header_0_1);
 	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f8a17b886b..124f43e852 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
 	uint64_t doorbell_bar_offset;
 };
 
+struct mlx5_hca_flow_attr {
+	uint32_t tunnel_header_0_1;
+	uint32_t tunnel_header_2_3;
+};
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4
 
@@ -155,6 +160,7 @@ struct mlx5_hca_attr {
 	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
+	struct mlx5_hca_flow_attr flow;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 26761f5bd3..7950070976 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 reserved_at_100[0x100];
 };
 
+struct mlx5_ifc_fte_match_set_misc5_bits {
+	u8 macsec_tag_0[0x20];
+	u8 macsec_tag_1[0x20];
+	u8 macsec_tag_2[0x20];
+	u8 macsec_tag_3[0x20];
+	u8 tunnel_header_0[0x20];
+	u8 tunnel_header_1[0x20];
+	u8 tunnel_header_2[0x20];
+	u8 tunnel_header_3[0x20];
+	u8 reserved[0x100];
+};
+
 /* Flow matcher. */
 struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
 	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
 	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
+	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
 /*
  * Add reserved bit to match the struct size with the size defined in PRM.
  * This extension is not required in Linux.
  */
 #ifndef HAVE_INFINIBAND_VERBS_H
-	u8 reserved_0[0x400];
+	u8 reserved_0[0x200];
 #endif
 };
 
@@ -1007,6 +1020,7 @@ enum {
 	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
 };
 
 enum {
@@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
  * Table 1872 - Flow Table Fields Supported 2 Format
  */
 struct mlx5_ifc_ft_fields_support_2_bits {
-	u8 reserved_at_0[0x14];
+	u8 reserved_at_0[0xf];
+	u8 tunnel_header_2_3[0x1];
+	u8 tunnel_header_0_1[0x1];
+	u8 macsec_syndrome[0x1];
+	u8 macsec_tag[0x1];
+	u8 outer_lrh_sl[0x1];
 	u8 inner_ipv4_ihl[0x1];
 	u8 outer_ipv4_ihl[0x1];
 	u8 psp_syndrome[0x1];
@@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
 	u8 inner_l4_checksum_ok[0x1];
 	u8 outer_ipv4_checksum_ok[0x1];
 	u8 outer_l4_checksum_ok[0x1];
+	u8 reserved_at_20[0x60];
 };
 
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8 reserved_at_0[0x200];
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_nic_receive;
+		flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_rdma;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_sniffer;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit_rdma;
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_unused[5];
-	u8 reserved_at_1C0[0x200];
-	u8 header_modify_nic_receive[0x400];
+		flow_table_properties_nic_transmit_sniffer;
+	u8 reserved_at_e00[0x600];
 	struct mlx5_ifc_ft_fields_support_2_bits
-	       ft_field_support_2_nic_receive;
+		ft_field_support_2_nic_receive;
 };
 
 struct mlx5_ifc_cmd_hca_cap_2_bits {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 92b3009786..4111c01ecb 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
 	return ret;
 }
 
+/**
+ * Detect whether the misc5 matcher parameter is supported.
+ *
+ * @param[in] priv
+ *   Device private data pointer
+ */
+#ifdef HAVE_MLX5DV_DR
+static void
+__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
+	 * Case: IPv4--->UDP--->VxLAN--->vni
+	 */
+	void *tbl;
+	struct mlx5_flow_dv_match_params matcher_mask;
+	void *match_m;
+	void *matcher;
+	void *headers_m;
+	void *misc5_m;
+	uint32_t *tunnel_header_m;
+	struct mlx5dv_flow_matcher_attr dv_attr;
+
+	memset(&matcher_mask, 0, sizeof(matcher_mask));
+	matcher_mask.size = sizeof(matcher_mask.buf);
+	match_m = matcher_mask.buf;
+	headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers);
+	misc5_m = MLX5_ADDR_OF(fte_match_param,
+			       match_m, misc_parameters_5);
+	tunnel_header_m = (uint32_t *)
+				MLX5_ADDR_OF(fte_match_set_misc5,
+				misc5_m, tunnel_header_1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
+	*tunnel_header_m = 0xffffff;
+
+	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
+	if (!tbl) {
+		DRV_LOG(INFO, "No SW steering support");
+		return;
+	}
+	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
+	dv_attr.match_mask = (void *)&matcher_mask,
+	dv_attr.match_criteria_enable =
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) |
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
+	dv_attr.priority = 3;
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	void *misc2_m;
+	if (priv->config.dv_esw_en) {
+		/* FDB enabled reg_c_0 */
+		dv_attr.match_criteria_enable |=
+				(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
+		misc2_m = MLX5_ADDR_OF(fte_match_param,
+				       match_m, misc_parameters_2);
+		MLX5_SET(fte_match_set_misc2, misc2_m,
+			 metadata_reg_c_0, 0xffff);
+	}
+#endif
+	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
+						    &dv_attr, tbl);
+	if (matcher) {
+		priv->sh->misc5_cap = 1;
+		mlx5_glue->dv_destroy_flow_matcher(matcher);
+	}
+	mlx5_glue->dr_destroy_flow_tbl(tbl);
+#else
+	RTE_SET_USED(priv);
+#endif
+}
+#endif
+
 /**
  * Verbs callback to free a memory.
  *
@@ -355,6 +428,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 			mlx5_glue->dr_reclaim_domain_memory(sh->fdb_domain, 1);
 	}
 	sh->pop_vlan_action = mlx5_glue->dr_create_flow_action_pop_vlan();
+
+	__mlx5_discovery_misc5_cap(priv);
 #endif /* HAVE_MLX5DV_DR */
 	sh->default_miss_action =
 			mlx5_glue->dr_create_flow_action_default_miss();
@@ -1304,6 +1379,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				goto error;
 			}
 		}
+		if (config->hca_attr.flow.tunnel_header_0_1)
+			sh->tunnel_header_0_1 = 1;
 #endif
 #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
 		if (config->hca_attr.flow_hit_aso &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1b2dc8f815..e53fbc6126 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1071,6 +1071,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
 	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
 	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
+	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
+	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	void *ctx; /* Verbs/DV/DevX context. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3b7c94d92f..bb0c99fa06 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2395,12 +2395,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
 /**
  * Validate VXLAN item.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
  * @param[in] item
  *   Item specification.
  * @param[in] item_flags
  *   Bit-fields that holds the items detected until now.
- * @param[in] target_protocol
- *   The next protocol in the previous item.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -2408,24 +2410,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+			      const struct rte_flow_item *item,
 			      uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_item_vxlan *spec = item->spec;
 	const struct rte_flow_item_vxlan *mask = item->mask;
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	union vni {
 		uint32_t vlan_id;
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
-
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
+	const struct rte_flow_item_vxlan *valid_mask;
 
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple tunnel layers not"
 					  " supported");
+	valid_mask = &rte_flow_item_vxlan_mask;
 	/*
 	 * Verify only UDPv4 is present as defined in
 	 * https://tools.ietf.org/html/rfc7348
@@ -2436,9 +2446,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
+	/* FDB domain & NIC domain non-zero group */
+	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
+		valid_mask = &nic_mask;
+	/* Group zero in NIC domain */
+	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
+		valid_mask = &nic_mask;
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&rte_flow_item_vxlan_mask,
+		 (const uint8_t *)valid_mask,
 		 sizeof(struct rte_flow_item_vxlan),
 		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2f2aa962f9..3739dcc319 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1521,8 +1521,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 				 uint64_t item_flags,
 				 struct rte_eth_dev *dev,
 				 struct rte_flow_error *error);
-int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+				  const struct rte_flow_item *item,
 				  uint64_t item_flags,
+				  const struct rte_flow_attr *attr,
 				  struct rte_flow_error *error);
 int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 				      uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a04a3c2bb8..eaa43ffa78 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6888,7 +6888,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_GRE_KEY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
@@ -7847,15 +7848,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
 	memset(dev_flow, 0, sizeof(*dev_flow));
 	dev_flow->handle = dev_handle;
 	dev_flow->handle_idx = handle_idx;
-	/*
-	 * In some old rdma-core releases, before continuing, a check of the
-	 * length of matching parameter will be done at first. It needs to use
-	 * the length without misc4 param. If the flow has misc4 support, then
-	 * the length needs to be adjusted accordingly. Each param member is
-	 * aligned with a 64B boundary naturally.
-	 */
-	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
-				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
 	dev_flow->ingress = attr->ingress;
 	dev_flow->dv.transfer = attr->transfer;
 	return dev_flow;
@@ -8636,6 +8629,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 /**
  * Add VXLAN item to matcher and to the value.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[in, out] matcher
  *   Flow matcher.
  * @param[in, out] key
@@ -8646,7 +8643,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
  *   Item is inner pattern.
  */
 static void
-flow_dv_translate_item_vxlan(void *matcher, void *key,
+flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     void *matcher, void *key,
 			     const struct rte_flow_item *item,
 			     int inner)
 {
@@ -8654,13 +8653,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
 	void *headers_m;
 	void *headers_v;
-	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	char *vni_m;
-	char *vni_v;
+	void *misc5_m;
+	void *misc5_v;
+	uint32_t *tunnel_header_v;
+	uint32_t *tunnel_header_m;
 	uint16_t dport;
-	int size;
-	int i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8679,14 +8681,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	}
 	if (!vxlan_v)
 		return;
-	if (!vxlan_m)
-		vxlan_m = &rte_flow_item_vxlan_mask;
-	size = sizeof(vxlan_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-	memcpy(vni_m, vxlan_m->vni, size);
-	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+	if (!vxlan_m) {
+		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
+		    (attr->group && !priv->sh->misc5_cap))
+			vxlan_m = &rte_flow_item_vxlan_mask;
+		else
+			vxlan_m = &nic_mask;
+	}
+	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
+	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
+		void *misc_m;
+		void *misc_v;
+		char *vni_m;
+		char *vni_v;
+		int size;
+		int i;
+		misc_m = MLX5_ADDR_OF(fte_match_param,
+				      matcher, misc_parameters);
+		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+		size = sizeof(vxlan_m->vni);
+		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+		memcpy(vni_m, vxlan_m->vni, size);
+		for (i = 0; i < size; ++i)
+			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+		return;
+	}
+	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
+	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
+	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_v,
+						   tunnel_header_1);
+	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_m,
+						   tunnel_header_1);
+	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	if (*tunnel_header_v)
+		*tunnel_header_m = vxlan_m->vni[0] |
+			vxlan_m->vni[1] << 8 |
+			vxlan_m->vni[2] << 16;
+	else
+		*tunnel_header_m = 0x0;
+	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
+		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
 }
 
 /**
@@ -9848,9 +9888,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
 	match_criteria_enable |=
 		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
 		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
 	return match_criteria_enable;
 }
 
+static void
+__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
+{
+	/*
+	 * Check flow matching criteria first, subtract misc5/4 length if flow
+	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
+	 * misc5/4 are not supported, and matcher creation failure is expected
+	 * without subtraction. If misc5 is provided, misc4 must be counted in
+	 * misc5 is right after misc4.
+	 */
+	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
+		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
+			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
+		if (!(match_criteria & (1 <<
+			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
+			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+		}
+	}
+}
+
 struct mlx5_hlist_entry *
 flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
 {
@@ -10117,6 +10180,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
 	*cache = *ref;
 	dv_attr.match_criteria_enable =
 		flow_dv_matcher_enable(cache->mask.buf);
+	__flow_dv_adjust_buf_size(&ref->mask.size,
+				  dv_attr.match_criteria_enable);
 	dv_attr.priority = ref->priority;
 	if (tbl->is_egress)
 		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
@@ -10166,7 +10231,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
 		.error = error,
 		.data = ref,
 	};
-
 	/**
 	 * tunnel offload API requires this registration for cases when
 	 * tunnel match rule was inserted before tunnel set rule.
@@ -12025,8 +12089,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	uint64_t action_flags = 0;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	int actions_n = 0;
@@ -12833,7 +12896,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			flow_dv_translate_item_vxlan(match_mask, match_value,
+			flow_dv_translate_item_vxlan(dev, attr,
+						     match_mask, match_value,
 						     items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_VXLAN;
@@ -12931,10 +12995,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
 						NULL,
 						"cannot create eCPRI parser");
 			}
-			/* Adjust the length matcher and device flow value. */
-			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
-			dev_flow->dv.value.size =
-					MLX5_ST_SZ_BYTES(fte_match_param);
 			flow_dv_translate_item_ecpri(dev, match_mask,
 						     match_value, items);
 			/* No other protocol should follow eCPRI layer. */
@@ -13235,6 +13295,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int idx;
 	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
+	uint8_t misc_mask;
 
 	MLX5_ASSERT(wks);
 	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
@@ -13305,6 +13366,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 			}
 			dv->actions[n++] = priv->sh->default_miss_action;
 		}
+		misc_mask = flow_dv_matcher_enable(dv->value.buf);
+		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
 		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
 					       (void *)&dv->value, n,
 					       dv->actions, &dh->drv_flow);
@@ -15353,14 +15416,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher = {
-		.size = sizeof(matcher.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher.buf),
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
+	uint8_t misc_mask;
 
 	if (!is_default_policy && (priv->representor || priv->master)) {
 		if (flow_dv_translate_item_port_id(dev, matcher.buf,
@@ -15374,6 +15436,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 				(enum modify_reg)color_reg_c_idx,
 				rte_col_2_mlx5_col(color),
 				UINT32_MAX);
+	misc_mask = flow_dv_matcher_enable(value.buf);
+	__flow_dv_adjust_buf_size(&value.size, misc_mask);
 	ret = mlx5_flow_os_create_flow(matcher_object,
 			(void *)&value, actions_n, actions, rule);
 	if (ret) {
@@ -15396,14 +15460,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 		.tbl = tbl_rsc,
 	};
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_cb_ctx ctx = {
 		.error = error,
@@ -15780,12 +15842,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	int domain, ret, i;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher_para = {
-		.size = sizeof(matcher_para.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher_para.buf),
 	};
 	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
 						     0, &error);
@@ -15794,8 +15854,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	struct mlx5_cache_entry *entry;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	struct mlx5_flow_dv_matcher *drop_matcher;
@@ -15803,6 +15862,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 		.error = &error,
 		.data = &matcher,
 	};
+	uint8_t misc_mask;
 
 	if (!priv->mtr_en || mtr_id_reg_c < 0) {
 		rte_errno = ENOTSUP;
@@ -15852,6 +15912,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 			actions[i++] = priv->sh->dr_drop_action;
 			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
 				(enum modify_reg)mtr_id_reg_c, 0, 0);
+			misc_mask = flow_dv_matcher_enable(value.buf);
+			__flow_dv_adjust_buf_size(&value.size, misc_mask);
 			ret = mlx5_flow_os_create_flow
 				(mtrmng->def_matcher[domain]->matcher_object,
 				(void *)&value, i, actions,
@@ -15895,6 +15957,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 					fm->drop_cnt, NULL);
 		actions[i++] = cnt->action;
 		actions[i++] = priv->sh->dr_drop_action;
+		misc_mask = flow_dv_matcher_enable(value.buf);
+		__flow_dv_adjust_buf_size(&value.size, misc_mask);
 		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
 					       (void *)&value, i, actions,
 					       &fm->drop_rule[domain]);
@@ -16175,10 +16239,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
 	if (ret)
 		goto err;
 	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
+	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &matcher);
 	if (ret)
 		goto err;
+	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
 				       actions, &flow);
 err:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9673310a..7b3d0b320d 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 1fcd24c002..383f003966 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 		/**< Matcher value. This value is used as the mask or a key. */
 	} matcher_mask = {
 				.size = sizeof(matcher_mask.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			},
 	  matcher_value = {
 				.size = sizeof(matcher_value.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			};
 	struct mlx5dv_flow_matcher_attr dv_attr = {
 		.type = IBV_FLOW_ATTR_NORMAL,
-- 
2.27.0



* [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching
  2021-07-05  9:50 [dpdk-dev] [PATCH v2 0/2] support VXLAN header last 8-bits reserved field matching rongwei liu
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support rongwei liu
@ 2021-07-05  9:50 ` rongwei liu
  2021-07-06 12:28   ` Thomas Monjalon
  1 sibling, 1 reply; 34+ messages in thread
From: rongwei liu @ 2021-07-05  9:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Xiaoyun Li; +Cc: dev, rasland

Add a new testpmd pattern field 'last_rsvd' that supports the
last 8-bits matching of VXLAN header.

Examples for the "last_rsvd" pattern field:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow will match only packets whose last 8 bits equal 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow will match only when the MSB of the last 8 bits is 1.

Signed-off-by: rongwei liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 9 +++++++++
 app/test-pmd/util.c                         | 5 +++--
 doc/guides/rel_notes/release_21_08.rst      | 7 +++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 +
 4 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1c587bb7b8..6e76a625ca 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -207,6 +207,7 @@ enum index {
 	ITEM_SCTP_CKSUM,
 	ITEM_VXLAN,
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_E_TAG,
 	ITEM_E_TAG_GRP_ECID_B,
 	ITEM_NVGRE,
@@ -1129,6 +1130,7 @@ static const enum index item_sctp[] = {
 
 static const enum index item_vxlan[] = {
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2806,6 +2808,13 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
 	},
+	[ITEM_VXLAN_LAST_RSVD] = {
+		.name = "last_rsvd",
+		.help = "VXLAN last reserved bits",
+		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
+					     rsvd1)),
+	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
 		.help = "match E-Tag header",
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index a9e431a8b2..59626518d5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  " - VXLAN packet: packet type =%d, "
-					  "Destination UDP port =%d, VNI = %d",
-					  packet_type, udp_port, vx_vni >> 8);
+					  "Destination UDP port =%d, VNI = %d, "
+					  "last_rsvd = %d", packet_type,
+					  udp_port, vx_vni >> 8, vx_vni & 0xff);
 			}
 		}
 		MKDUMPSTR(print_buf, buf_size, cur_len,
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index a6ecfdf3ce..ad89af8466 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -55,6 +55,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on vxlan header last 8-bits reserved field.
+
 
 Removed Items
 -------------
@@ -136,3 +142,4 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 33857acf54..4ca3103067 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
 - ``vxlan``: match VXLAN header.
 
   - ``vni {unsigned}``: VXLAN identifier.
+  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
 
 - ``e_tag``: match IEEE 802.1BR E-Tag header.
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching rongwei liu
@ 2021-07-06 12:28   ` Thomas Monjalon
  0 siblings, 0 replies; 34+ messages in thread
From: Thomas Monjalon @ 2021-07-06 12:28 UTC (permalink / raw)
  To: dev
  Cc: matan, viacheslavo, orika, Xiaoyun Li, rasland, rongwei liu,
	andrew.rybchenko, david.marchand, ajit.khaparde

+Cc more people

05/07/2021 11:50, rongwei liu:
> Add a new testpmd pattern field 'last_rsvd' that supports the
> last 8-bits matching of VXLAN header.
> 
> The examples for the "last_rsvd" pattern field are as below:
> 
> 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
> 
> This flow will exactly match the last 8-bits to be 0x80.
> 
> 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
> vxlan mask 0x80 / end ...
> 
> This flow will only match the MSB of the last 8-bits to be 1.
> 
> Signed-off-by: rongwei liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  app/test-pmd/cmdline_flow.c                 | 9 +++++++++
>  app/test-pmd/util.c                         | 5 +++--
>  doc/guides/rel_notes/release_21_08.rst      | 7 +++++++
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 +
>  4 files changed, 20 insertions(+), 2 deletions(-)
> 
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index 1c587bb7b8..6e76a625ca 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -207,6 +207,7 @@ enum index {
>  	ITEM_SCTP_CKSUM,
>  	ITEM_VXLAN,
>  	ITEM_VXLAN_VNI,
> +	ITEM_VXLAN_LAST_RSVD,
>  	ITEM_E_TAG,
>  	ITEM_E_TAG_GRP_ECID_B,
>  	ITEM_NVGRE,
> @@ -1129,6 +1130,7 @@ static const enum index item_sctp[] = {
>  
>  static const enum index item_vxlan[] = {
>  	ITEM_VXLAN_VNI,
> +	ITEM_VXLAN_LAST_RSVD,
>  	ITEM_NEXT,
>  	ZERO,
>  };
> @@ -2806,6 +2808,13 @@ static const struct token token_list[] = {
>  		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
>  		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
>  	},
> +	[ITEM_VXLAN_LAST_RSVD] = {
> +		.name = "last_rsvd",
> +		.help = "VXLAN last reserved bits",
> +		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
> +		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
> +					     rsvd1)),
> +	},
>  	[ITEM_E_TAG] = {
>  		.name = "e_tag",
>  		.help = "match E-Tag header",
> diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
> index a9e431a8b2..59626518d5 100644
> --- a/app/test-pmd/util.c
> +++ b/app/test-pmd/util.c
> @@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
>  				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
>  				MKDUMPSTR(print_buf, buf_size, cur_len,
>  					  " - VXLAN packet: packet type =%d, "
> -					  "Destination UDP port =%d, VNI = %d",
> -					  packet_type, udp_port, vx_vni >> 8);
> +					  "Destination UDP port =%d, VNI = %d, "
> +					  "last_rsvd = %d", packet_type,
> +					  udp_port, vx_vni >> 8, vx_vni & 0xff);
>  			}
>  		}
>  		MKDUMPSTR(print_buf, buf_size, cur_len,
> diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
> index a6ecfdf3ce..ad89af8466 100644
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -55,6 +55,12 @@ New Features
>       Also, make sure to start the actual text at the margin.
>       =======================================================
>  
> +* **Updated Mellanox mlx5 driver.**
> +
> +  Updated the Mellanox mlx5 driver with new features and improvements, including:
> +
> +  * Added support for matching on vxlan header last 8-bits reserved field.
> +
>  
>  Removed Items
>  -------------
> @@ -136,3 +142,4 @@ Tested Platforms
>     This section is a comment. Do not overwrite or remove it.
>     Also, make sure to start the actual text at the margin.
>     =======================================================
> +
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 33857acf54..4ca3103067 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
>  - ``vxlan``: match VXLAN header.
>  
>    - ``vni {unsigned}``: VXLAN identifier.
> +  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
>  
>  - ``e_tag``: match IEEE 802.1BR E-Tag header.
>  
> 






^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support
  2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support rongwei liu
@ 2021-07-06 12:35   ` Thomas Monjalon
  2021-07-07  8:09     ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Thomas Monjalon @ 2021-07-06 12:35 UTC (permalink / raw)
  To: rongwei liu
  Cc: matan, viacheslavo, orika, dev, rasland, andrew.rybchenko, ferruh.yigit

The title would be more accurate if starting with net/mlx5,
even if there is a small change for vDPA included.

05/07/2021 11:50, rongwei liu:
> This update adds support for the VXLAN alert bits matching when

There is an alert bit in the first byte, specified in this RFC draft:
https://datatracker.ietf.org/doc/html/draft-singh-nvo3-vxlan-router-alert-00

> creating steering rules. At the PCIe probe stage, we create a
> dummy VXLAN matcher using misc5 to check rdma-core library's
> capability.
> 
> The logic is, group 0 depends on HCA_CAP to enable misc or misc5
> for VXLAN matching while group non zero depends on the rdma-core
> capability.
> 
> Signed-off-by: rongwei liu <rongweil@nvidia.com>

Please use capitals in your "English-written name".
It could be something like "Rongwei Liu".




^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching
  2021-07-06 12:35   ` Thomas Monjalon
@ 2021-07-07  8:09     ` Rongwei Liu
  2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
                         ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-07  8:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas; +Cc: dev, rasland

This update adds support for matching the last 8-bits reserved
field of the VXLAN header when creating SW steering rules.

Add a new testpmd pattern field 'last_rsvd' that supports the last
8-bits matching of VXLAN header.

Rongwei Liu (2):
  net/mlx5: add VXLAN header the last 8-bits matching support
  app/testpmd: support VXLAN the last 8-bits field matching

 app/test-pmd/cmdline_flow.c                 |   9 ++
 app/test-pmd/util.c                         |   5 +-
 doc/guides/nics/mlx5.rst                    |  11 +-
 doc/guides/rel_notes/release_21_08.rst      |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
 drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
 drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
 drivers/net/mlx5/mlx5.h                     |   2 +
 drivers/net/mlx5/mlx5_flow.c                |  26 +++-
 drivers/net/mlx5/mlx5_flow.h                |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
 15 files changed, 293 insertions(+), 67 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread
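For readers following the misc5 path added in patch 1 of this series,
the tunnel_header_1 matcher word packs the three VNI bytes into bits
0..23 and the last reserved byte into bits 24..31, each byte pre-masked
by the pattern mask. A hedged sketch (the helper name is invented for
illustration; it is not the driver function):

```c
#include <stdint.h>

/* Pack a VXLAN VNI and the last reserved byte into a misc5
 * tunnel_header_1 value, applying the per-byte pattern mask the way
 * flow_dv_translate_item_vxlan() does: VNI bytes in bits 0..23,
 * last reserved byte in bits 24..31. */
static uint32_t
pack_tunnel_header_1(const uint8_t vni[3], uint8_t rsvd1,
		     const uint8_t vni_mask[3], uint8_t rsvd1_mask)
{
	uint32_t v = (uint32_t)(vni[0] & vni_mask[0]) |
		     (uint32_t)(vni[1] & vni_mask[1]) << 8 |
		     (uint32_t)(vni[2] & vni_mask[2]) << 16;

	v |= (uint32_t)(rsvd1 & rsvd1_mask) << 24;
	return v;
}
```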

* [dpdk-dev] [PATCH v4 1/2] net/mlx5: add VXLAN header the last 8-bits matching support
  2021-07-07  8:09     ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
@ 2021-07-07  8:09       ` Rongwei Liu
  2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
  2021-07-13  8:33       ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
  2 siblings, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-07  8:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Shahaf Shuler; +Cc: dev, rasland

This update adds support for matching the last 8 bits of the
VXLAN header when creating steering rules. At the PCIe probe
stage, we create a dummy VXLAN matcher using misc5 to check the
rdma-core library's capability.

The logic is: in group 0, HCA_CAP decides whether misc or misc5
is enabled for VXLAN matching, while non-zero groups depend on
the rdma-core capability.
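The group-dependent capability selection described above can be
sketched as follows (a minimal illustration, not the driver code;
the helper name is invented):

```c
#include <stdbool.h>
#include <stdint.h>

/* Decide whether the legacy misc (vxlan_vni) matcher field must be
 * used instead of misc5 tunnel_header_1: group 0 in the NIC domain
 * follows the HCA_CAP tunnel_header_0_1 bit, while non-zero groups
 * and the FDB (transfer) follow the rdma-core misc5 capability. */
static bool
use_legacy_misc(bool transfer, uint32_t group,
		bool tunnel_header_0_1, bool misc5_cap)
{
	if (group == 0 && !transfer)
		return !tunnel_header_0_1;
	return !misc5_cap;
}
```

This matches the condition used in flow_dv_translate_item_vxlan() in
the patch below.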

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst             |  11 +-
 drivers/common/mlx5/mlx5_devx_cmds.c |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h |   6 +
 drivers/common/mlx5/mlx5_prm.h       |  41 +++++--
 drivers/net/mlx5/linux/mlx5_os.c     |  77 +++++++++++++
 drivers/net/mlx5/mlx5.h              |   2 +
 drivers/net/mlx5/mlx5_flow.c         |  26 ++++-
 drivers/net/mlx5/mlx5_flow.h         |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c      | 160 +++++++++++++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c   |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c  |   6 +-
 11 files changed, 274 insertions(+), 65 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index a16af32e67..2ae7157617 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -189,8 +189,15 @@ Limitations
   size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
   inline settings) to 58.
 
-- Flows with a VXLAN Network Identifier equal (or ends to be equal)
-  to 0 are not supported.
+- Match on VXLAN supports the following fields only:
+
+     - VNI
+     - Last reserved 8-bits
+
+  Last reserved 8-bits matching is only supported when using the DV
+  flow engine (``dv_flow_en`` = 1).
+  Group zero's behavior may differ depending on FW.
+  Matching a value of 0 (after applying value & mask) is not supported.
 
 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index f5914bce32..63ae95832d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->log_max_ft_sampler_num = MLX5_GET
 		(flow_table_nic_cap, hcattr,
 		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->flow.tunnel_header_0_1 = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 ft_field_support_2_nic_receive.tunnel_header_0_1);
 	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f8a17b886b..124f43e852 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
 	uint64_t doorbell_bar_offset;
 };
 
+struct mlx5_hca_flow_attr {
+	uint32_t tunnel_header_0_1;
+	uint32_t tunnel_header_2_3;
+};
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4
 
@@ -155,6 +160,7 @@ struct mlx5_hca_attr {
 	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
+	struct mlx5_hca_flow_attr flow;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 26761f5bd3..7950070976 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 reserved_at_100[0x100];
 };
 
+struct mlx5_ifc_fte_match_set_misc5_bits {
+	u8 macsec_tag_0[0x20];
+	u8 macsec_tag_1[0x20];
+	u8 macsec_tag_2[0x20];
+	u8 macsec_tag_3[0x20];
+	u8 tunnel_header_0[0x20];
+	u8 tunnel_header_1[0x20];
+	u8 tunnel_header_2[0x20];
+	u8 tunnel_header_3[0x20];
+	u8 reserved[0x100];
+};
+
 /* Flow matcher. */
 struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
 	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
 	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
+	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
 /*
  * Add reserved bit to match the struct size with the size defined in PRM.
  * This extension is not required in Linux.
  */
 #ifndef HAVE_INFINIBAND_VERBS_H
-	u8 reserved_0[0x400];
+	u8 reserved_0[0x200];
 #endif
 };
 
@@ -1007,6 +1020,7 @@ enum {
 	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
 };
 
 enum {
@@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
  * Table 1872 - Flow Table Fields Supported 2 Format
  */
 struct mlx5_ifc_ft_fields_support_2_bits {
-	u8 reserved_at_0[0x14];
+	u8 reserved_at_0[0xf];
+	u8 tunnel_header_2_3[0x1];
+	u8 tunnel_header_0_1[0x1];
+	u8 macsec_syndrome[0x1];
+	u8 macsec_tag[0x1];
+	u8 outer_lrh_sl[0x1];
 	u8 inner_ipv4_ihl[0x1];
 	u8 outer_ipv4_ihl[0x1];
 	u8 psp_syndrome[0x1];
@@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
 	u8 inner_l4_checksum_ok[0x1];
 	u8 outer_ipv4_checksum_ok[0x1];
 	u8 outer_l4_checksum_ok[0x1];
+	u8 reserved_at_20[0x60];
 };
 
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8 reserved_at_0[0x200];
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_nic_receive;
+		flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_rdma;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_sniffer;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit_rdma;
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_unused[5];
-	u8 reserved_at_1C0[0x200];
-	u8 header_modify_nic_receive[0x400];
+		flow_table_properties_nic_transmit_sniffer;
+	u8 reserved_at_e00[0x600];
 	struct mlx5_ifc_ft_fields_support_2_bits
-	       ft_field_support_2_nic_receive;
+		ft_field_support_2_nic_receive;
 };
 
 struct mlx5_ifc_cmd_hca_cap_2_bits {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index b3f9e392ab..4fc6969a30 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
 	return ret;
 }
 
+/**
+ * Detect whether misc5 matching is supported.
+ *
+ * @param[in] priv
+ *   Device private data pointer
+ */
+#ifdef HAVE_MLX5DV_DR
+static void
+__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
+	 * Case: IPv4--->UDP--->VxLAN--->vni
+	 */
+	void *tbl;
+	struct mlx5_flow_dv_match_params matcher_mask;
+	void *match_m;
+	void *matcher;
+	void *headers_m;
+	void *misc5_m;
+	uint32_t *tunnel_header_m;
+	struct mlx5dv_flow_matcher_attr dv_attr;
+
+	memset(&matcher_mask, 0, sizeof(matcher_mask));
+	matcher_mask.size = sizeof(matcher_mask.buf);
+	match_m = matcher_mask.buf;
+	headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers);
+	misc5_m = MLX5_ADDR_OF(fte_match_param,
+			       match_m, misc_parameters_5);
+	tunnel_header_m = (uint32_t *)
+				MLX5_ADDR_OF(fte_match_set_misc5,
+				misc5_m, tunnel_header_1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
+	*tunnel_header_m = 0xffffff;
+
+	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
+	if (!tbl) {
+		DRV_LOG(INFO, "No SW steering support");
+		return;
+	}
+	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
+	dv_attr.match_mask = (void *)&matcher_mask,
+	dv_attr.match_criteria_enable =
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) |
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
+	dv_attr.priority = 3;
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	void *misc2_m;
+	if (priv->config.dv_esw_en) {
+		/* FDB enabled reg_c_0 */
+		dv_attr.match_criteria_enable |=
+				(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
+		misc2_m = MLX5_ADDR_OF(fte_match_param,
+				       match_m, misc_parameters_2);
+		MLX5_SET(fte_match_set_misc2, misc2_m,
+			 metadata_reg_c_0, 0xffff);
+	}
+#endif
+	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
+						    &dv_attr, tbl);
+	if (matcher) {
+		priv->sh->misc5_cap = 1;
+		mlx5_glue->dv_destroy_flow_matcher(matcher);
+	}
+	mlx5_glue->dr_destroy_flow_tbl(tbl);
+#else
+	RTE_SET_USED(priv);
+#endif
+}
+#endif
+
 /**
  * Verbs callback to free a memory.
  *
@@ -355,6 +428,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 			mlx5_glue->dr_reclaim_domain_memory(sh->fdb_domain, 1);
 	}
 	sh->pop_vlan_action = mlx5_glue->dr_create_flow_action_pop_vlan();
+
+	__mlx5_discovery_misc5_cap(priv);
 #endif /* HAVE_MLX5DV_DR */
 	sh->default_miss_action =
 			mlx5_glue->dr_create_flow_action_default_miss();
@@ -1304,6 +1379,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				goto error;
 			}
 		}
+		if (config->hca_attr.flow.tunnel_header_0_1)
+			sh->tunnel_header_0_1 = 1;
 #endif
 #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
 		if (config->hca_attr.flow_hit_aso &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 36d7c9ce77..dd9cdff4e0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1071,6 +1071,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
 	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
 	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
+	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
+	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	void *ctx; /* Verbs/DV/DevX context. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 459f03ff40..ce1d649347 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2398,12 +2398,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
 /**
  * Validate VXLAN item.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
  * @param[in] item
  *   Item specification.
  * @param[in] item_flags
  *   Bit-fields that holds the items detected until now.
- * @param[in] target_protocol
- *   The next protocol in the previous item.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -2411,24 +2413,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+			      const struct rte_flow_item *item,
 			      uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_item_vxlan *spec = item->spec;
 	const struct rte_flow_item_vxlan *mask = item->mask;
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	union vni {
 		uint32_t vlan_id;
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
-
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
+	const struct rte_flow_item_vxlan *valid_mask;
 
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple tunnel layers not"
 					  " supported");
+	valid_mask = &rte_flow_item_vxlan_mask;
 	/*
 	 * Verify only UDPv4 is present as defined in
 	 * https://tools.ietf.org/html/rfc7348
@@ -2439,9 +2449,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
+	/* FDB domain & NIC domain non-zero group */
+	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
+		valid_mask = &nic_mask;
+	/* Group zero in NIC domain */
+	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
+		valid_mask = &nic_mask;
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&rte_flow_item_vxlan_mask,
+		 (const uint8_t *)valid_mask,
 		 sizeof(struct rte_flow_item_vxlan),
 		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2f2aa962f9..3739dcc319 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1521,8 +1521,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 				 uint64_t item_flags,
 				 struct rte_eth_dev *dev,
 				 struct rte_flow_error *error);
-int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+				  const struct rte_flow_item *item,
 				  uint64_t item_flags,
+				  const struct rte_flow_attr *attr,
 				  struct rte_flow_error *error);
 int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 				      uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index a5a7990d53..87d5d5b90a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6924,7 +6924,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_GRE_KEY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
@@ -7884,15 +7885,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
 	memset(dev_flow, 0, sizeof(*dev_flow));
 	dev_flow->handle = dev_handle;
 	dev_flow->handle_idx = handle_idx;
-	/*
-	 * In some old rdma-core releases, before continuing, a check of the
-	 * length of matching parameter will be done at first. It needs to use
-	 * the length without misc4 param. If the flow has misc4 support, then
-	 * the length needs to be adjusted accordingly. Each param member is
-	 * aligned with a 64B boundary naturally.
-	 */
-	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
-				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
 	dev_flow->ingress = attr->ingress;
 	dev_flow->dv.transfer = attr->transfer;
 	return dev_flow;
@@ -8673,6 +8666,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 /**
  * Add VXLAN item to matcher and to the value.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[in, out] matcher
  *   Flow matcher.
  * @param[in, out] key
@@ -8683,7 +8680,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
  *   Item is inner pattern.
  */
 static void
-flow_dv_translate_item_vxlan(void *matcher, void *key,
+flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     void *matcher, void *key,
 			     const struct rte_flow_item *item,
 			     int inner)
 {
@@ -8691,13 +8690,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
 	void *headers_m;
 	void *headers_v;
-	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	char *vni_m;
-	char *vni_v;
+	void *misc5_m;
+	void *misc5_v;
+	uint32_t *tunnel_header_v;
+	uint32_t *tunnel_header_m;
 	uint16_t dport;
-	int size;
-	int i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8716,14 +8718,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	}
 	if (!vxlan_v)
 		return;
-	if (!vxlan_m)
-		vxlan_m = &rte_flow_item_vxlan_mask;
-	size = sizeof(vxlan_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-	memcpy(vni_m, vxlan_m->vni, size);
-	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+	if (!vxlan_m) {
+		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
+		    (attr->group && !priv->sh->misc5_cap))
+			vxlan_m = &rte_flow_item_vxlan_mask;
+		else
+			vxlan_m = &nic_mask;
+	}
+	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
+	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
+		void *misc_m;
+		void *misc_v;
+		char *vni_m;
+		char *vni_v;
+		int size;
+		int i;
+		misc_m = MLX5_ADDR_OF(fte_match_param,
+				      matcher, misc_parameters);
+		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+		size = sizeof(vxlan_m->vni);
+		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+		memcpy(vni_m, vxlan_m->vni, size);
+		for (i = 0; i < size; ++i)
+			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+		return;
+	}
+	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
+	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
+	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_v,
+						   tunnel_header_1);
+	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_m,
+						   tunnel_header_1);
+	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	if (*tunnel_header_v)
+		*tunnel_header_m = vxlan_m->vni[0] |
+			vxlan_m->vni[1] << 8 |
+			vxlan_m->vni[2] << 16;
+	else
+		*tunnel_header_m = 0x0;
+	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
+		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
 }
 
 /**
@@ -9887,9 +9927,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
 	match_criteria_enable |=
 		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
 		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
 	return match_criteria_enable;
 }
 
+static void
+__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
+{
+	/*
+	 * Check flow matching criteria first, subtract misc5/4 length if flow
+	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
+	 * misc5/4 are not supported, and matcher creation failure is expected
+	 * w/o subtraction. If misc5 is provided, misc4 must be counted in since
+	 * misc5 is right after misc4.
+	 */
+	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
+		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
+			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
+		if (!(match_criteria & (1 <<
+			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
+			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+		}
+	}
+}
+
 struct mlx5_hlist_entry *
 flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
 {
@@ -10156,6 +10219,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
 	*cache = *ref;
 	dv_attr.match_criteria_enable =
 		flow_dv_matcher_enable(cache->mask.buf);
+	__flow_dv_adjust_buf_size(&ref->mask.size,
+				  dv_attr.match_criteria_enable);
 	dv_attr.priority = ref->priority;
 	if (tbl->is_egress)
 		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
@@ -10205,7 +10270,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
 		.error = error,
 		.data = ref,
 	};
-
 	/**
 	 * tunnel offload API requires this registration for cases when
 	 * tunnel match rule was inserted before tunnel set rule.
@@ -12064,8 +12128,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	uint64_t action_flags = 0;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	int actions_n = 0;
@@ -12872,7 +12935,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			flow_dv_translate_item_vxlan(match_mask, match_value,
+			flow_dv_translate_item_vxlan(dev, attr,
+						     match_mask, match_value,
 						     items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_VXLAN;
@@ -12970,10 +13034,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
 						NULL,
 						"cannot create eCPRI parser");
 			}
-			/* Adjust the length matcher and device flow value. */
-			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
-			dev_flow->dv.value.size =
-					MLX5_ST_SZ_BYTES(fte_match_param);
 			flow_dv_translate_item_ecpri(dev, match_mask,
 						     match_value, items);
 			/* No other protocol should follow eCPRI layer. */
@@ -13274,6 +13334,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int idx;
 	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
+	uint8_t misc_mask;
 
 	MLX5_ASSERT(wks);
 	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
@@ -13344,6 +13405,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 			}
 			dv->actions[n++] = priv->sh->default_miss_action;
 		}
+		misc_mask = flow_dv_matcher_enable(dv->value.buf);
+		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
 		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
 					       (void *)&dv->value, n,
 					       dv->actions, &dh->drv_flow);
@@ -15392,14 +15455,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher = {
-		.size = sizeof(matcher.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher.buf),
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
+	uint8_t misc_mask;
 
 	if (match_src_port && (priv->representor || priv->master)) {
 		if (flow_dv_translate_item_port_id(dev, matcher.buf,
@@ -15413,6 +15475,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 				(enum modify_reg)color_reg_c_idx,
 				rte_col_2_mlx5_col(color),
 				UINT32_MAX);
+	misc_mask = flow_dv_matcher_enable(value.buf);
+	__flow_dv_adjust_buf_size(&value.size, misc_mask);
 	ret = mlx5_flow_os_create_flow(matcher_object,
 			(void *)&value, actions_n, actions, rule);
 	if (ret) {
@@ -15435,14 +15499,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 		.tbl = tbl_rsc,
 	};
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_cb_ctx ctx = {
 		.error = error,
@@ -15822,12 +15884,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	int domain, ret, i;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher_para = {
-		.size = sizeof(matcher_para.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher_para.buf),
 	};
 	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
 						     0, &error);
@@ -15836,8 +15896,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	struct mlx5_cache_entry *entry;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	struct mlx5_flow_dv_matcher *drop_matcher;
@@ -15845,6 +15904,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 		.error = &error,
 		.data = &matcher,
 	};
+	uint8_t misc_mask;
 
 	if (!priv->mtr_en || mtr_id_reg_c < 0) {
 		rte_errno = ENOTSUP;
@@ -15894,6 +15954,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 			actions[i++] = priv->sh->dr_drop_action;
 			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
 				(enum modify_reg)mtr_id_reg_c, 0, 0);
+			misc_mask = flow_dv_matcher_enable(value.buf);
+			__flow_dv_adjust_buf_size(&value.size, misc_mask);
 			ret = mlx5_flow_os_create_flow
 				(mtrmng->def_matcher[domain]->matcher_object,
 				(void *)&value, i, actions,
@@ -15937,6 +15999,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 					fm->drop_cnt, NULL);
 		actions[i++] = cnt->action;
 		actions[i++] = priv->sh->dr_drop_action;
+		misc_mask = flow_dv_matcher_enable(value.buf);
+		__flow_dv_adjust_buf_size(&value.size, misc_mask);
 		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
 					       (void *)&value, i, actions,
 					       &fm->drop_rule[domain]);
@@ -16217,10 +16281,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
 	if (ret)
 		goto err;
 	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
+	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &matcher);
 	if (ret)
 		goto err;
+	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
 				       actions, &flow);
 err:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9673310a..7b3d0b320d 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 1fcd24c002..383f003966 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 		/**< Matcher value. This value is used as the mask or a key. */
 	} matcher_mask = {
 				.size = sizeof(matcher_mask.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			},
 	  matcher_value = {
 				.size = sizeof(matcher_value.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			};
 	struct mlx5dv_flow_matcher_attr dv_attr = {
 		.type = IBV_FLOW_ATTR_NORMAL,
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v4 2/2] app/testpmd: support VXLAN the last 8-bits field matching
  2021-07-07  8:09     ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
@ 2021-07-07  8:09       ` Rongwei Liu
  2021-07-13  8:33       ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
  2 siblings, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-07  8:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Xiaoyun Li; +Cc: dev, rasland

Add a new testpmd pattern field 'last_rsvd' that supports the
last 8-bits matching of VXLAN header.

Examples for the "last_rsvd" pattern field are shown below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow will exactly match the last 8-bits to be 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow will only match the MSB of the last 8-bits to be 1.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 9 +++++++++
 app/test-pmd/util.c                         | 5 +++--
 doc/guides/rel_notes/release_21_08.rst      | 6 ++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 +
 4 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 1c587bb7b8..6e76a625ca 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -207,6 +207,7 @@ enum index {
 	ITEM_SCTP_CKSUM,
 	ITEM_VXLAN,
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_E_TAG,
 	ITEM_E_TAG_GRP_ECID_B,
 	ITEM_NVGRE,
@@ -1129,6 +1130,7 @@ static const enum index item_sctp[] = {
 
 static const enum index item_vxlan[] = {
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2806,6 +2808,13 @@ static const struct token token_list[] = {
 		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
 	},
+	[ITEM_VXLAN_LAST_RSVD] = {
+		.name = "last_rsvd",
+		.help = "VXLAN last reserved bits",
+		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
+					     rsvd1)),
+	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
 		.help = "match E-Tag header",
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index a9e431a8b2..59626518d5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  " - VXLAN packet: packet type =%d, "
-					  "Destination UDP port =%d, VNI = %d",
-					  packet_type, udp_port, vx_vni >> 8);
+					  "Destination UDP port =%d, VNI = %d, "
+					  "last_rsvd = %d", packet_type,
+					  udp_port, vx_vni >> 8, vx_vni & 0xff);
 			}
 		}
 		MKDUMPSTR(print_buf, buf_size, cur_len,
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 0a05cb02fa..9166c24995 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -68,6 +68,11 @@ New Features
   usecases. Configuration happens via standard rawdev enq/deq operations. See
   the :doc:`../rawdevs/cnxk_bphy` rawdev guide for more details on this driver.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on the VXLAN header's last 8-bits reserved field.
 
 Removed Items
 -------------
@@ -152,3 +157,4 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 33857acf54..4ca3103067 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
 - ``vxlan``: match VXLAN header.
 
   - ``vni {unsigned}``: VXLAN identifier.
+  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
 
 - ``e_tag``: match IEEE 802.1BR E-Tag header.
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching
  2021-07-07  8:09     ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
  2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
@ 2021-07-13  8:33       ` Andrew Rybchenko
  2021-07-13  9:55         ` [dpdk-dev] [PATCH v5 " Rongwei Liu
  2021-07-13  9:56         ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2 siblings, 2 replies; 34+ messages in thread
From: Andrew Rybchenko @ 2021-07-13  8:33 UTC (permalink / raw)
  To: Rongwei Liu, matan, viacheslavo, orika, thomas; +Cc: dev, rasland

On 7/7/21 11:09 AM, Rongwei Liu wrote:
> This update adds support for the VXLAN the last 8-bits reserved
> field matching when creating sw steering rules.
> 
> Add a new testpmd pattern field 'last_rsvd' that supports the last
> 8-bits matching of VXLAN header.

The version fails to apply, please, send rebased version.

Thanks,
Andrew.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v5 0/2] support VXLAN header the last 8-bits matching
  2021-07-13  8:33       ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
@ 2021-07-13  9:55         ` Rongwei Liu
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
  2021-07-13  9:56         ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  1 sibling, 2 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13  9:55 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas; +Cc: dev, rasland

This update adds support for matching the VXLAN header's last
8-bits reserved field when creating SW steering rules.

Add a new testpmd pattern field 'last_rsvd' that supports the last
8-bits matching of VXLAN header.

Rongwei Liu (2):
  net/mlx5: add VXLAN header the last 8-bits matching support
  app/testpmd: support VXLAN the last 8-bits field matching

 app/test-pmd/cmdline_flow.c                 |   9 ++
 app/test-pmd/util.c                         |   5 +-
 doc/guides/nics/mlx5.rst                    |  11 +-
 doc/guides/rel_notes/release_21_08.rst      |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
 drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
 drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
 drivers/net/mlx5/mlx5.h                     |   2 +
 drivers/net/mlx5/mlx5_flow.c                |  26 +++-
 drivers/net/mlx5/mlx5_flow.h                |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
 15 files changed, 293 insertions(+), 67 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support
  2021-07-13  9:55         ` [dpdk-dev] [PATCH v5 " Rongwei Liu
@ 2021-07-13  9:55           ` Rongwei Liu
  2021-07-13 10:27             ` Raslan Darawsheh
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
  1 sibling, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13  9:55 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Shahaf Shuler; +Cc: dev, rasland

This update adds support for matching the last 8 bits of the
VXLAN header when creating steering rules. At the PCIe probe
stage, we create a dummy VXLAN matcher using misc5 to check
the rdma-core library's capability.

The logic is: group 0 depends on HCA_CAP to enable misc or misc5
for VXLAN matching, while non-zero groups depend on the rdma-core
capability.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst             |  11 +-
 drivers/common/mlx5/mlx5_devx_cmds.c |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h |   6 +
 drivers/common/mlx5/mlx5_prm.h       |  41 +++++--
 drivers/net/mlx5/linux/mlx5_os.c     |  77 +++++++++++++
 drivers/net/mlx5/mlx5.h              |   2 +
 drivers/net/mlx5/mlx5_flow.c         |  26 ++++-
 drivers/net/mlx5/mlx5_flow.h         |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c      | 160 +++++++++++++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c   |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c  |   6 +-
 11 files changed, 274 insertions(+), 65 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 8253b96e92..5842991d5d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -195,8 +195,15 @@ Limitations
   size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
   inline settings) to 58.
 
-- Flows with a VXLAN Network Identifier equal (or ends to be equal)
-  to 0 are not supported.
+- Match on VXLAN supports the following fields only:
+
+     - VNI
+     - Last reserved 8-bits
+
+  Last reserved 8-bits matching is only supported when using DV flow
+  engine (``dv_flow_en`` = 1).
+  Group zero's behavior may differ, depending on the FW version.
+  A matching value equal to 0 (value & mask) is not supported.
 
 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
 
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index f5914bce32..63ae95832d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->log_max_ft_sampler_num = MLX5_GET
 		(flow_table_nic_cap, hcattr,
 		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->flow.tunnel_header_0_1 = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 ft_field_support_2_nic_receive.tunnel_header_0_1);
 	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f8a17b886b..124f43e852 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
 	uint64_t doorbell_bar_offset;
 };
 
+struct mlx5_hca_flow_attr {
+	uint32_t tunnel_header_0_1;
+	uint32_t tunnel_header_2_3;
+};
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4
 
@@ -155,6 +160,7 @@ struct mlx5_hca_attr {
 	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
+	struct mlx5_hca_flow_attr flow;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 26761f5bd3..7950070976 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 reserved_at_100[0x100];
 };
 
+struct mlx5_ifc_fte_match_set_misc5_bits {
+	u8 macsec_tag_0[0x20];
+	u8 macsec_tag_1[0x20];
+	u8 macsec_tag_2[0x20];
+	u8 macsec_tag_3[0x20];
+	u8 tunnel_header_0[0x20];
+	u8 tunnel_header_1[0x20];
+	u8 tunnel_header_2[0x20];
+	u8 tunnel_header_3[0x20];
+	u8 reserved[0x100];
+};
+
 /* Flow matcher. */
 struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
 	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
 	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
+	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
 /*
  * Add reserved bit to match the struct size with the size defined in PRM.
  * This extension is not required in Linux.
  */
 #ifndef HAVE_INFINIBAND_VERBS_H
-	u8 reserved_0[0x400];
+	u8 reserved_0[0x200];
 #endif
 };
 
@@ -1007,6 +1020,7 @@ enum {
 	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
 };
 
 enum {
@@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
  * Table 1872 - Flow Table Fields Supported 2 Format
  */
 struct mlx5_ifc_ft_fields_support_2_bits {
-	u8 reserved_at_0[0x14];
+	u8 reserved_at_0[0xf];
+	u8 tunnel_header_2_3[0x1];
+	u8 tunnel_header_0_1[0x1];
+	u8 macsec_syndrome[0x1];
+	u8 macsec_tag[0x1];
+	u8 outer_lrh_sl[0x1];
 	u8 inner_ipv4_ihl[0x1];
 	u8 outer_ipv4_ihl[0x1];
 	u8 psp_syndrome[0x1];
@@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
 	u8 inner_l4_checksum_ok[0x1];
 	u8 outer_ipv4_checksum_ok[0x1];
 	u8 outer_l4_checksum_ok[0x1];
+	u8 reserved_at_20[0x60];
 };
 
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8 reserved_at_0[0x200];
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_nic_receive;
+		flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_rdma;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_sniffer;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit_rdma;
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_unused[5];
-	u8 reserved_at_1C0[0x200];
-	u8 header_modify_nic_receive[0x400];
+		flow_table_properties_nic_transmit_sniffer;
+	u8 reserved_at_e00[0x600];
 	struct mlx5_ifc_ft_fields_support_2_bits
-	       ft_field_support_2_nic_receive;
+		ft_field_support_2_nic_receive;
 };
 
 struct mlx5_ifc_cmd_hca_cap_2_bits {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index be22d9cbd2..55bb71c170 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
 	return ret;
 }
 
+/**
+ * Detect whether the misc5 matcher parameter is supported.
+ *
+ * @param[in] priv
+ *   Device private data pointer
+ */
+#ifdef HAVE_MLX5DV_DR
+static void
+__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
+	 * Case: IPv4--->UDP--->VxLAN--->vni
+	 */
+	void *tbl;
+	struct mlx5_flow_dv_match_params matcher_mask;
+	void *match_m;
+	void *matcher;
+	void *headers_m;
+	void *misc5_m;
+	uint32_t *tunnel_header_m;
+	struct mlx5dv_flow_matcher_attr dv_attr;
+
+	memset(&matcher_mask, 0, sizeof(matcher_mask));
+	matcher_mask.size = sizeof(matcher_mask.buf);
+	match_m = matcher_mask.buf;
+	headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers);
+	misc5_m = MLX5_ADDR_OF(fte_match_param,
+			       match_m, misc_parameters_5);
+	tunnel_header_m = (uint32_t *)
+				MLX5_ADDR_OF(fte_match_set_misc5,
+				misc5_m, tunnel_header_1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
+	*tunnel_header_m = 0xffffff;
+
+	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
+	if (!tbl) {
+		DRV_LOG(INFO, "No SW steering support");
+		return;
+	}
+	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
+	dv_attr.match_mask = (void *)&matcher_mask,
+	dv_attr.match_criteria_enable =
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) |
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
+	dv_attr.priority = 3;
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	void *misc2_m;
+	if (priv->config.dv_esw_en) {
+		/* FDB enabled reg_c_0 */
+		dv_attr.match_criteria_enable |=
+				(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
+		misc2_m = MLX5_ADDR_OF(fte_match_param,
+				       match_m, misc_parameters_2);
+		MLX5_SET(fte_match_set_misc2, misc2_m,
+			 metadata_reg_c_0, 0xffff);
+	}
+#endif
+	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
+						    &dv_attr, tbl);
+	if (matcher) {
+		priv->sh->misc5_cap = 1;
+		mlx5_glue->dv_destroy_flow_matcher(matcher);
+	}
+	mlx5_glue->dr_destroy_flow_tbl(tbl);
+#else
+	RTE_SET_USED(priv);
+#endif
+}
+#endif
+
 /**
  * Verbs callback to free a memory.
  *
@@ -364,6 +437,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 		if (sh->fdb_domain)
 			mlx5_glue->dr_allow_duplicate_rules(sh->fdb_domain, 0);
 	}
+
+	__mlx5_discovery_misc5_cap(priv);
 #endif /* HAVE_MLX5DV_DR */
 	sh->default_miss_action =
 			mlx5_glue->dr_create_flow_action_default_miss();
@@ -1313,6 +1388,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				goto error;
 			}
 		}
+		if (config->hca_attr.flow.tunnel_header_0_1)
+			sh->tunnel_header_0_1 = 1;
 #endif
 #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
 		if (config->hca_attr.flow_hit_aso &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f864c1d701..75a0e04ea0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1094,6 +1094,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
 	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
 	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
+	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
+	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	void *ctx; /* Verbs/DV/DevX context. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2feddb0254..f3f5752dbe 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2410,12 +2410,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
 /**
  * Validate VXLAN item.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
  * @param[in] item
  *   Item specification.
  * @param[in] item_flags
  *   Bit-fields that holds the items detected until now.
- * @param[in] target_protocol
- *   The next protocol in the previous item.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -2423,24 +2425,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+			      const struct rte_flow_item *item,
 			      uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_item_vxlan *spec = item->spec;
 	const struct rte_flow_item_vxlan *mask = item->mask;
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	union vni {
 		uint32_t vlan_id;
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
-
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
+	const struct rte_flow_item_vxlan *valid_mask;
 
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple tunnel layers not"
 					  " supported");
+	valid_mask = &rte_flow_item_vxlan_mask;
 	/*
 	 * Verify only UDPv4 is present as defined in
 	 * https://tools.ietf.org/html/rfc7348
@@ -2451,9 +2461,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
+	/* FDB domain & NIC domain non-zero group */
+	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
+		valid_mask = &nic_mask;
+	/* Group zero in NIC domain */
+	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
+		valid_mask = &nic_mask;
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&rte_flow_item_vxlan_mask,
+		 (const uint8_t *)valid_mask,
 		 sizeof(struct rte_flow_item_vxlan),
 		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7d97c5880f..66a38c3630 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1533,8 +1533,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 				 uint64_t item_flags,
 				 struct rte_eth_dev *dev,
 				 struct rte_flow_error *error);
-int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+				  const struct rte_flow_item *item,
 				  uint64_t item_flags,
+				  const struct rte_flow_attr *attr,
 				  struct rte_flow_error *error);
 int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 				      uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f4c0eeb5b..6c3715a5e8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6930,7 +6930,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_GRE_KEY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
@@ -7892,15 +7893,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
 	memset(dev_flow, 0, sizeof(*dev_flow));
 	dev_flow->handle = dev_handle;
 	dev_flow->handle_idx = handle_idx;
-	/*
-	 * In some old rdma-core releases, before continuing, a check of the
-	 * length of matching parameter will be done at first. It needs to use
-	 * the length without misc4 param. If the flow has misc4 support, then
-	 * the length needs to be adjusted accordingly. Each param member is
-	 * aligned with a 64B boundary naturally.
-	 */
-	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
-				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
 	dev_flow->ingress = attr->ingress;
 	dev_flow->dv.transfer = attr->transfer;
 	return dev_flow;
@@ -8681,6 +8674,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 /**
  * Add VXLAN item to matcher and to the value.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[in, out] matcher
  *   Flow matcher.
  * @param[in, out] key
@@ -8691,7 +8688,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
  *   Item is inner pattern.
  */
 static void
-flow_dv_translate_item_vxlan(void *matcher, void *key,
+flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     void *matcher, void *key,
 			     const struct rte_flow_item *item,
 			     int inner)
 {
@@ -8699,13 +8698,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
 	void *headers_m;
 	void *headers_v;
-	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	char *vni_m;
-	char *vni_v;
+	void *misc5_m;
+	void *misc5_v;
+	uint32_t *tunnel_header_v;
+	uint32_t *tunnel_header_m;
 	uint16_t dport;
-	int size;
-	int i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8724,14 +8726,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	}
 	if (!vxlan_v)
 		return;
-	if (!vxlan_m)
-		vxlan_m = &rte_flow_item_vxlan_mask;
-	size = sizeof(vxlan_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-	memcpy(vni_m, vxlan_m->vni, size);
-	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+	if (!vxlan_m) {
+		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
+		    (attr->group && !priv->sh->misc5_cap))
+			vxlan_m = &rte_flow_item_vxlan_mask;
+		else
+			vxlan_m = &nic_mask;
+	}
+	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
+	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
+		void *misc_m;
+		void *misc_v;
+		char *vni_m;
+		char *vni_v;
+		int size;
+		int i;
+		misc_m = MLX5_ADDR_OF(fte_match_param,
+				      matcher, misc_parameters);
+		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+		size = sizeof(vxlan_m->vni);
+		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+		memcpy(vni_m, vxlan_m->vni, size);
+		for (i = 0; i < size; ++i)
+			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+		return;
+	}
+	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
+	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
+	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_v,
+						   tunnel_header_1);
+	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_m,
+						   tunnel_header_1);
+	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	if (*tunnel_header_v)
+		*tunnel_header_m = vxlan_m->vni[0] |
+			vxlan_m->vni[1] << 8 |
+			vxlan_m->vni[2] << 16;
+	else
+		*tunnel_header_m = 0x0;
+	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
+		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
 }
 
 /**
@@ -9892,9 +9932,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
 	match_criteria_enable |=
 		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
 		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
 	return match_criteria_enable;
 }
 
+static void
+__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
+{
+	/*
+	 * Check flow matching criteria first, subtract misc5/4 length if flow
+	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
+	 * misc5/4 are not supported, and matcher creation failure is expected
+	 * w/o subtration. If misc5 is provided, misc4 must be counted in since
+	 * misc5 is right after misc4.
+	 */
+	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
+		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
+			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
+		if (!(match_criteria & (1 <<
+			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
+			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+		}
+	}
+}
+
 struct mlx5_hlist_entry *
 flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
 {
@@ -10161,6 +10224,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
 	*cache = *ref;
 	dv_attr.match_criteria_enable =
 		flow_dv_matcher_enable(cache->mask.buf);
+	__flow_dv_adjust_buf_size(&ref->mask.size,
+				  dv_attr.match_criteria_enable);
 	dv_attr.priority = ref->priority;
 	if (tbl->is_egress)
 		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
@@ -10210,7 +10275,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
 		.error = error,
 		.data = ref,
 	};
-
 	/**
 	 * tunnel offload API requires this registration for cases when
 	 * tunnel match rule was inserted before tunnel set rule.
@@ -12069,8 +12133,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	uint64_t action_flags = 0;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	int actions_n = 0;
@@ -12877,7 +12940,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			flow_dv_translate_item_vxlan(match_mask, match_value,
+			flow_dv_translate_item_vxlan(dev, attr,
+						     match_mask, match_value,
 						     items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_VXLAN;
@@ -12975,10 +13039,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
 						NULL,
 						"cannot create eCPRI parser");
 			}
-			/* Adjust the length matcher and device flow value. */
-			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
-			dev_flow->dv.value.size =
-					MLX5_ST_SZ_BYTES(fte_match_param);
 			flow_dv_translate_item_ecpri(dev, match_mask,
 						     match_value, items);
 			/* No other protocol should follow eCPRI layer. */
@@ -13288,6 +13348,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int idx;
 	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
+	uint8_t misc_mask;
 
 	MLX5_ASSERT(wks);
 	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
@@ -13358,6 +13419,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 			}
 			dv->actions[n++] = priv->sh->default_miss_action;
 		}
+		misc_mask = flow_dv_matcher_enable(dv->value.buf);
+		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
 		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
 					       (void *)&dv->value, n,
 					       dv->actions, &dh->drv_flow);
@@ -15476,14 +15539,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher = {
-		.size = sizeof(matcher.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher.buf),
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
+	uint8_t misc_mask;
 
 	if (match_src_port && (priv->representor || priv->master)) {
 		if (flow_dv_translate_item_port_id(dev, matcher.buf,
@@ -15497,6 +15559,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 				(enum modify_reg)color_reg_c_idx,
 				rte_col_2_mlx5_col(color),
 				UINT32_MAX);
+	misc_mask = flow_dv_matcher_enable(value.buf);
+	__flow_dv_adjust_buf_size(&value.size, misc_mask);
 	ret = mlx5_flow_os_create_flow(matcher_object,
 			(void *)&value, actions_n, actions, rule);
 	if (ret) {
@@ -15521,14 +15585,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 		.tbl = tbl_rsc,
 	};
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_cb_ctx ctx = {
 		.error = error,
@@ -16002,12 +16064,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	int domain, ret, i;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher_para = {
-		.size = sizeof(matcher_para.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher_para.buf),
 	};
 	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
 						     0, &error);
@@ -16016,8 +16076,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	struct mlx5_cache_entry *entry;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	struct mlx5_flow_dv_matcher *drop_matcher;
@@ -16025,6 +16084,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 		.error = &error,
 		.data = &matcher,
 	};
+	uint8_t misc_mask;
 
 	if (!priv->mtr_en || mtr_id_reg_c < 0) {
 		rte_errno = ENOTSUP;
@@ -16074,6 +16134,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 			actions[i++] = priv->sh->dr_drop_action;
 			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
 				(enum modify_reg)mtr_id_reg_c, 0, 0);
+			misc_mask = flow_dv_matcher_enable(value.buf);
+			__flow_dv_adjust_buf_size(&value.size, misc_mask);
 			ret = mlx5_flow_os_create_flow
 				(mtrmng->def_matcher[domain]->matcher_object,
 				(void *)&value, i, actions,
@@ -16117,6 +16179,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 					fm->drop_cnt, NULL);
 		actions[i++] = cnt->action;
 		actions[i++] = priv->sh->dr_drop_action;
+		misc_mask = flow_dv_matcher_enable(value.buf);
+		__flow_dv_adjust_buf_size(&value.size, misc_mask);
 		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
 					       (void *)&value, i, actions,
 					       &fm->drop_rule[domain]);
@@ -16637,10 +16701,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
 	if (ret)
 		goto err;
 	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
+	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &matcher);
 	if (ret)
 		goto err;
+	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
 				       actions, &flow);
 err:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9673310a..7b3d0b320d 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 1fcd24c002..383f003966 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 		/**< Matcher value. This value is used as the mask or a key. */
 	} matcher_mask = {
 				.size = sizeof(matcher_mask.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			},
 	  matcher_value = {
 				.size = sizeof(matcher_value.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			};
 	struct mlx5dv_flow_matcher_attr dv_attr = {
 		.type = IBV_FLOW_ATTR_NORMAL,
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching
  2021-07-13  9:55         ` [dpdk-dev] [PATCH v5 " Rongwei Liu
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
@ 2021-07-13  9:55           ` Rongwei Liu
  2021-07-13 10:02             ` Raslan Darawsheh
  1 sibling, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13  9:55 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Xiaoyun Li; +Cc: dev, rasland

Add a new testpmd pattern field 'last_rsvd' that supports the
last 8-bits matching of VXLAN header.

Examples for the "last_rsvd" pattern field are shown below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow matches only packets whose last 8 bits equal exactly 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow matches any packet whose MSB of the last 8 bits is 1.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 9 +++++++++
 app/test-pmd/util.c                         | 5 +++--
 doc/guides/rel_notes/release_21_08.rst      | 6 ++++++
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 +
 4 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8fc0e1469d..3d5ab806c3 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -205,6 +205,7 @@ enum index {
 	ITEM_SCTP_CKSUM,
 	ITEM_VXLAN,
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_E_TAG,
 	ITEM_E_TAG_GRP_ECID_B,
 	ITEM_NVGRE,
@@ -1127,6 +1128,7 @@ static const enum index item_sctp[] = {
 
 static const enum index item_vxlan[] = {
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2839,6 +2841,13 @@ static const struct token token_list[] = {
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
 	},
+	[ITEM_VXLAN_LAST_RSVD] = {
+		.name = "last_rsvd",
+		.help = "VXLAN last reserved bits",
+		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
+					     rsvd1)),
+	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
 		.help = "match E-Tag header",
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index a9e431a8b2..59626518d5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  " - VXLAN packet: packet type =%d, "
-					  "Destination UDP port =%d, VNI = %d",
-					  packet_type, udp_port, vx_vni >> 8);
+					  "Destination UDP port =%d, VNI = %d, "
+					  "last_rsvd = %d", packet_type,
+					  udp_port, vx_vni >> 8, vx_vni & 0xff);
 			}
 		}
 		MKDUMPSTR(print_buf, buf_size, cur_len,
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6a902ef9ac..3fb17bbf77 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -117,6 +117,11 @@ New Features
   The experimental PMD power management API now supports managing
   multiple Ethernet Rx queues per lcore.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on vxlan header last 8-bits reserved field.
 
 Removed Items
 -------------
@@ -208,3 +213,4 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 33857acf54..4ca3103067 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
 - ``vxlan``: match VXLAN header.
 
   - ``vni {unsigned}``: VXLAN identifier.
+  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
 
 - ``e_tag``: match IEEE 802.1BR E-Tag header.
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching
  2021-07-13  8:33       ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
  2021-07-13  9:55         ` [dpdk-dev] [PATCH v5 " Rongwei Liu
@ 2021-07-13  9:56         ` Rongwei Liu
  1 sibling, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13  9:56 UTC (permalink / raw)
  To: Andrew Rybchenko, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon
  Cc: dev, Raslan Darawsheh

Hi Andrew:
	Thanks for the review.
	V5 has been sent after rebasing.

BR
Rongwei

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Tuesday, July 13, 2021 4:33 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits
> matching
> 
> External email: Use caution opening links or attachments
> 
> 
> On 7/7/21 11:09 AM, Rongwei Liu wrote:
> > This update adds support for the VXLAN the last 8-bits reserved field
> > matching when creating sw steering rules.
> >
> > Add a new testpmd pattern field 'last_rsvd' that supports the last
> > 8-bits matching of VXLAN header.
> 
> The version fails to apply, please, send rebased version.
> 
> Thanks,
> Andrew.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
@ 2021-07-13 10:02             ` Raslan Darawsheh
  2021-07-13 10:06               ` Andrew Rybchenko
  0 siblings, 1 reply; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 10:02 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li
  Cc: dev

Hi,
[...]
> diff --git a/doc/guides/rel_notes/release_21_08.rst
> b/doc/guides/rel_notes/release_21_08.rst
> index 6a902ef9ac..3fb17bbf77 100644
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -117,6 +117,11 @@ New Features
>    The experimental PMD power management API now supports managing
>    multiple Ethernet Rx queues per lcore.
> 
> +* **Updated Mellanox mlx5 driver.**
> +
> +  Updated the Mellanox mlx5 driver with new features and improvements,
> including:
> +
> +  * Added support for matching on vxlan header last 8-bits reserved field.
> 
This change should be part of the first patch not related to testpmd part.

Also, how about something like this:
Added support for matching on the reserved field of VXLAN header (last 8-bits).

[..]
Kindest regards
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching
  2021-07-13 10:02             ` Raslan Darawsheh
@ 2021-07-13 10:06               ` Andrew Rybchenko
  0 siblings, 0 replies; 34+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 10:06 UTC (permalink / raw)
  To: Raslan Darawsheh, Rongwei Liu, Matan Azrad, Slava Ovsiienko,
	Ori Kam, NBU-Contact-Thomas Monjalon, Xiaoyun Li
  Cc: dev

On 7/13/21 1:02 PM, Raslan Darawsheh wrote:
> Hi,
> [...]
>> diff --git a/doc/guides/rel_notes/release_21_08.rst
>> b/doc/guides/rel_notes/release_21_08.rst
>> index 6a902ef9ac..3fb17bbf77 100644
>> --- a/doc/guides/rel_notes/release_21_08.rst
>> +++ b/doc/guides/rel_notes/release_21_08.rst
>> @@ -117,6 +117,11 @@ New Features
>>    The experimental PMD power management API now supports managing
>>    multiple Ethernet Rx queues per lcore.
>>
>> +* **Updated Mellanox mlx5 driver.**
>> +
>> +  Updated the Mellanox mlx5 driver with new features and improvements,
>> including:
>> +
>> +  * Added support for matching on vxlan header last 8-bits reserved field.
>>
> This change should be part of the first patch not related to testpmd part.
> 
> Also, how about something like this:
> Added support for matching on the reserved field of VXLAN header (last 8-bits).
> 

Also it should be merged with the already existing "Updated
Mellanox mlx5 driver." section in the 21.08 release notes.

Thanks,
Andrew.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support
  2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
@ 2021-07-13 10:27             ` Raslan Darawsheh
  2021-07-13 10:50               ` [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-13 10:52               ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
  0 siblings, 2 replies; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 10:27 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi,


> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 12:55 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching
> support
Title can be improved:
How about:
"net/mlx5: support matching on reserved field of VXLAN"
> 
> This update adds support for the VXLAN header last 8-bits
> matching when creating steering rules. At the PCIe probe
> stage, we create a dummy VXLAN matcher using misc5 to check
> rdma-core library's capability.
This adds matching on the reserved field of the VXLAN header (the last 8-bits).

The capability from both rdma-core and FW is detected by creating
a dummy matcher using misc5 when the device is probed.

> 
> The logic is, group 0 depends on HCA_CAP to enable misc or misc5
> for VXLAN matching while group non zero depends on the rdma-core
> capability.
> 
For non-zero groups the capability is detected from rdma-core,
while for group zero it relies on the HCA_CAP from FW.

> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/nics/mlx5.rst             |  11 +-
>  drivers/common/mlx5/mlx5_devx_cmds.c |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.h |   6 +
>  drivers/common/mlx5/mlx5_prm.h       |  41 +++++--
>  drivers/net/mlx5/linux/mlx5_os.c     |  77 +++++++++++++
>  drivers/net/mlx5/mlx5.h              |   2 +
>  drivers/net/mlx5/mlx5_flow.c         |  26 ++++-
>  drivers/net/mlx5/mlx5_flow.h         |   4 +-
>  drivers/net/mlx5/mlx5_flow_dv.c      | 160 +++++++++++++++++++--------
>  drivers/net/mlx5/mlx5_flow_verbs.c   |   3 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_steer.c  |   6 +-
>  11 files changed, 274 insertions(+), 65 deletions(-)
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 8253b96e92..5842991d5d 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -195,8 +195,15 @@ Limitations
>    size and ``txq_inline_min`` settings and may be from 2 (worst case forced
> by maximal
>    inline settings) to 58.
> 
> -- Flows with a VXLAN Network Identifier equal (or ends to be equal)
> -  to 0 are not supported.
> +- Match on VXLAN supports the following fields only:
> +
> +     - VNI
> +     - Last reserved 8-bits
> +
> +  Last reserved 8-bits matching is only supported when using DV flow
> +  engine (``dv_flow_en`` = 1).
> +  The behavior of group zero may differ, depending on FW.
> +  Matching value equals 0 (value & mask) is not supported.
> 
>  - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with
> MPLSoGRE and MPLSoUDP.
> 
> diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c
> b/drivers/common/mlx5/mlx5_devx_cmds.c
> index f5914bce32..63ae95832d 100644
> --- a/drivers/common/mlx5/mlx5_devx_cmds.c
> +++ b/drivers/common/mlx5/mlx5_devx_cmds.c
> @@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
>  	attr->log_max_ft_sampler_num = MLX5_GET
>  		(flow_table_nic_cap, hcattr,
> 
> flow_table_properties_nic_receive.log_max_ft_sampler_num);
> +	attr->flow.tunnel_header_0_1 = MLX5_GET
> +		(flow_table_nic_cap, hcattr,
> +		 ft_field_support_2_nic_receive.tunnel_header_0_1);
>  	attr->pkt_integrity_match =
> mlx5_devx_query_pkt_integrity_match(hcattr);
>  	/* Query HCA offloads for Ethernet protocol. */
>  	memset(in, 0, sizeof(in));
> diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h
> b/drivers/common/mlx5/mlx5_devx_cmds.h
> index f8a17b886b..124f43e852 100644
> --- a/drivers/common/mlx5/mlx5_devx_cmds.h
> +++ b/drivers/common/mlx5/mlx5_devx_cmds.h
> @@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
>  	uint64_t doorbell_bar_offset;
>  };
> 
> +struct mlx5_hca_flow_attr {
> +	uint32_t tunnel_header_0_1;
> +	uint32_t tunnel_header_2_3;
> +};
> +
>  /* HCA supports this number of time periods for LRO. */
>  #define MLX5_LRO_NUM_SUPP_PERIODS 4
> 
> @@ -155,6 +160,7 @@ struct mlx5_hca_attr {
>  	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
>  	struct mlx5_hca_qos_attr qos;
>  	struct mlx5_hca_vdpa_attr vdpa;
> +	struct mlx5_hca_flow_attr flow;
>  	int log_max_qp_sz;
>  	int log_max_cq_sz;
>  	int log_max_qp;
> diff --git a/drivers/common/mlx5/mlx5_prm.h
> b/drivers/common/mlx5/mlx5_prm.h
> index 26761f5bd3..7950070976 100644
> --- a/drivers/common/mlx5/mlx5_prm.h
> +++ b/drivers/common/mlx5/mlx5_prm.h
> @@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
>  	u8 reserved_at_100[0x100];
>  };
> 
> +struct mlx5_ifc_fte_match_set_misc5_bits {
> +	u8 macsec_tag_0[0x20];
> +	u8 macsec_tag_1[0x20];
> +	u8 macsec_tag_2[0x20];
> +	u8 macsec_tag_3[0x20];
> +	u8 tunnel_header_0[0x20];
> +	u8 tunnel_header_1[0x20];
> +	u8 tunnel_header_2[0x20];
> +	u8 tunnel_header_3[0x20];
> +	u8 reserved[0x100];
> +};
> +
>  /* Flow matcher. */
>  struct mlx5_ifc_fte_match_param_bits {
>  	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
> @@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
>  	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
>  	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
>  	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
> +	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
>  /*
>   * Add reserved bit to match the struct size with the size defined in PRM.
>   * This extension is not required in Linux.
>   */
>  #ifndef HAVE_INFINIBAND_VERBS_H
> -	u8 reserved_0[0x400];
> +	u8 reserved_0[0x200];
>  #endif
>  };
> 
> @@ -1007,6 +1020,7 @@ enum {
>  	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
>  	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
>  	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
> +	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
>  };
> 
>  enum {
> @@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
>   * Table 1872 - Flow Table Fields Supported 2 Format
>   */
>  struct mlx5_ifc_ft_fields_support_2_bits {
> -	u8 reserved_at_0[0x14];
> +	u8 reserved_at_0[0xf];
> +	u8 tunnel_header_2_3[0x1];
> +	u8 tunnel_header_0_1[0x1];
> +	u8 macsec_syndrome[0x1];
> +	u8 macsec_tag[0x1];
> +	u8 outer_lrh_sl[0x1];
>  	u8 inner_ipv4_ihl[0x1];
>  	u8 outer_ipv4_ihl[0x1];
>  	u8 psp_syndrome[0x1];
> @@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
>  	u8 inner_l4_checksum_ok[0x1];
>  	u8 outer_ipv4_checksum_ok[0x1];
>  	u8 outer_l4_checksum_ok[0x1];
> +	u8 reserved_at_20[0x60];
>  };
> 
>  struct mlx5_ifc_flow_table_nic_cap_bits {
>  	u8 reserved_at_0[0x200];
>  	struct mlx5_ifc_flow_table_prop_layout_bits
> -	       flow_table_properties_nic_receive;
> +		flow_table_properties_nic_receive;
> +	struct mlx5_ifc_flow_table_prop_layout_bits
> +		flow_table_properties_nic_receive_rdma;
> +	struct mlx5_ifc_flow_table_prop_layout_bits
> +		flow_table_properties_nic_receive_sniffer;
> +	struct mlx5_ifc_flow_table_prop_layout_bits
> +		flow_table_properties_nic_transmit;
> +	struct mlx5_ifc_flow_table_prop_layout_bits
> +		flow_table_properties_nic_transmit_rdma;
>  	struct mlx5_ifc_flow_table_prop_layout_bits
> -	       flow_table_properties_unused[5];
> -	u8 reserved_at_1C0[0x200];
> -	u8 header_modify_nic_receive[0x400];
> +		flow_table_properties_nic_transmit_sniffer;
> +	u8 reserved_at_e00[0x600];
>  	struct mlx5_ifc_ft_fields_support_2_bits
> -	       ft_field_support_2_nic_receive;
> +		ft_field_support_2_nic_receive;
>  };
> 
>  struct mlx5_ifc_cmd_hca_cap_2_bits {
> diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index be22d9cbd2..55bb71c170 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
>  	return ret;
>  }
> 
> +/**
> + * Detect whether misc5 matching is supported.
> + *
> + * @param[in] priv
> + *   Device private data pointer
> + */
> +#ifdef HAVE_MLX5DV_DR
> +static void
> +__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
> +{
> +#ifdef HAVE_IBV_FLOW_DV_SUPPORT
> +	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
> +	 * Case: IPv4--->UDP--->VxLAN--->vni
> +	 */
> +	void *tbl;
> +	struct mlx5_flow_dv_match_params matcher_mask;
> +	void *match_m;
> +	void *matcher;
> +	void *headers_m;
> +	void *misc5_m;
> +	uint32_t *tunnel_header_m;
> +	struct mlx5dv_flow_matcher_attr dv_attr;
> +
> +	memset(&matcher_mask, 0, sizeof(matcher_mask));
> +	matcher_mask.size = sizeof(matcher_mask.buf);
> +	match_m = matcher_mask.buf;
> +	headers_m = MLX5_ADDR_OF(fte_match_param, match_m,
> outer_headers);
> +	misc5_m = MLX5_ADDR_OF(fte_match_param,
> +			       match_m, misc_parameters_5);
> +	tunnel_header_m = (uint32_t *)
> +				MLX5_ADDR_OF(fte_match_set_misc5,
> +				misc5_m, tunnel_header_1);
> +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
> +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
> +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
> +	*tunnel_header_m = 0xffffff;
> +
> +	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
> +	if (!tbl) {
> +		DRV_LOG(INFO, "No SW steering support");
> +		return;
> +	}
> +	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
> +	dv_attr.match_mask = (void *)&matcher_mask,
> +	dv_attr.match_criteria_enable =
> +			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT)
> |
> +			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
> +	dv_attr.priority = 3;
> +#ifdef HAVE_MLX5DV_DR_ESWITCH
> +	void *misc2_m;
> +	if (priv->config.dv_esw_en) {
> +		/* FDB enabled reg_c_0 */
> +		dv_attr.match_criteria_enable |=
> +				(1 <<
> MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
> +		misc2_m = MLX5_ADDR_OF(fte_match_param,
> +				       match_m, misc_parameters_2);
> +		MLX5_SET(fte_match_set_misc2, misc2_m,
> +			 metadata_reg_c_0, 0xffff);
> +	}
> +#endif
> +	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
> +						    &dv_attr, tbl);
> +	if (matcher) {
> +		priv->sh->misc5_cap = 1;
> +		mlx5_glue->dv_destroy_flow_matcher(matcher);
> +	}
> +	mlx5_glue->dr_destroy_flow_tbl(tbl);
> +#else
> +	RTE_SET_USED(priv);
> +#endif
> +}
> +#endif
> +
>  /**
>   * Verbs callback to free a memory.
>   *
> @@ -364,6 +437,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
>  		if (sh->fdb_domain)
>  			mlx5_glue->dr_allow_duplicate_rules(sh-
> >fdb_domain, 0);
>  	}
> +
> +	__mlx5_discovery_misc5_cap(priv);
>  #endif /* HAVE_MLX5DV_DR */
>  	sh->default_miss_action =
>  			mlx5_glue->dr_create_flow_action_default_miss();
> @@ -1313,6 +1388,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  				goto error;
>  			}
>  		}
> +		if (config->hca_attr.flow.tunnel_header_0_1)
> +			sh->tunnel_header_0_1 = 1;
>  #endif
>  #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
>  		if (config->hca_attr.flow_hit_aso &&
> diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> index f864c1d701..75a0e04ea0 100644
> --- a/drivers/net/mlx5/mlx5.h
> +++ b/drivers/net/mlx5/mlx5.h
> @@ -1094,6 +1094,8 @@ struct mlx5_dev_ctx_shared {
>  	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
>  	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
>  	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
> +	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported.
> */
> +	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
>  	uint32_t max_port; /* Maximal IB device port index. */
>  	struct mlx5_bond_info bond; /* Bonding information. */
>  	void *ctx; /* Verbs/DV/DevX context. */
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index 2feddb0254..f3f5752dbe 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -2410,12 +2410,14 @@ mlx5_flow_validate_item_tcp(const struct
> rte_flow_item *item,
>  /**
>   * Validate VXLAN item.
>   *
> + * @param[in] dev
> + *   Pointer to the Ethernet device structure.
>   * @param[in] item
>   *   Item specification.
>   * @param[in] item_flags
>   *   Bit-fields that holds the items detected until now.
> - * @param[in] target_protocol
> - *   The next protocol in the previous item.
> + * @param[in] attr
> + *   Flow rule attributes.
>   * @param[out] error
>   *   Pointer to error structure.
>   *
> @@ -2423,24 +2425,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
>   *   0 on success, a negative errno value otherwise and rte_errno is set.
>   */
>  int
> -mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
> +mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
> +			      const struct rte_flow_item *item,
>  			      uint64_t item_flags,
> +			      const struct rte_flow_attr *attr,
>  			      struct rte_flow_error *error)
>  {
>  	const struct rte_flow_item_vxlan *spec = item->spec;
>  	const struct rte_flow_item_vxlan *mask = item->mask;
>  	int ret;
> +	struct mlx5_priv *priv = dev->data->dev_private;
>  	union vni {
>  		uint32_t vlan_id;
>  		uint8_t vni[4];
>  	} id = { .vlan_id = 0, };
> -
> +	const struct rte_flow_item_vxlan nic_mask = {
> +		.vni = "\xff\xff\xff",
> +		.rsvd1 = 0xff,
> +	};
> +	const struct rte_flow_item_vxlan *valid_mask;
> 
>  	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
>  		return rte_flow_error_set(error, ENOTSUP,
>  					  RTE_FLOW_ERROR_TYPE_ITEM, item,
>  					  "multiple tunnel layers not"
>  					  " supported");
> +	valid_mask = &rte_flow_item_vxlan_mask;
>  	/*
>  	 * Verify only UDPv4 is present as defined in
>  	 * https://tools.ietf.org/html/rfc7348
> @@ -2451,9 +2461,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
>  					  "no outer UDP layer found");
>  	if (!mask)
>  		mask = &rte_flow_item_vxlan_mask;
> +	/* FDB domain & NIC domain non-zero group */
> +	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
> +		valid_mask = &nic_mask;
> +	/* Group zero in NIC domain */
> +	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
> +		valid_mask = &nic_mask;
>  	ret = mlx5_flow_item_acceptable
>  		(item, (const uint8_t *)mask,
> -		 (const uint8_t *)&rte_flow_item_vxlan_mask,
> +		 (const uint8_t *)valid_mask,
>  		 sizeof(struct rte_flow_item_vxlan),
>  		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
>  	if (ret < 0)
> diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
> index 7d97c5880f..66a38c3630 100644
> --- a/drivers/net/mlx5/mlx5_flow.h
> +++ b/drivers/net/mlx5/mlx5_flow.h
> @@ -1533,8 +1533,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
>  				 uint64_t item_flags,
>  				 struct rte_eth_dev *dev,
>  				 struct rte_flow_error *error);
> -int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
> +int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
> +				  const struct rte_flow_item *item,
>  				  uint64_t item_flags,
> +				  const struct rte_flow_attr *attr,
>  				  struct rte_flow_error *error);
>  int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
>  				      uint64_t item_flags,
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 2f4c0eeb5b..6c3715a5e8 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -6930,7 +6930,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
>  			last_item = MLX5_FLOW_LAYER_GRE_KEY;
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> -			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
> +			ret = mlx5_flow_validate_item_vxlan(dev, items,
> +							    item_flags, attr,
>  							    error);
>  			if (ret < 0)
>  				return ret;
> @@ -7892,15 +7893,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
>  	memset(dev_flow, 0, sizeof(*dev_flow));
>  	dev_flow->handle = dev_handle;
>  	dev_flow->handle_idx = handle_idx;
> -	/*
> -	 * In some old rdma-core releases, before continuing, a check of the
> -	 * length of matching parameter will be done at first. It needs to use
> -	 * the length without misc4 param. If the flow has misc4 support, then
> -	 * the length needs to be adjusted accordingly. Each param member is
> -	 * aligned with a 64B boundary naturally.
> -	 */
> -	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
> -				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
> +	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
>  	dev_flow->ingress = attr->ingress;
>  	dev_flow->dv.transfer = attr->transfer;
>  	return dev_flow;
> @@ -8681,6 +8674,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
>  /**
>   * Add VXLAN item to matcher and to the value.
>   *
> + * @param[in] dev
> + *   Pointer to the Ethernet device structure.
> + * @param[in] attr
> + *   Flow rule attributes.
>   * @param[in, out] matcher
>   *   Flow matcher.
>   * @param[in, out] key
> @@ -8691,7 +8688,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
>   *   Item is inner pattern.
>   */
>  static void
> -flow_dv_translate_item_vxlan(void *matcher, void *key,
> +flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
> +			     const struct rte_flow_attr *attr,
> +			     void *matcher, void *key,
>  			     const struct rte_flow_item *item,
>  			     int inner)
>  {
> @@ -8699,13 +8698,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
>  	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
>  	void *headers_m;
>  	void *headers_v;
> -	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
> -	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
> -	char *vni_m;
> -	char *vni_v;
> +	void *misc5_m;
> +	void *misc5_v;
> +	uint32_t *tunnel_header_v;
> +	uint32_t *tunnel_header_m;
>  	uint16_t dport;
> -	int size;
> -	int i;
> +	struct mlx5_priv *priv = dev->data->dev_private;
> +	const struct rte_flow_item_vxlan nic_mask = {
> +		.vni = "\xff\xff\xff",
> +		.rsvd1 = 0xff,
> +	};
> 
>  	if (inner) {
>  		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
> @@ -8724,14 +8726,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
>  	}
>  	if (!vxlan_v)
>  		return;
> -	if (!vxlan_m)
> -		vxlan_m = &rte_flow_item_vxlan_mask;
> -	size = sizeof(vxlan_m->vni);
> -	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
> -	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
> -	memcpy(vni_m, vxlan_m->vni, size);
> -	for (i = 0; i < size; ++i)
> -		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
> +	if (!vxlan_m) {
> +		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
> +		    (attr->group && !priv->sh->misc5_cap))
> +			vxlan_m = &rte_flow_item_vxlan_mask;
> +		else
> +			vxlan_m = &nic_mask;
> +	}
> +	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
> +	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
> +		void *misc_m;
> +		void *misc_v;
> +		char *vni_m;
> +		char *vni_v;
> +		int size;
> +		int i;
> +		misc_m = MLX5_ADDR_OF(fte_match_param,
> +				      matcher, misc_parameters);
> +		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
> +		size = sizeof(vxlan_m->vni);
> +		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
> +		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
> +		memcpy(vni_m, vxlan_m->vni, size);
> +		for (i = 0; i < size; ++i)
> +			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
> +		return;
> +	}
> +	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
> +	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
> +	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
> +						   misc5_v,
> +						   tunnel_header_1);
> +	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
> +						   misc5_m,
> +						   tunnel_header_1);
> +	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
> +			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
> +			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
> +	if (*tunnel_header_v)
> +		*tunnel_header_m = vxlan_m->vni[0] |
> +			vxlan_m->vni[1] << 8 |
> +			vxlan_m->vni[2] << 16;
> +	else
> +		*tunnel_header_m = 0x0;
> +	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
> +	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
> +		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
>  }
> 
>  /**
> @@ -9892,9 +9932,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
>  	match_criteria_enable |=
>  		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
>  		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
> +	match_criteria_enable |=
> +		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
> +		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
>  	return match_criteria_enable;
>  }
> 
> +static void
> +__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
> +{
> +	/*
> +	 * Check flow matching criteria first, subtract misc5/4 length if flow
> +	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
> +	 * misc5/4 are not supported, and matcher creation failure is expected
> +	 * w/o subtraction. If misc5 is provided, misc4 must be counted in since
> +	 * misc5 is right after misc4.
> +	 */
> +	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
> +		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
> +			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
> +		if (!(match_criteria & (1 <<
> +			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
> +			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
> +		}
> +	}
> +}
> +
>  struct mlx5_hlist_entry *
>  flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
>  {
> @@ -10161,6 +10224,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
>  	*cache = *ref;
>  	dv_attr.match_criteria_enable =
>  		flow_dv_matcher_enable(cache->mask.buf);
> +	__flow_dv_adjust_buf_size(&ref->mask.size,
> +				  dv_attr.match_criteria_enable);
>  	dv_attr.priority = ref->priority;
>  	if (tbl->is_egress)
>  		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
> @@ -10210,7 +10275,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
>  		.error = error,
>  		.data = ref,
>  	};
> -
>  	/**
>  	 * tunnel offload API requires this registration for cases when
>  	 * tunnel match rule was inserted before tunnel set rule.
> @@ -12069,8 +12133,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
>  	uint64_t action_flags = 0;
>  	struct mlx5_flow_dv_matcher matcher = {
>  		.mask = {
> -			.size = sizeof(matcher.mask.buf) -
> -				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +			.size = sizeof(matcher.mask.buf),
>  		},
>  	};
>  	int actions_n = 0;
> @@ -12877,7 +12940,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
>  			last_item = MLX5_FLOW_LAYER_GRE;
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> -			flow_dv_translate_item_vxlan(match_mask, match_value,
> +			flow_dv_translate_item_vxlan(dev, attr,
> +						     match_mask, match_value,
> +						     items, tunnel);
>  			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
>  			last_item = MLX5_FLOW_LAYER_VXLAN;
> @@ -12975,10 +13039,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
>  						NULL,
>  						"cannot create eCPRI parser");
>  			}
> -			/* Adjust the length matcher and device flow value. */
> -			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
> -			dev_flow->dv.value.size =
> -					MLX5_ST_SZ_BYTES(fte_match_param);
>  			flow_dv_translate_item_ecpri(dev, match_mask,
>  						     match_value, items);
>  			/* No other protocol should follow eCPRI layer. */
> @@ -13288,6 +13348,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
>  	int idx;
>  	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
>  	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
> +	uint8_t misc_mask;
> 
>  	MLX5_ASSERT(wks);
>  	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
> @@ -13358,6 +13419,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
>  			}
>  			dv->actions[n++] = priv->sh->default_miss_action;
>  		}
> +		misc_mask = flow_dv_matcher_enable(dv->value.buf);
> +		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
>  		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
>  					       (void *)&dv->value, n,
>  					       dv->actions, &dh->drv_flow);
> @@ -15476,14 +15539,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
>  {
>  	int ret;
>  	struct mlx5_flow_dv_match_params value = {
> -		.size = sizeof(value.buf) -
> -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +		.size = sizeof(value.buf),
>  	};
>  	struct mlx5_flow_dv_match_params matcher = {
> -		.size = sizeof(matcher.buf) -
> -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +		.size = sizeof(matcher.buf),
>  	};
>  	struct mlx5_priv *priv = dev->data->dev_private;
> +	uint8_t misc_mask;
> 
>  	if (match_src_port && (priv->representor || priv->master)) {
>  		if (flow_dv_translate_item_port_id(dev, matcher.buf,
> @@ -15497,6 +15559,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
>  				(enum modify_reg)color_reg_c_idx,
>  				rte_col_2_mlx5_col(color),
>  				UINT32_MAX);
> +	misc_mask = flow_dv_matcher_enable(value.buf);
> +	__flow_dv_adjust_buf_size(&value.size, misc_mask);
>  	ret = mlx5_flow_os_create_flow(matcher_object,
>  			(void *)&value, actions_n, actions, rule);
>  	if (ret) {
> @@ -15521,14 +15585,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
>  	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
>  	struct mlx5_flow_dv_matcher matcher = {
>  		.mask = {
> -			.size = sizeof(matcher.mask.buf) -
> -				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +			.size = sizeof(matcher.mask.buf),
>  		},
>  		.tbl = tbl_rsc,
>  	};
>  	struct mlx5_flow_dv_match_params value = {
> -		.size = sizeof(value.buf) -
> -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +		.size = sizeof(value.buf),
>  	};
>  	struct mlx5_flow_cb_ctx ctx = {
>  		.error = error,
> @@ -16002,12 +16064,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
>  	int domain, ret, i;
>  	struct mlx5_flow_counter *cnt;
>  	struct mlx5_flow_dv_match_params value = {
> -		.size = sizeof(value.buf) -
> -		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +		.size = sizeof(value.buf),
>  	};
>  	struct mlx5_flow_dv_match_params matcher_para = {
> -		.size = sizeof(matcher_para.buf) -
> -		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +		.size = sizeof(matcher_para.buf),
>  	};
>  	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
>  						     0, &error);
> @@ -16016,8 +16076,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
>  	struct mlx5_cache_entry *entry;
>  	struct mlx5_flow_dv_matcher matcher = {
>  		.mask = {
> -			.size = sizeof(matcher.mask.buf) -
> -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +			.size = sizeof(matcher.mask.buf),
>  		},
>  	};
>  	struct mlx5_flow_dv_matcher *drop_matcher;
> @@ -16025,6 +16084,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
>  		.error = &error,
>  		.data = &matcher,
>  	};
> +	uint8_t misc_mask;
> 
>  	if (!priv->mtr_en || mtr_id_reg_c < 0) {
>  		rte_errno = ENOTSUP;
> @@ -16074,6 +16134,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
>  			actions[i++] = priv->sh->dr_drop_action;
>  			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
>  				(enum modify_reg)mtr_id_reg_c, 0, 0);
> +			misc_mask = flow_dv_matcher_enable(value.buf);
> +			__flow_dv_adjust_buf_size(&value.size, misc_mask);
>  			ret = mlx5_flow_os_create_flow
>  				(mtrmng->def_matcher[domain]->matcher_object,
>  				(void *)&value, i, actions,
> @@ -16117,6 +16179,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
>  					fm->drop_cnt, NULL);
>  		actions[i++] = cnt->action;
>  		actions[i++] = priv->sh->dr_drop_action;
> +		misc_mask = flow_dv_matcher_enable(value.buf);
> +		__flow_dv_adjust_buf_size(&value.size, misc_mask);
>  		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
>  					       (void *)&value, i, actions,
>  					       &fm->drop_rule[domain]);
> @@ -16637,10 +16701,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
>  	if (ret)
>  		goto err;
>  	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
> +	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
>  	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
>  					       &matcher);
>  	if (ret)
>  		goto err;
> +	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
>  	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
>  				       actions, &flow);
>  err:
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index fe9673310a..7b3d0b320d 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
>  						MLX5_FLOW_LAYER_OUTER_L4_TCP;
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> -			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
> +			ret = mlx5_flow_validate_item_vxlan(dev, items,
> +							    item_flags, attr,
>  							    error);
>  			if (ret < 0)
>  				return ret;
> diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> index 1fcd24c002..383f003966 100644
> --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> @@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
>  		/**< Matcher value. This value is used as the mask or a key. */
>  	} matcher_mask = {
>  				.size = sizeof(matcher_mask.buf) -
> -					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
> +					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
>  			},
>  	  matcher_value = {
>  				.size = sizeof(matcher_value.buf) -
> -					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> +					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
> +					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
>  			};
>  	struct mlx5dv_flow_matcher_attr dv_attr = {
>  		.type = IBV_FLOW_ATTR_NORMAL,
> --
> 2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching
  2021-07-13 10:27             ` Raslan Darawsheh
@ 2021-07-13 10:50               ` Rongwei Liu
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching Rongwei Liu
  2021-07-13 10:52               ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
  1 sibling, 2 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 10:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas; +Cc: dev, rasland

This update adds support for matching on the VXLAN header
last 8-bits reserved field when creating SW steering rules.

Add a new testpmd pattern field 'last_rsvd' that supports the last
8-bits matching of VXLAN header.

Rongwei Liu (2):
  net/mlx5: support matching on the reserved field of VXLAN
  app/testpmd: support VXLAN header last 8-bits matching

 app/test-pmd/cmdline_flow.c                 |   9 ++
 app/test-pmd/util.c                         |   5 +-
 doc/guides/nics/mlx5.rst                    |  11 +-
 doc/guides/rel_notes/release_21_08.rst      |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
 drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
 drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
 drivers/net/mlx5/mlx5.h                     |   2 +
 drivers/net/mlx5/mlx5_flow.c                |  26 +++-
 drivers/net/mlx5/mlx5_flow.h                |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
 15 files changed, 293 insertions(+), 67 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 10:50               ` [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
@ 2021-07-13 10:50                 ` Rongwei Liu
  2021-07-13 11:40                   ` Raslan Darawsheh
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching Rongwei Liu
  1 sibling, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 10:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Shahaf Shuler; +Cc: dev, rasland

This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability from rdma-core
is detected by creating a dummy matcher using misc5
when the device is probed.

For non-zero groups and the FDB domain, the capability
is detected from rdma-core, while for NIC domain group
zero it relies on the HCA_CAP from FW.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |  11 +-
 doc/guides/rel_notes/release_21_08.rst |   6 +
 drivers/common/mlx5/mlx5_devx_cmds.c   |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +
 drivers/common/mlx5/mlx5_prm.h         |  41 +++++--
 drivers/net/mlx5/linux/mlx5_os.c       |  77 ++++++++++++
 drivers/net/mlx5/mlx5.h                |   2 +
 drivers/net/mlx5/mlx5_flow.c           |  26 +++-
 drivers/net/mlx5/mlx5_flow.h           |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c        | 160 +++++++++++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c     |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |   6 +-
 12 files changed, 280 insertions(+), 65 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 8253b96e92..5842991d5d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -195,8 +195,15 @@ Limitations
   size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
   inline settings) to 58.
 
-- Flows with a VXLAN Network Identifier equal (or ends to be equal)
-  to 0 are not supported.
+- Match on VXLAN supports the following fields only:
+
+     - VNI
+     - Last reserved 8-bits
+
+  Matching on the last reserved 8 bits is only supported when using the DV
+  flow engine (``dv_flow_en`` = 1).
+  Group zero's behavior may differ, depending on FW.
+  Matching a value of 0 (value & mask) is not supported.
 
 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
 
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6a902ef9ac..3fb17bbf77 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -117,6 +117,11 @@ New Features
   The experimental PMD power management API now supports managing
   multiple Ethernet Rx queues per lcore.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on the VXLAN header last 8-bits reserved field.
 
 Removed Items
 -------------
@@ -208,3 +213,4 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index f5914bce32..63ae95832d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->log_max_ft_sampler_num = MLX5_GET
 		(flow_table_nic_cap, hcattr,
 		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->flow.tunnel_header_0_1 = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 ft_field_support_2_nic_receive.tunnel_header_0_1);
 	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f8a17b886b..124f43e852 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
 	uint64_t doorbell_bar_offset;
 };
 
+struct mlx5_hca_flow_attr {
+	uint32_t tunnel_header_0_1;
+	uint32_t tunnel_header_2_3;
+};
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4
 
@@ -155,6 +160,7 @@ struct mlx5_hca_attr {
 	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
+	struct mlx5_hca_flow_attr flow;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 26761f5bd3..7950070976 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 reserved_at_100[0x100];
 };
 
+struct mlx5_ifc_fte_match_set_misc5_bits {
+	u8 macsec_tag_0[0x20];
+	u8 macsec_tag_1[0x20];
+	u8 macsec_tag_2[0x20];
+	u8 macsec_tag_3[0x20];
+	u8 tunnel_header_0[0x20];
+	u8 tunnel_header_1[0x20];
+	u8 tunnel_header_2[0x20];
+	u8 tunnel_header_3[0x20];
+	u8 reserved[0x100];
+};
+
 /* Flow matcher. */
 struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
 	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
 	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
+	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
 /*
  * Add reserved bit to match the struct size with the size defined in PRM.
  * This extension is not required in Linux.
  */
 #ifndef HAVE_INFINIBAND_VERBS_H
-	u8 reserved_0[0x400];
+	u8 reserved_0[0x200];
 #endif
 };
 
@@ -1007,6 +1020,7 @@ enum {
 	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
 };
 
 enum {
@@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
  * Table 1872 - Flow Table Fields Supported 2 Format
  */
 struct mlx5_ifc_ft_fields_support_2_bits {
-	u8 reserved_at_0[0x14];
+	u8 reserved_at_0[0xf];
+	u8 tunnel_header_2_3[0x1];
+	u8 tunnel_header_0_1[0x1];
+	u8 macsec_syndrome[0x1];
+	u8 macsec_tag[0x1];
+	u8 outer_lrh_sl[0x1];
 	u8 inner_ipv4_ihl[0x1];
 	u8 outer_ipv4_ihl[0x1];
 	u8 psp_syndrome[0x1];
@@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
 	u8 inner_l4_checksum_ok[0x1];
 	u8 outer_ipv4_checksum_ok[0x1];
 	u8 outer_l4_checksum_ok[0x1];
+	u8 reserved_at_20[0x60];
 };
 
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8 reserved_at_0[0x200];
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_nic_receive;
+		flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_rdma;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_sniffer;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit_rdma;
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_unused[5];
-	u8 reserved_at_1C0[0x200];
-	u8 header_modify_nic_receive[0x400];
+		flow_table_properties_nic_transmit_sniffer;
+	u8 reserved_at_e00[0x600];
 	struct mlx5_ifc_ft_fields_support_2_bits
-	       ft_field_support_2_nic_receive;
+		ft_field_support_2_nic_receive;
 };
 
 struct mlx5_ifc_cmd_hca_cap_2_bits {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index be22d9cbd2..55bb71c170 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
 	return ret;
 }
 
+/**
+ * Detect misc5 support or not
+ *
+ * @param[in] priv
+ *   Device private data pointer
+ */
+#ifdef HAVE_MLX5DV_DR
+static void
+__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
+	 * Case: IPv4--->UDP--->VxLAN--->vni
+	 */
+	void *tbl;
+	struct mlx5_flow_dv_match_params matcher_mask;
+	void *match_m;
+	void *matcher;
+	void *headers_m;
+	void *misc5_m;
+	uint32_t *tunnel_header_m;
+	struct mlx5dv_flow_matcher_attr dv_attr;
+
+	memset(&matcher_mask, 0, sizeof(matcher_mask));
+	matcher_mask.size = sizeof(matcher_mask.buf);
+	match_m = matcher_mask.buf;
+	headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers);
+	misc5_m = MLX5_ADDR_OF(fte_match_param,
+			       match_m, misc_parameters_5);
+	tunnel_header_m = (uint32_t *)
+				MLX5_ADDR_OF(fte_match_set_misc5,
+				misc5_m, tunnel_header_1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
+	*tunnel_header_m = 0xffffff;
+
+	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
+	if (!tbl) {
+		DRV_LOG(INFO, "No SW steering support");
+		return;
+	}
+	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
+	dv_attr.match_mask = (void *)&matcher_mask,
+	dv_attr.match_criteria_enable =
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) |
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
+	dv_attr.priority = 3;
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	void *misc2_m;
+	if (priv->config.dv_esw_en) {
+		/* FDB enabled reg_c_0 */
+		dv_attr.match_criteria_enable |=
+				(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
+		misc2_m = MLX5_ADDR_OF(fte_match_param,
+				       match_m, misc_parameters_2);
+		MLX5_SET(fte_match_set_misc2, misc2_m,
+			 metadata_reg_c_0, 0xffff);
+	}
+#endif
+	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
+						    &dv_attr, tbl);
+	if (matcher) {
+		priv->sh->misc5_cap = 1;
+		mlx5_glue->dv_destroy_flow_matcher(matcher);
+	}
+	mlx5_glue->dr_destroy_flow_tbl(tbl);
+#else
+	RTE_SET_USED(priv);
+#endif
+}
+#endif
+
 /**
  * Verbs callback to free a memory.
  *
@@ -364,6 +437,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 		if (sh->fdb_domain)
 			mlx5_glue->dr_allow_duplicate_rules(sh->fdb_domain, 0);
 	}
+
+	__mlx5_discovery_misc5_cap(priv);
 #endif /* HAVE_MLX5DV_DR */
 	sh->default_miss_action =
 			mlx5_glue->dr_create_flow_action_default_miss();
@@ -1313,6 +1388,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				goto error;
 			}
 		}
+		if (config->hca_attr.flow.tunnel_header_0_1)
+			sh->tunnel_header_0_1 = 1;
 #endif
 #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
 		if (config->hca_attr.flow_hit_aso &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f864c1d701..75a0e04ea0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1094,6 +1094,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
 	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
 	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
+	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
+	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	void *ctx; /* Verbs/DV/DevX context. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2feddb0254..f3f5752dbe 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2410,12 +2410,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
 /**
  * Validate VXLAN item.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
  * @param[in] item
  *   Item specification.
  * @param[in] item_flags
  *   Bit-fields that holds the items detected until now.
- * @param[in] target_protocol
- *   The next protocol in the previous item.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -2423,24 +2425,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+			      const struct rte_flow_item *item,
 			      uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_item_vxlan *spec = item->spec;
 	const struct rte_flow_item_vxlan *mask = item->mask;
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	union vni {
 		uint32_t vlan_id;
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
-
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
+	const struct rte_flow_item_vxlan *valid_mask;
 
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple tunnel layers not"
 					  " supported");
+	valid_mask = &rte_flow_item_vxlan_mask;
 	/*
 	 * Verify only UDPv4 is present as defined in
 	 * https://tools.ietf.org/html/rfc7348
@@ -2451,9 +2461,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
+	/* FDB domain & NIC domain non-zero group */
+	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
+		valid_mask = &nic_mask;
+	/* Group zero in NIC domain */
+	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
+		valid_mask = &nic_mask;
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&rte_flow_item_vxlan_mask,
+		 (const uint8_t *)valid_mask,
 		 sizeof(struct rte_flow_item_vxlan),
 		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7d97c5880f..66a38c3630 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1533,8 +1533,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 				 uint64_t item_flags,
 				 struct rte_eth_dev *dev,
 				 struct rte_flow_error *error);
-int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+				  const struct rte_flow_item *item,
 				  uint64_t item_flags,
+				  const struct rte_flow_attr *attr,
 				  struct rte_flow_error *error);
 int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 				      uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f4c0eeb5b..6c3715a5e8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6930,7 +6930,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_GRE_KEY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
@@ -7892,15 +7893,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
 	memset(dev_flow, 0, sizeof(*dev_flow));
 	dev_flow->handle = dev_handle;
 	dev_flow->handle_idx = handle_idx;
-	/*
-	 * In some old rdma-core releases, before continuing, a check of the
-	 * length of matching parameter will be done at first. It needs to use
-	 * the length without misc4 param. If the flow has misc4 support, then
-	 * the length needs to be adjusted accordingly. Each param member is
-	 * aligned with a 64B boundary naturally.
-	 */
-	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
-				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
 	dev_flow->ingress = attr->ingress;
 	dev_flow->dv.transfer = attr->transfer;
 	return dev_flow;
@@ -8681,6 +8674,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 /**
  * Add VXLAN item to matcher and to the value.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[in, out] matcher
  *   Flow matcher.
  * @param[in, out] key
@@ -8691,7 +8688,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
  *   Item is inner pattern.
  */
 static void
-flow_dv_translate_item_vxlan(void *matcher, void *key,
+flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     void *matcher, void *key,
 			     const struct rte_flow_item *item,
 			     int inner)
 {
@@ -8699,13 +8698,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
 	void *headers_m;
 	void *headers_v;
-	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	char *vni_m;
-	char *vni_v;
+	void *misc5_m;
+	void *misc5_v;
+	uint32_t *tunnel_header_v;
+	uint32_t *tunnel_header_m;
 	uint16_t dport;
-	int size;
-	int i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8724,14 +8726,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	}
 	if (!vxlan_v)
 		return;
-	if (!vxlan_m)
-		vxlan_m = &rte_flow_item_vxlan_mask;
-	size = sizeof(vxlan_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-	memcpy(vni_m, vxlan_m->vni, size);
-	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+	if (!vxlan_m) {
+		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
+		    (attr->group && !priv->sh->misc5_cap))
+			vxlan_m = &rte_flow_item_vxlan_mask;
+		else
+			vxlan_m = &nic_mask;
+	}
+	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
+	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
+		void *misc_m;
+		void *misc_v;
+		char *vni_m;
+		char *vni_v;
+		int size;
+		int i;
+		misc_m = MLX5_ADDR_OF(fte_match_param,
+				      matcher, misc_parameters);
+		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+		size = sizeof(vxlan_m->vni);
+		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+		memcpy(vni_m, vxlan_m->vni, size);
+		for (i = 0; i < size; ++i)
+			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+		return;
+	}
+	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
+	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
+	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_v,
+						   tunnel_header_1);
+	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_m,
+						   tunnel_header_1);
+	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	if (*tunnel_header_v)
+		*tunnel_header_m = vxlan_m->vni[0] |
+			vxlan_m->vni[1] << 8 |
+			vxlan_m->vni[2] << 16;
+	else
+		*tunnel_header_m = 0x0;
+	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
+		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
 }
 
 /**
@@ -9892,9 +9932,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
 	match_criteria_enable |=
 		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
 		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
 	return match_criteria_enable;
 }
 
+static void
+__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
+{
+	/*
+	 * Check flow matching criteria first, subtract misc5/4 length if flow
+	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
+	 * misc5/4 are not supported, and matcher creation failure is expected
+	 * w/o subtraction. If misc5 is provided, misc4 must be counted in since
+	 * misc5 is right after misc4.
+	 */
+	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
+		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
+			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
+		if (!(match_criteria & (1 <<
+			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
+			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+		}
+	}
+}
+
 struct mlx5_hlist_entry *
 flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
 {
@@ -10161,6 +10224,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
 	*cache = *ref;
 	dv_attr.match_criteria_enable =
 		flow_dv_matcher_enable(cache->mask.buf);
+	__flow_dv_adjust_buf_size(&ref->mask.size,
+				  dv_attr.match_criteria_enable);
 	dv_attr.priority = ref->priority;
 	if (tbl->is_egress)
 		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
@@ -10210,7 +10275,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
 		.error = error,
 		.data = ref,
 	};
-
 	/**
 	 * tunnel offload API requires this registration for cases when
 	 * tunnel match rule was inserted before tunnel set rule.
@@ -12069,8 +12133,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	uint64_t action_flags = 0;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	int actions_n = 0;
@@ -12877,7 +12940,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			flow_dv_translate_item_vxlan(match_mask, match_value,
+			flow_dv_translate_item_vxlan(dev, attr,
+						     match_mask, match_value,
 						     items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_VXLAN;
@@ -12975,10 +13039,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
 						NULL,
 						"cannot create eCPRI parser");
 			}
-			/* Adjust the length matcher and device flow value. */
-			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
-			dev_flow->dv.value.size =
-					MLX5_ST_SZ_BYTES(fte_match_param);
 			flow_dv_translate_item_ecpri(dev, match_mask,
 						     match_value, items);
 			/* No other protocol should follow eCPRI layer. */
@@ -13288,6 +13348,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int idx;
 	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
+	uint8_t misc_mask;
 
 	MLX5_ASSERT(wks);
 	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
@@ -13358,6 +13419,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 			}
 			dv->actions[n++] = priv->sh->default_miss_action;
 		}
+		misc_mask = flow_dv_matcher_enable(dv->value.buf);
+		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
 		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
 					       (void *)&dv->value, n,
 					       dv->actions, &dh->drv_flow);
@@ -15476,14 +15539,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher = {
-		.size = sizeof(matcher.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher.buf),
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
+	uint8_t misc_mask;
 
 	if (match_src_port && (priv->representor || priv->master)) {
 		if (flow_dv_translate_item_port_id(dev, matcher.buf,
@@ -15497,6 +15559,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 				(enum modify_reg)color_reg_c_idx,
 				rte_col_2_mlx5_col(color),
 				UINT32_MAX);
+	misc_mask = flow_dv_matcher_enable(value.buf);
+	__flow_dv_adjust_buf_size(&value.size, misc_mask);
 	ret = mlx5_flow_os_create_flow(matcher_object,
 			(void *)&value, actions_n, actions, rule);
 	if (ret) {
@@ -15521,14 +15585,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 		.tbl = tbl_rsc,
 	};
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_cb_ctx ctx = {
 		.error = error,
@@ -16002,12 +16064,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	int domain, ret, i;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher_para = {
-		.size = sizeof(matcher_para.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher_para.buf),
 	};
 	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
 						     0, &error);
@@ -16016,8 +16076,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	struct mlx5_cache_entry *entry;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	struct mlx5_flow_dv_matcher *drop_matcher;
@@ -16025,6 +16084,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 		.error = &error,
 		.data = &matcher,
 	};
+	uint8_t misc_mask;
 
 	if (!priv->mtr_en || mtr_id_reg_c < 0) {
 		rte_errno = ENOTSUP;
@@ -16074,6 +16134,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 			actions[i++] = priv->sh->dr_drop_action;
 			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
 				(enum modify_reg)mtr_id_reg_c, 0, 0);
+			misc_mask = flow_dv_matcher_enable(value.buf);
+			__flow_dv_adjust_buf_size(&value.size, misc_mask);
 			ret = mlx5_flow_os_create_flow
 				(mtrmng->def_matcher[domain]->matcher_object,
 				(void *)&value, i, actions,
@@ -16117,6 +16179,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 					fm->drop_cnt, NULL);
 		actions[i++] = cnt->action;
 		actions[i++] = priv->sh->dr_drop_action;
+		misc_mask = flow_dv_matcher_enable(value.buf);
+		__flow_dv_adjust_buf_size(&value.size, misc_mask);
 		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
 					       (void *)&value, i, actions,
 					       &fm->drop_rule[domain]);
@@ -16637,10 +16701,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
 	if (ret)
 		goto err;
 	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
+	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &matcher);
 	if (ret)
 		goto err;
+	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
 				       actions, &flow);
 err:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9673310a..7b3d0b320d 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 1fcd24c002..383f003966 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 		/**< Matcher value. This value is used as the mask or a key. */
 	} matcher_mask = {
 				.size = sizeof(matcher_mask.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			},
 	  matcher_value = {
 				.size = sizeof(matcher_value.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			};
 	struct mlx5dv_flow_matcher_attr dv_attr = {
 		.type = IBV_FLOW_ATTR_NORMAL,
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching
  2021-07-13 10:50               ` [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
@ 2021-07-13 10:50                 ` Rongwei Liu
  2021-07-13 11:37                   ` Raslan Darawsheh
  1 sibling, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 10:50 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Xiaoyun Li; +Cc: dev, rasland

Add a new testpmd pattern field 'last_rsvd' that supports matching
on the last 8 bits of the VXLAN header.

Examples of the "last_rsvd" pattern field usage are shown below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow matches only packets whose last 8 bits equal 0x80 exactly.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow matches only the MSB of the last 8 bits, requiring it to be 1.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 9 +++++++++
 app/test-pmd/util.c                         | 5 +++--
 doc/guides/testpmd_app_ug/testpmd_funcs.rst | 1 +
 3 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8fc0e1469d..3d5ab806c3 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -205,6 +205,7 @@ enum index {
 	ITEM_SCTP_CKSUM,
 	ITEM_VXLAN,
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_E_TAG,
 	ITEM_E_TAG_GRP_ECID_B,
 	ITEM_NVGRE,
@@ -1127,6 +1128,7 @@ static const enum index item_sctp[] = {
 
 static const enum index item_vxlan[] = {
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2839,6 +2841,13 @@ static const struct token token_list[] = {
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
 	},
+	[ITEM_VXLAN_LAST_RSVD] = {
+		.name = "last_rsvd",
+		.help = "VXLAN last reserved bits",
+		.next = NEXT(item_vxlan, NEXT_ENTRY(UNSIGNED), item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
+					     rsvd1)),
+	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
 		.help = "match E-Tag header",
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index a9e431a8b2..59626518d5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  " - VXLAN packet: packet type =%d, "
-					  "Destination UDP port =%d, VNI = %d",
-					  packet_type, udp_port, vx_vni >> 8);
+					  "Destination UDP port =%d, VNI = %d, "
+					  "last_rsvd = %d", packet_type,
+					  udp_port, vx_vni >> 8, vx_vni & 0xff);
 			}
 		}
 		MKDUMPSTR(print_buf, buf_size, cur_len,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 33857acf54..4ca3103067 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
 - ``vxlan``: match VXLAN header.
 
   - ``vni {unsigned}``: VXLAN identifier.
+  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
 
 - ``e_tag``: match IEEE 802.1BR E-Tag header.
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support
  2021-07-13 10:27             ` Raslan Darawsheh
  2021-07-13 10:50               ` [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
@ 2021-07-13 10:52               ` Rongwei Liu
  1 sibling, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 10:52 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi Raslan:
	V6 was sent to address the comment.
	BTW, the title "Added support for matching on the reserved field of VXLAN header (last 8-bits)" is too long to pass the git-log-check.
	Thanks.

BR
Rongwei

> -----Original Message-----
> From: Raslan Darawsheh <rasland@nvidia.com>
> Sent: Tuesday, July 13, 2021 6:28 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits
> matching support
> 
> Hi,
> 
> 
> > -----Original Message-----
> > From: Rongwei Liu <rongweil@nvidia.com>
> > Sent: Tuesday, July 13, 2021 12:55 PM
> > To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> > Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> > <shahafs@nvidia.com>
> > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> > Subject: [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits
> > matching support
> Title can be improved:
> How about:
> "net/mlx5: support matching on reserved field of VXLAN"
> >
> > This update adds support for the VXLAN header last 8-bits matching
> > when creating steering rules. At the PCIe probe stage, we create a
> > dummy VXLAN matcher using misc5 to check rdma-core library's
> > capability.
> This adds matching on the reserved field of the VXLAN header (the last 8-bits).
> 
> The capability from both rdma-core and FW is detected by creating a dummy
> matcher using misc5 when the device is probed.
> 
> >
> > The logic is, group 0 depends on HCA_CAP to enable misc or misc5 for
> > VXLAN matching while group non zero depends on the rdma-core
> > capability.
> >
> For non-zero groups the capability is detected from rdma-core, while for
> group zero it relies on the HCA_CAP from FW.
> 
> > Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > ---
> >  doc/guides/nics/mlx5.rst             |  11 +-
> >  drivers/common/mlx5/mlx5_devx_cmds.c |   3 +
> >  drivers/common/mlx5/mlx5_devx_cmds.h |   6 +
> >  drivers/common/mlx5/mlx5_prm.h       |  41 +++++--
> >  drivers/net/mlx5/linux/mlx5_os.c     |  77 +++++++++++++
> >  drivers/net/mlx5/mlx5.h              |   2 +
> >  drivers/net/mlx5/mlx5_flow.c         |  26 ++++-
> >  drivers/net/mlx5/mlx5_flow.h         |   4 +-
> >  drivers/net/mlx5/mlx5_flow_dv.c      | 160 +++++++++++++++++++--------
> >  drivers/net/mlx5/mlx5_flow_verbs.c   |   3 +-
> >  drivers/vdpa/mlx5/mlx5_vdpa_steer.c  |   6 +-
> >  11 files changed, 274 insertions(+), 65 deletions(-)
> >
> > diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index
> > 8253b96e92..5842991d5d 100644
> > --- a/doc/guides/nics/mlx5.rst
> > +++ b/doc/guides/nics/mlx5.rst
> > @@ -195,8 +195,15 @@ Limitations
> >    size and ``txq_inline_min`` settings and may be from 2 (worst case
> > forced by maximal
> >    inline settings) to 58.
> >
> > -- Flows with a VXLAN Network Identifier equal (or ends to be equal)
> > -  to 0 are not supported.
> > +- Match on VXLAN supports the following fields only:
> > +
> > +     - VNI
> > +     - Last reserved 8-bits
> > +
> > +  Last reserved 8-bits matching is only supported when using DV flow
> > + engine (``dv_flow_en`` = 1).
> > +  Group zero's behavior may differ which depends on FW.
> > +  Matching value equals 0 (value & mask) is not supported.
> >
> >  - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with
> > MPLSoGRE and MPLSoUDP.
> >
> > diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c
> > b/drivers/common/mlx5/mlx5_devx_cmds.c
> > index f5914bce32..63ae95832d 100644
> > --- a/drivers/common/mlx5/mlx5_devx_cmds.c
> > +++ b/drivers/common/mlx5/mlx5_devx_cmds.c
> > @@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
> >  	attr->log_max_ft_sampler_num = MLX5_GET
> >  		(flow_table_nic_cap, hcattr,
> >
> > flow_table_properties_nic_receive.log_max_ft_sampler_num);
> > +	attr->flow.tunnel_header_0_1 = MLX5_GET
> > +		(flow_table_nic_cap, hcattr,
> > +		 ft_field_support_2_nic_receive.tunnel_header_0_1);
> >  	attr->pkt_integrity_match =
> > mlx5_devx_query_pkt_integrity_match(hcattr);
> >  	/* Query HCA offloads for Ethernet protocol. */
> >  	memset(in, 0, sizeof(in));
> > diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h
> > b/drivers/common/mlx5/mlx5_devx_cmds.h
> > index f8a17b886b..124f43e852 100644
> > --- a/drivers/common/mlx5/mlx5_devx_cmds.h
> > +++ b/drivers/common/mlx5/mlx5_devx_cmds.h
> > @@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
> >  	uint64_t doorbell_bar_offset;
> >  };
> >
> > +struct mlx5_hca_flow_attr {
> > +	uint32_t tunnel_header_0_1;
> > +	uint32_t tunnel_header_2_3;
> > +};
> > +
> >  /* HCA supports this number of time periods for LRO. */  #define
> > MLX5_LRO_NUM_SUPP_PERIODS 4
> >
> > @@ -155,6 +160,7 @@ struct mlx5_hca_attr {
> >  	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
> >  	struct mlx5_hca_qos_attr qos;
> >  	struct mlx5_hca_vdpa_attr vdpa;
> > +	struct mlx5_hca_flow_attr flow;
> >  	int log_max_qp_sz;
> >  	int log_max_cq_sz;
> >  	int log_max_qp;
> > diff --git a/drivers/common/mlx5/mlx5_prm.h
> > b/drivers/common/mlx5/mlx5_prm.h index 26761f5bd3..7950070976
> 100644
> > --- a/drivers/common/mlx5/mlx5_prm.h
> > +++ b/drivers/common/mlx5/mlx5_prm.h
> > @@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
> >  	u8 reserved_at_100[0x100];
> >  };
> >
> > +struct mlx5_ifc_fte_match_set_misc5_bits {
> > +	u8 macsec_tag_0[0x20];
> > +	u8 macsec_tag_1[0x20];
> > +	u8 macsec_tag_2[0x20];
> > +	u8 macsec_tag_3[0x20];
> > +	u8 tunnel_header_0[0x20];
> > +	u8 tunnel_header_1[0x20];
> > +	u8 tunnel_header_2[0x20];
> > +	u8 tunnel_header_3[0x20];
> > +	u8 reserved[0x100];
> > +};
> > +
> >  /* Flow matcher. */
> >  struct mlx5_ifc_fte_match_param_bits {
> >  	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers; @@ -
> 985,12
> > +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
> >  	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
> >  	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
> >  	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
> > +	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
> >  /*
> >   * Add reserved bit to match the struct size with the size defined in PRM.
> >   * This extension is not required in Linux.
> >   */
> >  #ifndef HAVE_INFINIBAND_VERBS_H
> > -	u8 reserved_0[0x400];
> > +	u8 reserved_0[0x200];
> >  #endif
> >  };
> >
> > @@ -1007,6 +1020,7 @@ enum {
> >  	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
> >  	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
> >  	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
> > +	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
> >  };
> >
> >  enum {
> > @@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
> >   * Table 1872 - Flow Table Fields Supported 2 Format
> >   */
> >  struct mlx5_ifc_ft_fields_support_2_bits {
> > -	u8 reserved_at_0[0x14];
> > +	u8 reserved_at_0[0xf];
> > +	u8 tunnel_header_2_3[0x1];
> > +	u8 tunnel_header_0_1[0x1];
> > +	u8 macsec_syndrome[0x1];
> > +	u8 macsec_tag[0x1];
> > +	u8 outer_lrh_sl[0x1];
> >  	u8 inner_ipv4_ihl[0x1];
> >  	u8 outer_ipv4_ihl[0x1];
> >  	u8 psp_syndrome[0x1];
> > @@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
> >  	u8 inner_l4_checksum_ok[0x1];
> >  	u8 outer_ipv4_checksum_ok[0x1];
> >  	u8 outer_l4_checksum_ok[0x1];
> > +	u8 reserved_at_20[0x60];
> >  };
> >
> >  struct mlx5_ifc_flow_table_nic_cap_bits {
> >  	u8 reserved_at_0[0x200];
> >  	struct mlx5_ifc_flow_table_prop_layout_bits
> > -	       flow_table_properties_nic_receive;
> > +		flow_table_properties_nic_receive;
> > +	struct mlx5_ifc_flow_table_prop_layout_bits
> > +		flow_table_properties_nic_receive_rdma;
> > +	struct mlx5_ifc_flow_table_prop_layout_bits
> > +		flow_table_properties_nic_receive_sniffer;
> > +	struct mlx5_ifc_flow_table_prop_layout_bits
> > +		flow_table_properties_nic_transmit;
> > +	struct mlx5_ifc_flow_table_prop_layout_bits
> > +		flow_table_properties_nic_transmit_rdma;
> >  	struct mlx5_ifc_flow_table_prop_layout_bits
> > -	       flow_table_properties_unused[5];
> > -	u8 reserved_at_1C0[0x200];
> > -	u8 header_modify_nic_receive[0x400];
> > +		flow_table_properties_nic_transmit_sniffer;
> > +	u8 reserved_at_e00[0x600];
> >  	struct mlx5_ifc_ft_fields_support_2_bits
> > -	       ft_field_support_2_nic_receive;
> > +		ft_field_support_2_nic_receive;
> >  };
> >
> >  struct mlx5_ifc_cmd_hca_cap_2_bits {
> > diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> > b/drivers/net/mlx5/linux/mlx5_os.c
> > index be22d9cbd2..55bb71c170 100644
> > --- a/drivers/net/mlx5/linux/mlx5_os.c
> > +++ b/drivers/net/mlx5/linux/mlx5_os.c
> > @@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
> >  	return ret;
> >  }
> >
> > +/**
> > + * Detect whether misc5 is supported
> > + *
> > + * @param[in] priv
> > + *   Device private data pointer
> > + */
> > +#ifdef HAVE_MLX5DV_DR
> > +static void
> > +__mlx5_discovery_misc5_cap(struct mlx5_priv *priv) { #ifdef
> > +HAVE_IBV_FLOW_DV_SUPPORT
> > +	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
> > +	 * Case: IPv4--->UDP--->VxLAN--->vni
> > +	 */
> > +	void *tbl;
> > +	struct mlx5_flow_dv_match_params matcher_mask;
> > +	void *match_m;
> > +	void *matcher;
> > +	void *headers_m;
> > +	void *misc5_m;
> > +	uint32_t *tunnel_header_m;
> > +	struct mlx5dv_flow_matcher_attr dv_attr;
> > +
> > +	memset(&matcher_mask, 0, sizeof(matcher_mask));
> > +	matcher_mask.size = sizeof(matcher_mask.buf);
> > +	match_m = matcher_mask.buf;
> > +	headers_m = MLX5_ADDR_OF(fte_match_param, match_m,
> > outer_headers);
> > +	misc5_m = MLX5_ADDR_OF(fte_match_param,
> > +			       match_m, misc_parameters_5);
> > +	tunnel_header_m = (uint32_t *)
> > +				MLX5_ADDR_OF(fte_match_set_misc5,
> > +				misc5_m, tunnel_header_1);
> > +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
> > +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
> > +	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
> > +	*tunnel_header_m = 0xffffff;
> > +
> > +	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
> > +	if (!tbl) {
> > +		DRV_LOG(INFO, "No SW steering support");
> > +		return;
> > +	}
> > +	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
> > +	dv_attr.match_mask = (void *)&matcher_mask,
> > +	dv_attr.match_criteria_enable =
> > +			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT)
> > |
> > +			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
> > +	dv_attr.priority = 3;
> > +#ifdef HAVE_MLX5DV_DR_ESWITCH
> > +	void *misc2_m;
> > +	if (priv->config.dv_esw_en) {
> > +		/* FDB enabled reg_c_0 */
> > +		dv_attr.match_criteria_enable |=
> > +				(1 <<
> > MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
> > +		misc2_m = MLX5_ADDR_OF(fte_match_param,
> > +				       match_m, misc_parameters_2);
> > +		MLX5_SET(fte_match_set_misc2, misc2_m,
> > +			 metadata_reg_c_0, 0xffff);
> > +	}
> > +#endif
> > +	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
> > +						    &dv_attr, tbl);
> > +	if (matcher) {
> > +		priv->sh->misc5_cap = 1;
> > +		mlx5_glue->dv_destroy_flow_matcher(matcher);
> > +	}
> > +	mlx5_glue->dr_destroy_flow_tbl(tbl);
> > +#else
> > +	RTE_SET_USED(priv);
> > +#endif
> > +}
> > +#endif
> > +
> >  /**
> >   * Verbs callback to free a memory.
> >   *
> > @@ -364,6 +437,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
> >  		if (sh->fdb_domain)
> >  			mlx5_glue->dr_allow_duplicate_rules(sh->fdb_domain, 0);
> >  	}
> > +
> > +	__mlx5_discovery_misc5_cap(priv);
> >  #endif /* HAVE_MLX5DV_DR */
> >  	sh->default_miss_action =
> >  			mlx5_glue->dr_create_flow_action_default_miss();
> > @@ -1313,6 +1388,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >  				goto error;
> >  			}
> >  		}
> > +		if (config->hca_attr.flow.tunnel_header_0_1)
> > +			sh->tunnel_header_0_1 = 1;
> >  #endif
> >  #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
> >  		if (config->hca_attr.flow_hit_aso &&
> > diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
> > index f864c1d701..75a0e04ea0 100644
> > --- a/drivers/net/mlx5/mlx5.h
> > +++ b/drivers/net/mlx5/mlx5.h
> > @@ -1094,6 +1094,8 @@ struct mlx5_dev_ctx_shared {
> >  	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
> >  	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
> >  	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
> > +	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
> > +	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
> >  	uint32_t max_port; /* Maximal IB device port index. */
> >  	struct mlx5_bond_info bond; /* Bonding information. */
> >  	void *ctx; /* Verbs/DV/DevX context. */
> > diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> > index 2feddb0254..f3f5752dbe 100644
> > --- a/drivers/net/mlx5/mlx5_flow.c
> > +++ b/drivers/net/mlx5/mlx5_flow.c
> > @@ -2410,12 +2410,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
> >  /**
> >   * Validate VXLAN item.
> >   *
> > + * @param[in] dev
> > + *   Pointer to the Ethernet device structure.
> >   * @param[in] item
> >   *   Item specification.
> >   * @param[in] item_flags
> >   *   Bit-fields that holds the items detected until now.
> > - * @param[in] target_protocol
> > - *   The next protocol in the previous item.
> > + * @param[in] attr
> > + *   Flow rule attributes.
> >   * @param[out] error
> >   *   Pointer to error structure.
> >   *
> > @@ -2423,24 +2425,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
> >   *   0 on success, a negative errno value otherwise and rte_errno is set.
> >   */
> >  int
> > -mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
> > +mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
> > +			      const struct rte_flow_item *item,
> >  			      uint64_t item_flags,
> > +			      const struct rte_flow_attr *attr,
> >  			      struct rte_flow_error *error)
> >  {
> >  	const struct rte_flow_item_vxlan *spec = item->spec;
> >  	const struct rte_flow_item_vxlan *mask = item->mask;
> >  	int ret;
> > +	struct mlx5_priv *priv = dev->data->dev_private;
> >  	union vni {
> >  		uint32_t vlan_id;
> >  		uint8_t vni[4];
> >  	} id = { .vlan_id = 0, };
> > -
> > +	const struct rte_flow_item_vxlan nic_mask = {
> > +		.vni = "\xff\xff\xff",
> > +		.rsvd1 = 0xff,
> > +	};
> > +	const struct rte_flow_item_vxlan *valid_mask;
> >
> >  	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
> >  		return rte_flow_error_set(error, ENOTSUP,
> >  					  RTE_FLOW_ERROR_TYPE_ITEM,
> > item,
> >  					  "multiple tunnel layers not"
> >  					  " supported");
> > +	valid_mask = &rte_flow_item_vxlan_mask;
> >  	/*
> >  	 * Verify only UDPv4 is present as defined in
> >  	 * https://tools.ietf.org/html/rfc7348
> > @@ -2451,9 +2461,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
> >  					  "no outer UDP layer found");
> >  	if (!mask)
> >  		mask = &rte_flow_item_vxlan_mask;
> > +	/* FDB domain & NIC domain non-zero group */
> > +	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
> > +		valid_mask = &nic_mask;
> > +	/* Group zero in NIC domain */
> > +	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
> > +		valid_mask = &nic_mask;
> >  	ret = mlx5_flow_item_acceptable
> >  		(item, (const uint8_t *)mask,
> > -		 (const uint8_t *)&rte_flow_item_vxlan_mask,
> > +		 (const uint8_t *)valid_mask,
> >  		 sizeof(struct rte_flow_item_vxlan),
> >  		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
> >  	if (ret < 0)
> > diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
> > index 7d97c5880f..66a38c3630 100644
> > --- a/drivers/net/mlx5/mlx5_flow.h
> > +++ b/drivers/net/mlx5/mlx5_flow.h
> > @@ -1533,8 +1533,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
> >  				 uint64_t item_flags,
> >  				 struct rte_eth_dev *dev,
> >  				 struct rte_flow_error *error);
> > -int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
> > +int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
> > +				  const struct rte_flow_item *item,
> >  				  uint64_t item_flags,
> > +				  const struct rte_flow_attr *attr,
> >  				  struct rte_flow_error *error);
> >  int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
> >  				      uint64_t item_flags,
> > diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> > index 2f4c0eeb5b..6c3715a5e8 100644
> > --- a/drivers/net/mlx5/mlx5_flow_dv.c
> > +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> > @@ -6930,7 +6930,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> >  			last_item = MLX5_FLOW_LAYER_GRE_KEY;
> >  			break;
> >  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> > -			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
> > +			ret = mlx5_flow_validate_item_vxlan(dev, items,
> > +							    item_flags, attr,
> >  							    error);
> >  			if (ret < 0)
> >  				return ret;
> > @@ -7892,15 +7893,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
> >  	memset(dev_flow, 0, sizeof(*dev_flow));
> >  	dev_flow->handle = dev_handle;
> >  	dev_flow->handle_idx = handle_idx;
> > -	/*
> > -	 * In some old rdma-core releases, before continuing, a check of the
> > -	 * length of matching parameter will be done at first. It needs to use
> > -	 * the length without misc4 param. If the flow has misc4 support, then
> > -	 * the length needs to be adjusted accordingly. Each param member is
> > -	 * aligned with a 64B boundary naturally.
> > -	 */
> > -	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
> > -				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
> > +	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
> >  	dev_flow->ingress = attr->ingress;
> >  	dev_flow->dv.transfer = attr->transfer;
> >  	return dev_flow;
> > @@ -8681,6 +8674,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
> >  /**
> >   * Add VXLAN item to matcher and to the value.
> >   *
> > + * @param[in] dev
> > + *   Pointer to the Ethernet device structure.
> > + * @param[in] attr
> > + *   Flow rule attributes.
> >   * @param[in, out] matcher
> >   *   Flow matcher.
> >   * @param[in, out] key
> > @@ -8691,7 +8688,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
> >   *   Item is inner pattern.
> >   */
> >  static void
> > -flow_dv_translate_item_vxlan(void *matcher, void *key,
> > +flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
> > +			     const struct rte_flow_attr *attr,
> > +			     void *matcher, void *key,
> >  			     const struct rte_flow_item *item,
> >  			     int inner)
> >  {
> > @@ -8699,13 +8698,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
> >  	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
> >  	void *headers_m;
> >  	void *headers_v;
> > -	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
> > -	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
> > -	char *vni_m;
> > -	char *vni_v;
> > +	void *misc5_m;
> > +	void *misc5_v;
> > +	uint32_t *tunnel_header_v;
> > +	uint32_t *tunnel_header_m;
> >  	uint16_t dport;
> > -	int size;
> > -	int i;
> > +	struct mlx5_priv *priv = dev->data->dev_private;
> > +	const struct rte_flow_item_vxlan nic_mask = {
> > +		.vni = "\xff\xff\xff",
> > +		.rsvd1 = 0xff,
> > +	};
> >
> >  	if (inner) {
> >  		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
> > @@ -8724,14 +8726,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
> >  	}
> >  	if (!vxlan_v)
> >  		return;
> > -	if (!vxlan_m)
> > -		vxlan_m = &rte_flow_item_vxlan_mask;
> > -	size = sizeof(vxlan_m->vni);
> > -	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
> > -	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
> > -	memcpy(vni_m, vxlan_m->vni, size);
> > -	for (i = 0; i < size; ++i)
> > -		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
> > +	if (!vxlan_m) {
> > +		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
> > +		    (attr->group && !priv->sh->misc5_cap))
> > +			vxlan_m = &rte_flow_item_vxlan_mask;
> > +		else
> > +			vxlan_m = &nic_mask;
> > +	}
> > +	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
> > +	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
> > +		void *misc_m;
> > +		void *misc_v;
> > +		char *vni_m;
> > +		char *vni_v;
> > +		int size;
> > +		int i;
> > +		misc_m = MLX5_ADDR_OF(fte_match_param,
> > +				      matcher, misc_parameters);
> > +		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
> > +		size = sizeof(vxlan_m->vni);
> > +		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
> > +		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
> > +		memcpy(vni_m, vxlan_m->vni, size);
> > +		for (i = 0; i < size; ++i)
> > +			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
> > +		return;
> > +	}
> > +	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
> > +	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
> > +	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
> > +						   misc5_v,
> > +						   tunnel_header_1);
> > +	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
> > +						   misc5_m,
> > +						   tunnel_header_1);
> > +	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
> > +			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
> > +			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
> > +	if (*tunnel_header_v)
> > +		*tunnel_header_m = vxlan_m->vni[0] |
> > +			vxlan_m->vni[1] << 8 |
> > +			vxlan_m->vni[2] << 16;
> > +	else
> > +		*tunnel_header_m = 0x0;
> > +	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
> > +	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
> > +		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
> >  }
> >
> >  /**
> > @@ -9892,9 +9932,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
> >  	match_criteria_enable |=
> >  		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
> >  		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
> > +	match_criteria_enable |=
> > +		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
> > +		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
> >  	return match_criteria_enable;
> >  }
> >
> > +static void
> > +__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
> > +{
> > +	/*
> > +	 * Check flow matching criteria first, subtract misc5/4 length if flow
> > +	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
> > +	 * misc5/4 are not supported, and matcher creation failure is expected
> > +	 * w/o subtraction. If misc5 is provided, misc4 must be counted in since
> > +	 * misc5 is right after misc4.
> > +	 */
> > +	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
> > +		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
> > +			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
> > +		if (!(match_criteria & (1 <<
> > +			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
> > +			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
> > +		}
> > +	}
> > +}
> > +
> >  struct mlx5_hlist_entry *
> >  flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
> >  {
> > @@ -10161,6 +10224,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
> >  	*cache = *ref;
> >  	dv_attr.match_criteria_enable =
> >  		flow_dv_matcher_enable(cache->mask.buf);
> > +	__flow_dv_adjust_buf_size(&ref->mask.size,
> > +				  dv_attr.match_criteria_enable);
> >  	dv_attr.priority = ref->priority;
> >  	if (tbl->is_egress)
> >  		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
> > @@ -10210,7 +10275,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
> >  		.error = error,
> >  		.data = ref,
> >  	};
> > -
> >  	/**
> >  	 * tunnel offload API requires this registration for cases when
> >  	 * tunnel match rule was inserted before tunnel set rule.
> > @@ -12069,8 +12133,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
> >  	uint64_t action_flags = 0;
> >  	struct mlx5_flow_dv_matcher matcher = {
> >  		.mask = {
> > -			.size = sizeof(matcher.mask.buf) -
> > -				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +			.size = sizeof(matcher.mask.buf),
> >  		},
> >  	};
> >  	int actions_n = 0;
> > @@ -12877,7 +12940,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
> >  			last_item = MLX5_FLOW_LAYER_GRE;
> >  			break;
> >  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> > -			flow_dv_translate_item_vxlan(match_mask, match_value,
> > +			flow_dv_translate_item_vxlan(dev, attr,
> > +						     match_mask, match_value,
> >  						     items, tunnel);
> >  			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
> >  			last_item = MLX5_FLOW_LAYER_VXLAN;
> > @@ -12975,10 +13039,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
> >  						NULL,
> >  						NULL,
> >  						"cannot create eCPRI parser");
> > -			/* Adjust the length matcher and device flow value. */
> > -			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
> > -			dev_flow->dv.value.size =
> > -					MLX5_ST_SZ_BYTES(fte_match_param);
> >  			flow_dv_translate_item_ecpri(dev, match_mask,
> >  						     match_value, items);
> >  			/* No other protocol should follow eCPRI layer. */
> > @@ -13288,6 +13348,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
> >  	int idx;
> >  	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
> >  	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
> > +	uint8_t misc_mask;
> >
> >  	MLX5_ASSERT(wks);
> >  	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
> > @@ -13358,6 +13419,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
> >  			}
> >  			dv->actions[n++] = priv->sh->default_miss_action;
> >  		}
> > +		misc_mask = flow_dv_matcher_enable(dv->value.buf);
> > +		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
> >  		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
> >  					       (void *)&dv->value, n,
> >  					       dv->actions, &dh->drv_flow);
> > @@ -15476,14 +15539,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
> >  {
> >  	int ret;
> >  	struct mlx5_flow_dv_match_params value = {
> > -		.size = sizeof(value.buf) -
> > -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +		.size = sizeof(value.buf),
> >  	};
> >  	struct mlx5_flow_dv_match_params matcher = {
> > -		.size = sizeof(matcher.buf) -
> > -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +		.size = sizeof(matcher.buf),
> >  	};
> >  	struct mlx5_priv *priv = dev->data->dev_private;
> > +	uint8_t misc_mask;
> >
> >  	if (match_src_port && (priv->representor || priv->master)) {
> >  		if (flow_dv_translate_item_port_id(dev, matcher.buf,
> > @@ -15497,6 +15559,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
> >  				(enum modify_reg)color_reg_c_idx,
> >  				rte_col_2_mlx5_col(color),
> >  				UINT32_MAX);
> > +	misc_mask = flow_dv_matcher_enable(value.buf);
> > +	__flow_dv_adjust_buf_size(&value.size, misc_mask);
> >  	ret = mlx5_flow_os_create_flow(matcher_object,
> >  			(void *)&value, actions_n, actions, rule);
> >  	if (ret) {
> > @@ -15521,14 +15585,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
> >  	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
> >  	struct mlx5_flow_dv_matcher matcher = {
> >  		.mask = {
> > -			.size = sizeof(matcher.mask.buf) -
> > -				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +			.size = sizeof(matcher.mask.buf),
> >  		},
> >  		.tbl = tbl_rsc,
> >  	};
> >  	struct mlx5_flow_dv_match_params value = {
> > -		.size = sizeof(value.buf) -
> > -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +		.size = sizeof(value.buf),
> >  	};
> >  	struct mlx5_flow_cb_ctx ctx = {
> >  		.error = error,
> > @@ -16002,12 +16064,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
> >  	int domain, ret, i;
> >  	struct mlx5_flow_counter *cnt;
> >  	struct mlx5_flow_dv_match_params value = {
> > -		.size = sizeof(value.buf) -
> > -		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +		.size = sizeof(value.buf),
> >  	};
> >  	struct mlx5_flow_dv_match_params matcher_para = {
> > -		.size = sizeof(matcher_para.buf) -
> > -		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +		.size = sizeof(matcher_para.buf),
> >  	};
> >  	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
> >  						     0, &error);
> > @@ -16016,8 +16076,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
> >  	struct mlx5_cache_entry *entry;
> >  	struct mlx5_flow_dv_matcher matcher = {
> >  		.mask = {
> > -			.size = sizeof(matcher.mask.buf) -
> > -			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +			.size = sizeof(matcher.mask.buf),
> >  		},
> >  	};
> >  	struct mlx5_flow_dv_matcher *drop_matcher;
> > @@ -16025,6 +16084,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
> >  		.error = &error,
> >  		.data = &matcher,
> >  	};
> > +	uint8_t misc_mask;
> >
> >  	if (!priv->mtr_en || mtr_id_reg_c < 0) {
> >  		rte_errno = ENOTSUP;
> > @@ -16074,6 +16134,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
> >  			actions[i++] = priv->sh->dr_drop_action;
> >  			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
> >  				(enum modify_reg)mtr_id_reg_c, 0, 0);
> > +			misc_mask = flow_dv_matcher_enable(value.buf);
> > +			__flow_dv_adjust_buf_size(&value.size, misc_mask);
> >  			ret = mlx5_flow_os_create_flow
> >  				(mtrmng->def_matcher[domain]->matcher_object,
> >  				(void *)&value, i, actions,
> > @@ -16117,6 +16179,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
> >  					fm->drop_cnt, NULL);
> >  		actions[i++] = cnt->action;
> >  		actions[i++] = priv->sh->dr_drop_action;
> > +		misc_mask = flow_dv_matcher_enable(value.buf);
> > +		__flow_dv_adjust_buf_size(&value.size, misc_mask);
> >  		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
> >  					       (void *)&value, i, actions,
> >  					       &fm->drop_rule[domain]);
> > @@ -16637,10 +16701,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
> >  	if (ret)
> >  		goto err;
> >  	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
> > +	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
> >  	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
> >  					       &matcher);
> >  	if (ret)
> >  		goto err;
> > +	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
> >  	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
> >  				       actions, &flow);
> >  err:
> > diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> > index fe9673310a..7b3d0b320d 100644
> > --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> > +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> > @@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> >  					       MLX5_FLOW_LAYER_OUTER_L4_TCP;
> >  			break;
> >  		case RTE_FLOW_ITEM_TYPE_VXLAN:
> > -			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
> > +			ret = mlx5_flow_validate_item_vxlan(dev, items,
> > +							    item_flags, attr,
> >  							    error);
> >  			if (ret < 0)
> >  				return ret;
> > diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> > index 1fcd24c002..383f003966 100644
> > --- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> > +++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
> > @@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
> >  		/**< Matcher value. This value is used as the mask or a key. */
> >  	} matcher_mask = {
> >  				.size = sizeof(matcher_mask.buf) -
> > -					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
> > +					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
> >  			},
> >  	  matcher_value = {
> >  				.size = sizeof(matcher_value.buf) -
> > -					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
> > +					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
> > +					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
> >  			};
> >  	struct mlx5dv_flow_matcher_attr dv_attr = {
> >  		.type = IBV_FLOW_ATTR_NORMAL,
> > --
> > 2.27.0
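
[Editor's note] The patch above packs the masked VNI bytes and the last reserved byte into the 32-bit misc5 ``tunnel_header_1`` word. The following standalone sketch (not mlx5 code; the function name is illustrative) shows that layout: VNI in bits 0-23, ``rsvd1`` in bits 24-31, each ANDed with its mask first, mirroring the lines in ``flow_dv_translate_item_vxlan`` above.

```c
#include <stdint.h>

/* Illustrative repack of the misc5 tunnel_header_1 value computed in
 * flow_dv_translate_item_vxlan: three VNI bytes in bits 0-23, the last
 * reserved byte (rsvd1) in bits 24-31, masked per byte. */
static uint32_t
pack_tunnel_header_1(const uint8_t vni_v[3], const uint8_t vni_m[3],
		     uint8_t rsvd1_v, uint8_t rsvd1_m)
{
	uint32_t hdr;

	/* VNI value & mask per byte, lowest byte first, as in the patch. */
	hdr = (uint32_t)(vni_v[0] & vni_m[0]) |
	      (uint32_t)(vni_v[1] & vni_m[1]) << 8 |
	      (uint32_t)(vni_v[2] & vni_m[2]) << 16;
	/* The last reserved byte occupies the top 8 bits. */
	hdr |= (uint32_t)(rsvd1_v & rsvd1_m) << 24;
	return hdr;
}
```

For a fully-masked VNI of 0x12/0x34/0x56 with ``rsvd1`` spec 0x80 and mask 0xff, this yields 0x80563412, matching the shift layout in the diff.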


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching Rongwei Liu
@ 2021-07-13 11:37                   ` Raslan Darawsheh
  2021-07-13 11:39                     ` Rongwei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 11:37 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li
  Cc: dev

Hi,

> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 1:50 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits
> matching
Small change I guess can be done during integration:
"app/testpmd: support matching reserved field for VXLAN"
> 
> Add a new testpmd pattern field 'last_rsvd' that supports the
> last 8-bits matching of VXLAN header.
> 
> The examples for the "last_rsvd" pattern field are as below:
> 
> 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
> 
> This flow will exactly match the last 8-bits to be 0x80.
> 
> 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
> vxlan mask 0x80 / end ...
> 
> This flow will only match the MSB of the last 8-bits to be 1.
> 
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Otherwise,
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
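
[Editor's note] The two testpmd examples quoted above rely on standard rte_flow spec/mask semantics. A minimal sketch (an illustrative helper, not testpmd API) of how a packet's last reserved byte is compared against the ``last_rsvd`` spec and mask:

```c
#include <stdint.h>

/* A packet byte matches when it agrees with the spec on every bit set
 * in the mask; bits outside the mask are ignored. */
static int
last_rsvd_matches(uint8_t pkt_byte, uint8_t spec, uint8_t mask)
{
	return (pkt_byte & mask) == (spec & mask);
}
```

With the second example above (spec 0x80, mask 0x80), a packet byte of 0xc0 matches (its MSB is 1) while 0x40 does not.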


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching
  2021-07-13 11:37                   ` Raslan Darawsheh
@ 2021-07-13 11:39                     ` Rongwei Liu
  0 siblings, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 11:39 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li
  Cc: dev

Hi Raslan:
     Sounds good.
     Thanks

Get Outlook for iOS<https://aka.ms/o0ukef>
________________________________
From: Raslan Darawsheh <rasland@nvidia.com>
Sent: Tuesday, July 13, 2021 7:37:52 PM
To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li <xiaoyun.li@intel.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: RE: [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching

Hi,

> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 1:50 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits
> matching
Small change I guess can be done during integration:
"app/testpmd: support matching reserved field for VXLAN"
>
> Add a new testpmd pattern field 'last_rsvd' that supports the
> last 8-bits matching of VXLAN header.
>
> The examples for the "last_rsvd" pattern field are as below:
>
> 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
>
> This flow will exactly match the last 8-bits to be 0x80.
>
> 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
> vxlan mask 0x80 / end ...
>
> This flow will only match the MSB of the last 8-bits to be 1.
>
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Otherwise,
Acked-by: Raslan Darawsheh <rasland@nvidia.com>


^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
@ 2021-07-13 11:40                   ` Raslan Darawsheh
  2021-07-13 11:49                     ` Rongwei Liu
  2021-07-13 12:11                     ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
  0 siblings, 2 replies; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 11:40 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi,

> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 1:50 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v6 1/2] net/mlx5: support matching on the reserved field of
> VXLAN
> 
> This adds matching on the reserved field of VXLAN
> header (the last 8-bits). The capability from rdma-core
> is detected by creating a dummy matcher using misc5
> when the device is probed.
> 
> For non-zero groups and FDB domain, the capability is
> detected from rdma-core, meanwhile for NIC domain group
> zero it's relying on the HCA_CAP from FW.
> 
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/nics/mlx5.rst               |  11 +-
>  doc/guides/rel_notes/release_21_08.rst |   6 +
>  drivers/common/mlx5/mlx5_devx_cmds.c   |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +
>  drivers/common/mlx5/mlx5_prm.h         |  41 +++++--
>  drivers/net/mlx5/linux/mlx5_os.c       |  77 ++++++++++++
>  drivers/net/mlx5/mlx5.h                |   2 +
>  drivers/net/mlx5/mlx5_flow.c           |  26 +++-
>  drivers/net/mlx5/mlx5_flow.h           |   4 +-
>  drivers/net/mlx5/mlx5_flow_dv.c        | 160 +++++++++++++++++--------
>  drivers/net/mlx5/mlx5_flow_verbs.c     |   3 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |   6 +-
>  12 files changed, 280 insertions(+), 65 deletions(-)
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 8253b96e92..5842991d5d 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -195,8 +195,15 @@ Limitations
>    size and ``txq_inline_min`` settings and may be from 2 (worst case forced
> by maximal
>    inline settings) to 58.
> 
> -- Flows with a VXLAN Network Identifier equal (or ends to be equal)
> -  to 0 are not supported.
> +- Match on VXLAN supports the following fields only:
> +
> +     - VNI
> +     - Last reserved 8-bits
> +
> > +  Last reserved 8-bits matching is only supported when using DV flow
> > +  engine (``dv_flow_en`` = 1).
> > +  Group zero's behavior may differ, depending on FW.
> > +  Matching a value of 0 (value & mask) is not supported.
> 
>  - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with
> MPLSoGRE and MPLSoUDP.
> 
> diff --git a/doc/guides/rel_notes/release_21_08.rst
> b/doc/guides/rel_notes/release_21_08.rst
> index 6a902ef9ac..3fb17bbf77 100644
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -117,6 +117,11 @@ New Features
>    The experimental PMD power management API now supports managing
>    multiple Ethernet Rx queues per lcore.
> 
> +* **Updated Mellanox mlx5 driver.**
> +
> +  Updated the Mellanox mlx5 driver with new features and improvements,
> including:
> +
> +  * Added support for matching on vxlan header last 8-bits reserved field.
> 
I guess this needs to be rebased, which is what Andrew mentioned in his previous comment,
Otherwise,
Acked-by: Raslan Darawsheh <rasland@nvidia.com>
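
[Editor's note] The ``__flow_dv_adjust_buf_size`` helper quoted in the patch trims the match-parameter length for old rdma-core. A sketch of that trimming logic under illustrative constants (the bit positions and byte sizes below are placeholders, not the actual PRM values):

```c
#include <stddef.h>
#include <stdint.h>

#define MISC4_BIT 4u		/* illustrative criteria-enable bit positions */
#define MISC5_BIT 5u
#define MISC_SET_SZ 64u		/* each fte_match_set_* is 64B-aligned (illustrative) */
#define PARAM_FULL_SZ 384u	/* illustrative full fte_match_param size */

/* Drop misc5 from the length when its criteria bit is unset, and misc4
 * too when both are unset. misc5 sits right after misc4, so keeping
 * misc5 implies keeping misc4 as well. */
static void
adjust_buf_size(size_t *size, uint8_t criteria)
{
	if (!(criteria & (1u << MISC5_BIT))) {
		*size = PARAM_FULL_SZ - MISC_SET_SZ;
		if (!(criteria & (1u << MISC4_BIT)))
			*size -= MISC_SET_SZ;
	}
}
```

With neither bit set the buffer shrinks by two parameter sets; with misc5 set it keeps the full length, which is exactly the behavior the commit message describes for matcher creation on old rdma-core.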

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 11:40                   ` Raslan Darawsheh
@ 2021-07-13 11:49                     ` Rongwei Liu
  2021-07-13 12:09                       ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-13 12:11                     ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
  1 sibling, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 11:49 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi Raslan:
    Starting from v5, the rebase is already done.
    Do we have new conflicts now?

Get Outlook for iOS<https://aka.ms/o0ukef>
________________________________
From: Raslan Darawsheh <rasland@nvidia.com>
Sent: Tuesday, July 13, 2021 7:40:37 PM
To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler <shahafs@nvidia.com>
Cc: dev@dpdk.org <dev@dpdk.org>
Subject: RE: [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN

Hi,

> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 1:50 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v6 1/2] net/mlx5: support matching on the reserved field of
> VXLAN
>
> This adds matching on the reserved field of VXLAN
> header (the last 8-bits). The capability from rdma-core
> is detected by creating a dummy matcher using misc5
> when the device is probed.
>
> For non-zero groups and FDB domain, the capability is
> detected from rdma-core, meanwhile for NIC domain group
> zero it's relying on the HCA_CAP from FW.
>
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/nics/mlx5.rst               |  11 +-
>  doc/guides/rel_notes/release_21_08.rst |   6 +
>  drivers/common/mlx5/mlx5_devx_cmds.c   |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +
>  drivers/common/mlx5/mlx5_prm.h         |  41 +++++--
>  drivers/net/mlx5/linux/mlx5_os.c       |  77 ++++++++++++
>  drivers/net/mlx5/mlx5.h                |   2 +
>  drivers/net/mlx5/mlx5_flow.c           |  26 +++-
>  drivers/net/mlx5/mlx5_flow.h           |   4 +-
>  drivers/net/mlx5/mlx5_flow_dv.c        | 160 +++++++++++++++++--------
>  drivers/net/mlx5/mlx5_flow_verbs.c     |   3 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |   6 +-
>  12 files changed, 280 insertions(+), 65 deletions(-)
>
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 8253b96e92..5842991d5d 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -195,8 +195,15 @@ Limitations
>    size and ``txq_inline_min`` settings and may be from 2 (worst case forced
> by maximal
>    inline settings) to 58.
>
> -- Flows with a VXLAN Network Identifier equal (or ends to be equal)
> -  to 0 are not supported.
> +- Match on VXLAN supports the following fields only:
> +
> +     - VNI
> +     - Last reserved 8-bits
> +
> +  Matching on the last reserved 8 bits is only supported when using the DV
> +  flow engine (``dv_flow_en`` = 1).
> +  Group zero's behavior may differ, depending on FW.
> +  Matching on a value of 0 (value & mask) is not supported.
>
>  - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with
> MPLSoGRE and MPLSoUDP.
>
> diff --git a/doc/guides/rel_notes/release_21_08.rst
> b/doc/guides/rel_notes/release_21_08.rst
> index 6a902ef9ac..3fb17bbf77 100644
> --- a/doc/guides/rel_notes/release_21_08.rst
> +++ b/doc/guides/rel_notes/release_21_08.rst
> @@ -117,6 +117,11 @@ New Features
>    The experimental PMD power management API now supports managing
>    multiple Ethernet Rx queues per lcore.
>
> +* **Updated Mellanox mlx5 driver.**
> +
> +  Updated the Mellanox mlx5 driver with new features and improvements,
> including:
> +
> +  * Added support for matching on vxlan header last 8-bits reserved field.
>
I guess this needs to be rebased, which is what Andrew mentioned in his previous comment.
Otherwise,
Acked-by: Raslan Darawsheh <rasland@nvidia.com>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching
  2021-07-13 11:49                     ` Rongwei Liu
@ 2021-07-13 12:09                       ` Rongwei Liu
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
                                           ` (2 more replies)
  0 siblings, 3 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 12:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas; +Cc: dev, rasland

This update adds support for matching on the VXLAN header's last
8-bits reserved field when creating SW steering rules.

Rongwei Liu (2):
  net/mlx5: support matching on the reserved field of VXLAN
  app/testpmd: support matching the reserved field for VXLAN

 app/test-pmd/cmdline_flow.c                 |  10 ++
 app/test-pmd/util.c                         |   5 +-
 doc/guides/nics/mlx5.rst                    |  11 +-
 doc/guides/rel_notes/release_21_08.rst      |   6 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
 drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
 drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
 drivers/net/mlx5/mlx5.h                     |   2 +
 drivers/net/mlx5/mlx5_flow.c                |  26 +++-
 drivers/net/mlx5/mlx5_flow.h                |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
 drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
 15 files changed, 294 insertions(+), 67 deletions(-)

-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread

* [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 12:09                       ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
@ 2021-07-13 12:09                         ` Rongwei Liu
  2021-07-13 12:55                           ` Raslan Darawsheh
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved field for VXLAN Rongwei Liu
  2021-07-13 13:09                         ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
  2 siblings, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 12:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Shahaf Shuler; +Cc: dev, rasland

This adds matching on the reserved field of the VXLAN
header (the last 8 bits). The capability is detected
from rdma-core by creating a dummy matcher using misc5
when the device is probed.

For non-zero groups and the FDB domain, the capability is
detected from rdma-core, while for NIC domain group zero
it relies on the HCA_CAP from FW.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst               |  11 +-
 doc/guides/rel_notes/release_21_08.rst |   6 +
 drivers/common/mlx5/mlx5_devx_cmds.c   |   3 +
 drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +
 drivers/common/mlx5/mlx5_prm.h         |  41 +++++--
 drivers/net/mlx5/linux/mlx5_os.c       |  77 ++++++++++++
 drivers/net/mlx5/mlx5.h                |   2 +
 drivers/net/mlx5/mlx5_flow.c           |  26 +++-
 drivers/net/mlx5/mlx5_flow.h           |   4 +-
 drivers/net/mlx5/mlx5_flow_dv.c        | 160 +++++++++++++++++--------
 drivers/net/mlx5/mlx5_flow_verbs.c     |   3 +-
 drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |   6 +-
 12 files changed, 280 insertions(+), 65 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 8253b96e92..5842991d5d 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -195,8 +195,15 @@ Limitations
   size and ``txq_inline_min`` settings and may be from 2 (worst case forced by maximal
   inline settings) to 58.
 
-- Flows with a VXLAN Network Identifier equal (or ends to be equal)
-  to 0 are not supported.
+- Match on VXLAN supports the following fields only:
+
+     - VNI
+     - Last reserved 8-bits
+
+  Matching on the last reserved 8 bits is only supported when using the DV
+  flow engine (``dv_flow_en`` = 1).
+  Group zero's behavior may differ, depending on FW.
+  Matching on a value of 0 (value & mask) is not supported.
 
 - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with MPLSoGRE and MPLSoUDP.
 
diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index 6a902ef9ac..3fb17bbf77 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -117,6 +117,11 @@ New Features
   The experimental PMD power management API now supports managing
   multiple Ethernet Rx queues per lcore.
 
+* **Updated Mellanox mlx5 driver.**
+
+  Updated the Mellanox mlx5 driver with new features and improvements, including:
+
+  * Added support for matching on vxlan header last 8-bits reserved field.
 
 Removed Items
 -------------
@@ -208,3 +213,4 @@ Tested Platforms
    This section is a comment. Do not overwrite or remove it.
    Also, make sure to start the actual text at the margin.
    =======================================================
+
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index f5914bce32..63ae95832d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -947,6 +947,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 	attr->log_max_ft_sampler_num = MLX5_GET
 		(flow_table_nic_cap, hcattr,
 		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->flow.tunnel_header_0_1 = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 ft_field_support_2_nic_receive.tunnel_header_0_1);
 	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index f8a17b886b..124f43e852 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -89,6 +89,11 @@ struct mlx5_hca_vdpa_attr {
 	uint64_t doorbell_bar_offset;
 };
 
+struct mlx5_hca_flow_attr {
+	uint32_t tunnel_header_0_1;
+	uint32_t tunnel_header_2_3;
+};
+
 /* HCA supports this number of time periods for LRO. */
 #define MLX5_LRO_NUM_SUPP_PERIODS 4
 
@@ -155,6 +160,7 @@ struct mlx5_hca_attr {
 	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
+	struct mlx5_hca_flow_attr flow;
 	int log_max_qp_sz;
 	int log_max_cq_sz;
 	int log_max_qp;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 26761f5bd3..7950070976 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -977,6 +977,18 @@ struct mlx5_ifc_fte_match_set_misc4_bits {
 	u8 reserved_at_100[0x100];
 };
 
+struct mlx5_ifc_fte_match_set_misc5_bits {
+	u8 macsec_tag_0[0x20];
+	u8 macsec_tag_1[0x20];
+	u8 macsec_tag_2[0x20];
+	u8 macsec_tag_3[0x20];
+	u8 tunnel_header_0[0x20];
+	u8 tunnel_header_1[0x20];
+	u8 tunnel_header_2[0x20];
+	u8 tunnel_header_3[0x20];
+	u8 reserved[0x100];
+};
+
 /* Flow matcher. */
 struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -985,12 +997,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
 	struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
 	struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
+	struct mlx5_ifc_fte_match_set_misc5_bits misc_parameters_5;
 /*
  * Add reserved bit to match the struct size with the size defined in PRM.
  * This extension is not required in Linux.
  */
 #ifndef HAVE_INFINIBAND_VERBS_H
-	u8 reserved_0[0x400];
+	u8 reserved_0[0x200];
 #endif
 };
 
@@ -1007,6 +1020,7 @@ enum {
 	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
 	MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT,
 };
 
 enum {
@@ -1784,7 +1798,12 @@ struct mlx5_ifc_roce_caps_bits {
  * Table 1872 - Flow Table Fields Supported 2 Format
  */
 struct mlx5_ifc_ft_fields_support_2_bits {
-	u8 reserved_at_0[0x14];
+	u8 reserved_at_0[0xf];
+	u8 tunnel_header_2_3[0x1];
+	u8 tunnel_header_0_1[0x1];
+	u8 macsec_syndrome[0x1];
+	u8 macsec_tag[0x1];
+	u8 outer_lrh_sl[0x1];
 	u8 inner_ipv4_ihl[0x1];
 	u8 outer_ipv4_ihl[0x1];
 	u8 psp_syndrome[0x1];
@@ -1797,18 +1816,26 @@ struct mlx5_ifc_ft_fields_support_2_bits {
 	u8 inner_l4_checksum_ok[0x1];
 	u8 outer_ipv4_checksum_ok[0x1];
 	u8 outer_l4_checksum_ok[0x1];
+	u8 reserved_at_20[0x60];
 };
 
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8 reserved_at_0[0x200];
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_nic_receive;
+		flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_rdma;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_receive_sniffer;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+		flow_table_properties_nic_transmit_rdma;
 	struct mlx5_ifc_flow_table_prop_layout_bits
-	       flow_table_properties_unused[5];
-	u8 reserved_at_1C0[0x200];
-	u8 header_modify_nic_receive[0x400];
+		flow_table_properties_nic_transmit_sniffer;
+	u8 reserved_at_e00[0x600];
 	struct mlx5_ifc_ft_fields_support_2_bits
-	       ft_field_support_2_nic_receive;
+		ft_field_support_2_nic_receive;
 };
 
 struct mlx5_ifc_cmd_hca_cap_2_bits {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index be22d9cbd2..55bb71c170 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -193,6 +193,79 @@ mlx5_alloc_verbs_buf(size_t size, void *data)
 	return ret;
 }
 
+/**
+ * Detect rdma-core misc5 matching support.
+ *
+ * @param[in] priv
+ *   Device private data pointer
+ */
+#ifdef HAVE_MLX5DV_DR
+static void
+__mlx5_discovery_misc5_cap(struct mlx5_priv *priv)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	/* Dummy VxLAN matcher to detect rdma-core misc5 cap
+	 * Case: IPv4--->UDP--->VxLAN--->vni
+	 */
+	void *tbl;
+	struct mlx5_flow_dv_match_params matcher_mask;
+	void *match_m;
+	void *matcher;
+	void *headers_m;
+	void *misc5_m;
+	uint32_t *tunnel_header_m;
+	struct mlx5dv_flow_matcher_attr dv_attr;
+
+	memset(&matcher_mask, 0, sizeof(matcher_mask));
+	matcher_mask.size = sizeof(matcher_mask.buf);
+	match_m = matcher_mask.buf;
+	headers_m = MLX5_ADDR_OF(fte_match_param, match_m, outer_headers);
+	misc5_m = MLX5_ADDR_OF(fte_match_param,
+			       match_m, misc_parameters_5);
+	tunnel_header_m = (uint32_t *)
+				MLX5_ADDR_OF(fte_match_set_misc5,
+				misc5_m, tunnel_header_1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 4);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xffff);
+	*tunnel_header_m = 0xffffff;
+
+	tbl = mlx5_glue->dr_create_flow_tbl(priv->sh->rx_domain, 1);
+	if (!tbl) {
+		DRV_LOG(INFO, "No SW steering support");
+		return;
+	}
+	dv_attr.type = IBV_FLOW_ATTR_NORMAL,
+	dv_attr.match_mask = (void *)&matcher_mask,
+	dv_attr.match_criteria_enable =
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT) |
+			(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT);
+	dv_attr.priority = 3;
+#ifdef HAVE_MLX5DV_DR_ESWITCH
+	void *misc2_m;
+	if (priv->config.dv_esw_en) {
+		/* FDB enabled reg_c_0 */
+		dv_attr.match_criteria_enable |=
+				(1 << MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT);
+		misc2_m = MLX5_ADDR_OF(fte_match_param,
+				       match_m, misc_parameters_2);
+		MLX5_SET(fte_match_set_misc2, misc2_m,
+			 metadata_reg_c_0, 0xffff);
+	}
+#endif
+	matcher = mlx5_glue->dv_create_flow_matcher(priv->sh->ctx,
+						    &dv_attr, tbl);
+	if (matcher) {
+		priv->sh->misc5_cap = 1;
+		mlx5_glue->dv_destroy_flow_matcher(matcher);
+	}
+	mlx5_glue->dr_destroy_flow_tbl(tbl);
+#else
+	RTE_SET_USED(priv);
+#endif
+}
+#endif
+
 /**
  * Verbs callback to free a memory.
  *
@@ -364,6 +437,8 @@ mlx5_alloc_shared_dr(struct mlx5_priv *priv)
 		if (sh->fdb_domain)
 			mlx5_glue->dr_allow_duplicate_rules(sh->fdb_domain, 0);
 	}
+
+	__mlx5_discovery_misc5_cap(priv);
 #endif /* HAVE_MLX5DV_DR */
 	sh->default_miss_action =
 			mlx5_glue->dr_create_flow_action_default_miss();
@@ -1313,6 +1388,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				goto error;
 			}
 		}
+		if (config->hca_attr.flow.tunnel_header_0_1)
+			sh->tunnel_header_0_1 = 1;
 #endif
 #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
 		if (config->hca_attr.flow_hit_aso &&
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f864c1d701..75a0e04ea0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1094,6 +1094,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t qp_ts_format:2; /* QP timestamp formats supported. */
 	uint32_t meter_aso_en:1; /* Flow Meter ASO is supported. */
 	uint32_t ct_aso_en:1; /* Connection Tracking ASO is supported. */
+	uint32_t tunnel_header_0_1:1; /* tunnel_header_0_1 is supported. */
+	uint32_t misc5_cap:1; /* misc5 matcher parameter is supported. */
 	uint32_t max_port; /* Maximal IB device port index. */
 	struct mlx5_bond_info bond; /* Bonding information. */
 	void *ctx; /* Verbs/DV/DevX context. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2feddb0254..f3f5752dbe 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -2410,12 +2410,14 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
 /**
  * Validate VXLAN item.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
  * @param[in] item
  *   Item specification.
  * @param[in] item_flags
  *   Bit-fields that holds the items detected until now.
- * @param[in] target_protocol
- *   The next protocol in the previous item.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[out] error
  *   Pointer to error structure.
  *
@@ -2423,24 +2425,32 @@ mlx5_flow_validate_item_tcp(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 int
-mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+			      const struct rte_flow_item *item,
 			      uint64_t item_flags,
+			      const struct rte_flow_attr *attr,
 			      struct rte_flow_error *error)
 {
 	const struct rte_flow_item_vxlan *spec = item->spec;
 	const struct rte_flow_item_vxlan *mask = item->mask;
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	union vni {
 		uint32_t vlan_id;
 		uint8_t vni[4];
 	} id = { .vlan_id = 0, };
-
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
+	const struct rte_flow_item_vxlan *valid_mask;
 
 	if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ITEM, item,
 					  "multiple tunnel layers not"
 					  " supported");
+	valid_mask = &rte_flow_item_vxlan_mask;
 	/*
 	 * Verify only UDPv4 is present as defined in
 	 * https://tools.ietf.org/html/rfc7348
@@ -2451,9 +2461,15 @@ mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
 					  "no outer UDP layer found");
 	if (!mask)
 		mask = &rte_flow_item_vxlan_mask;
+	/* FDB domain & NIC domain non-zero group */
+	if ((attr->transfer || attr->group) && priv->sh->misc5_cap)
+		valid_mask = &nic_mask;
+	/* Group zero in NIC domain */
+	if (!attr->group && !attr->transfer && priv->sh->tunnel_header_0_1)
+		valid_mask = &nic_mask;
 	ret = mlx5_flow_item_acceptable
 		(item, (const uint8_t *)mask,
-		 (const uint8_t *)&rte_flow_item_vxlan_mask,
+		 (const uint8_t *)valid_mask,
 		 sizeof(struct rte_flow_item_vxlan),
 		 MLX5_ITEM_RANGE_NOT_ACCEPTED, error);
 	if (ret < 0)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 7d97c5880f..66a38c3630 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1533,8 +1533,10 @@ int mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
 				 uint64_t item_flags,
 				 struct rte_eth_dev *dev,
 				 struct rte_flow_error *error);
-int mlx5_flow_validate_item_vxlan(const struct rte_flow_item *item,
+int mlx5_flow_validate_item_vxlan(struct rte_eth_dev *dev,
+				  const struct rte_flow_item *item,
 				  uint64_t item_flags,
+				  const struct rte_flow_attr *attr,
 				  struct rte_flow_error *error);
 int mlx5_flow_validate_item_vxlan_gpe(const struct rte_flow_item *item,
 				      uint64_t item_flags,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f4c0eeb5b..6c3715a5e8 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6930,7 +6930,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 			last_item = MLX5_FLOW_LAYER_GRE_KEY;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
@@ -7892,15 +7893,7 @@ flow_dv_prepare(struct rte_eth_dev *dev,
 	memset(dev_flow, 0, sizeof(*dev_flow));
 	dev_flow->handle = dev_handle;
 	dev_flow->handle_idx = handle_idx;
-	/*
-	 * In some old rdma-core releases, before continuing, a check of the
-	 * length of matching parameter will be done at first. It needs to use
-	 * the length without misc4 param. If the flow has misc4 support, then
-	 * the length needs to be adjusted accordingly. Each param member is
-	 * aligned with a 64B boundary naturally.
-	 */
-	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param) -
-				  MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+	dev_flow->dv.value.size = MLX5_ST_SZ_BYTES(fte_match_param);
 	dev_flow->ingress = attr->ingress;
 	dev_flow->dv.transfer = attr->transfer;
 	return dev_flow;
@@ -8681,6 +8674,10 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
 /**
  * Add VXLAN item to matcher and to the value.
  *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] attr
+ *   Flow rule attributes.
  * @param[in, out] matcher
  *   Flow matcher.
  * @param[in, out] key
@@ -8691,7 +8688,9 @@ flow_dv_translate_item_nvgre(void *matcher, void *key,
  *   Item is inner pattern.
  */
 static void
-flow_dv_translate_item_vxlan(void *matcher, void *key,
+flow_dv_translate_item_vxlan(struct rte_eth_dev *dev,
+			     const struct rte_flow_attr *attr,
+			     void *matcher, void *key,
 			     const struct rte_flow_item *item,
 			     int inner)
 {
@@ -8699,13 +8698,16 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
 	void *headers_m;
 	void *headers_v;
-	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
-	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	char *vni_m;
-	char *vni_v;
+	void *misc5_m;
+	void *misc5_v;
+	uint32_t *tunnel_header_v;
+	uint32_t *tunnel_header_m;
 	uint16_t dport;
-	int size;
-	int i;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item_vxlan nic_mask = {
+		.vni = "\xff\xff\xff",
+		.rsvd1 = 0xff,
+	};
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8724,14 +8726,52 @@ flow_dv_translate_item_vxlan(void *matcher, void *key,
 	}
 	if (!vxlan_v)
 		return;
-	if (!vxlan_m)
-		vxlan_m = &rte_flow_item_vxlan_mask;
-	size = sizeof(vxlan_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
-	memcpy(vni_m, vxlan_m->vni, size);
-	for (i = 0; i < size; ++i)
-		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+	if (!vxlan_m) {
+		if ((!attr->group && !priv->sh->tunnel_header_0_1) ||
+		    (attr->group && !priv->sh->misc5_cap))
+			vxlan_m = &rte_flow_item_vxlan_mask;
+		else
+			vxlan_m = &nic_mask;
+	}
+	if ((!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) ||
+	    ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) {
+		void *misc_m;
+		void *misc_v;
+		char *vni_m;
+		char *vni_v;
+		int size;
+		int i;
+		misc_m = MLX5_ADDR_OF(fte_match_param,
+				      matcher, misc_parameters);
+		misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+		size = sizeof(vxlan_m->vni);
+		vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+		vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+		memcpy(vni_m, vxlan_m->vni, size);
+		for (i = 0; i < size; ++i)
+			vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+		return;
+	}
+	misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5);
+	misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5);
+	tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_v,
+						   tunnel_header_1);
+	tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5,
+						   misc5_m,
+						   tunnel_header_1);
+	*tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) |
+			   (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 |
+			   (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16;
+	if (*tunnel_header_v)
+		*tunnel_header_m = vxlan_m->vni[0] |
+			vxlan_m->vni[1] << 8 |
+			vxlan_m->vni[2] << 16;
+	else
+		*tunnel_header_m = 0x0;
+	*tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24;
+	if (vxlan_v->rsvd1 & vxlan_m->rsvd1)
+		*tunnel_header_m |= vxlan_m->rsvd1 << 24;
 }
 
 /**
@@ -9892,9 +9932,32 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
 	match_criteria_enable |=
 		(!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
 		MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_5)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT;
 	return match_criteria_enable;
 }
 
+static void
+__flow_dv_adjust_buf_size(size_t *size, uint8_t match_criteria)
+{
+	/*
+	 * Check flow matching criteria first, subtract misc5/4 length if flow
+	 * doesn't own misc5/4 parameters. In some old rdma-core releases,
+	 * misc5/4 are not supported, and matcher creation failure is expected
+	 * w/o subtraction. If misc5 is provided, misc4 must be counted in since
+	 * misc5 is right after misc4.
+	 */
+	if (!(match_criteria & (1 << MLX5_MATCH_CRITERIA_ENABLE_MISC5_BIT))) {
+		*size = MLX5_ST_SZ_BYTES(fte_match_param) -
+			MLX5_ST_SZ_BYTES(fte_match_set_misc5);
+		if (!(match_criteria & (1 <<
+			MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT))) {
+			*size -= MLX5_ST_SZ_BYTES(fte_match_set_misc4);
+		}
+	}
+}
+
 struct mlx5_hlist_entry *
 flow_dv_tbl_create_cb(struct mlx5_hlist *list, uint64_t key64, void *cb_ctx)
 {
@@ -10161,6 +10224,8 @@ flow_dv_matcher_create_cb(struct mlx5_cache_list *list,
 	*cache = *ref;
 	dv_attr.match_criteria_enable =
 		flow_dv_matcher_enable(cache->mask.buf);
+	__flow_dv_adjust_buf_size(&ref->mask.size,
+				  dv_attr.match_criteria_enable);
 	dv_attr.priority = ref->priority;
 	if (tbl->is_egress)
 		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
@@ -10210,7 +10275,6 @@ flow_dv_matcher_register(struct rte_eth_dev *dev,
 		.error = error,
 		.data = ref,
 	};
-
 	/**
 	 * tunnel offload API requires this registration for cases when
 	 * tunnel match rule was inserted before tunnel set rule.
@@ -12069,8 +12133,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	uint64_t action_flags = 0;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	int actions_n = 0;
@@ -12877,7 +12940,8 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			last_item = MLX5_FLOW_LAYER_GRE;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			flow_dv_translate_item_vxlan(match_mask, match_value,
+			flow_dv_translate_item_vxlan(dev, attr,
+						     match_mask, match_value,
 						     items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_VXLAN;
@@ -12975,10 +13039,6 @@ flow_dv_translate(struct rte_eth_dev *dev,
 						NULL,
 						"cannot create eCPRI parser");
 			}
-			/* Adjust the length matcher and device flow value. */
-			matcher.mask.size = MLX5_ST_SZ_BYTES(fte_match_param);
-			dev_flow->dv.value.size =
-					MLX5_ST_SZ_BYTES(fte_match_param);
 			flow_dv_translate_item_ecpri(dev, match_mask,
 						     match_value, items);
 			/* No other protocol should follow eCPRI layer. */
@@ -13288,6 +13348,7 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 	int idx;
 	struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace();
 	struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc;
+	uint8_t misc_mask;
 
 	MLX5_ASSERT(wks);
 	for (idx = wks->flow_idx - 1; idx >= 0; idx--) {
@@ -13358,6 +13419,8 @@ flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
 			}
 			dv->actions[n++] = priv->sh->default_miss_action;
 		}
+		misc_mask = flow_dv_matcher_enable(dv->value.buf);
+		__flow_dv_adjust_buf_size(&dv->value.size, misc_mask);
 		err = mlx5_flow_os_create_flow(dv_h->matcher->matcher_object,
 					       (void *)&dv->value, n,
 					       dv->actions, &dh->drv_flow);
@@ -15476,14 +15539,13 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 {
 	int ret;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher = {
-		.size = sizeof(matcher.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher.buf),
 	};
 	struct mlx5_priv *priv = dev->data->dev_private;
+	uint8_t misc_mask;
 
 	if (match_src_port && (priv->representor || priv->master)) {
 		if (flow_dv_translate_item_port_id(dev, matcher.buf,
@@ -15497,6 +15559,8 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev,
 				(enum modify_reg)color_reg_c_idx,
 				rte_col_2_mlx5_col(color),
 				UINT32_MAX);
+	misc_mask = flow_dv_matcher_enable(value.buf);
+	__flow_dv_adjust_buf_size(&value.size, misc_mask);
 	ret = mlx5_flow_os_create_flow(matcher_object,
 			(void *)&value, actions_n, actions, rule);
 	if (ret) {
@@ -15521,14 +15585,12 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev,
 	struct mlx5_flow_tbl_resource *tbl_rsc = sub_policy->tbl_rsc;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-				MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 		.tbl = tbl_rsc,
 	};
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_cb_ctx ctx = {
 		.error = error,
@@ -16002,12 +16064,10 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	int domain, ret, i;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_dv_match_params value = {
-		.size = sizeof(value.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(value.buf),
 	};
 	struct mlx5_flow_dv_match_params matcher_para = {
-		.size = sizeof(matcher_para.buf) -
-		MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+		.size = sizeof(matcher_para.buf),
 	};
 	int mtr_id_reg_c = mlx5_flow_get_reg_id(dev, MLX5_MTR_ID,
 						     0, &error);
@@ -16016,8 +16076,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 	struct mlx5_cache_entry *entry;
 	struct mlx5_flow_dv_matcher matcher = {
 		.mask = {
-			.size = sizeof(matcher.mask.buf) -
-			MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+			.size = sizeof(matcher.mask.buf),
 		},
 	};
 	struct mlx5_flow_dv_matcher *drop_matcher;
@@ -16025,6 +16084,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 		.error = &error,
 		.data = &matcher,
 	};
+	uint8_t misc_mask;
 
 	if (!priv->mtr_en || mtr_id_reg_c < 0) {
 		rte_errno = ENOTSUP;
@@ -16074,6 +16134,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 			actions[i++] = priv->sh->dr_drop_action;
 			flow_dv_match_meta_reg(matcher_para.buf, value.buf,
 				(enum modify_reg)mtr_id_reg_c, 0, 0);
+			misc_mask = flow_dv_matcher_enable(value.buf);
+			__flow_dv_adjust_buf_size(&value.size, misc_mask);
 			ret = mlx5_flow_os_create_flow
 				(mtrmng->def_matcher[domain]->matcher_object,
 				(void *)&value, i, actions,
@@ -16117,6 +16179,8 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev,
 					fm->drop_cnt, NULL);
 		actions[i++] = cnt->action;
 		actions[i++] = priv->sh->dr_drop_action;
+		misc_mask = flow_dv_matcher_enable(value.buf);
+		__flow_dv_adjust_buf_size(&value.size, misc_mask);
 		ret = mlx5_flow_os_create_flow(drop_matcher->matcher_object,
 					       (void *)&value, i, actions,
 					       &fm->drop_rule[domain]);
@@ -16637,10 +16701,12 @@ mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev)
 	if (ret)
 		goto err;
 	dv_attr.match_criteria_enable = flow_dv_matcher_enable(mask.buf);
+	__flow_dv_adjust_buf_size(&mask.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow_matcher(sh->ctx, &dv_attr, tbl->obj,
 					       &matcher);
 	if (ret)
 		goto err;
+	__flow_dv_adjust_buf_size(&value.size, dv_attr.match_criteria_enable);
 	ret = mlx5_flow_os_create_flow(matcher, (void *)&value, 1,
 				       actions, &flow);
 err:
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index fe9673310a..7b3d0b320d 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1381,7 +1381,8 @@ flow_verbs_validate(struct rte_eth_dev *dev,
 					     MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			break;
 		case RTE_FLOW_ITEM_TYPE_VXLAN:
-			ret = mlx5_flow_validate_item_vxlan(items, item_flags,
+			ret = mlx5_flow_validate_item_vxlan(dev, items,
+							    item_flags, attr,
 							    error);
 			if (ret < 0)
 				return ret;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
index 1fcd24c002..383f003966 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_steer.c
@@ -140,11 +140,13 @@ mlx5_vdpa_rss_flows_create(struct mlx5_vdpa_priv *priv)
 		/**< Matcher value. This value is used as the mask or a key. */
 	} matcher_mask = {
 				.size = sizeof(matcher_mask.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			},
 	  matcher_value = {
 				.size = sizeof(matcher_value.buf) -
-					MLX5_ST_SZ_BYTES(fte_match_set_misc4),
+					MLX5_ST_SZ_BYTES(fte_match_set_misc4) -
+					MLX5_ST_SZ_BYTES(fte_match_set_misc5),
 			};
 	struct mlx5dv_flow_matcher_attr dv_attr = {
 		.type = IBV_FLOW_ATTR_NORMAL,
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread
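
[Editor's note] The hunks in this patch shrink the matcher parameter buffer to cover only the match-criteria segments actually enabled (`flow_dv_matcher_enable()` / `__flow_dv_adjust_buf_size()`), so unused trailing segments such as misc4/misc5 are not passed to rdma-core. A rough, self-contained sketch of that idea — the segment names, sizes, and bit layout below are illustrative only, not the real mlx5 PRM layout from mlx5_prm.h:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative criteria-enable bits, one per fixed-width match segment,
 * loosely modeled on the misc* segments referenced in the diff. */
enum {
	CRIT_OUTER = 1u << 0,
	CRIT_MISC  = 1u << 1,
	CRIT_INNER = 1u << 2,
	CRIT_MISC2 = 1u << 3,
	CRIT_MISC4 = 1u << 4,
	CRIT_MISC5 = 1u << 5,
};

/* Hypothetical byte size of each segment; segments are laid out
 * back to back in the match parameter buffer. */
static const size_t seg_size[6] = { 64, 64, 64, 64, 64, 64 };

/* Shrink *size so the buffer covers every segment up to and including
 * the highest enabled criteria bit -- the same idea as
 * __flow_dv_adjust_buf_size() in the hunks above. */
static void adjust_buf_size(size_t *size, uint8_t criteria)
{
	size_t total = 0;
	int i;

	for (i = 0; i < 6; i++)
		if (criteria >> i)	/* some bit at or above i is set */
			total += seg_size[i];
	*size = total;
}
```

With only outer+misc enabled the buffer stops after segment 1; enabling misc5 forces all six segments in, which is why the vdpa code below stops subtracting `fte_match_set_misc5` once it may be used.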

* [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN
  2021-07-13 12:09                       ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
@ 2021-07-13 12:09                         ` Rongwei Liu
  2021-07-13 12:54                           ` Raslan Darawsheh
  2021-07-13 13:09                         ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
  2 siblings, 1 reply; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 12:09 UTC (permalink / raw)
  To: matan, viacheslavo, orika, thomas, Xiaoyun Li; +Cc: dev, rasland

Add a new testpmd pattern field 'last_rsvd' that supports the
last 8-bits matching of VXLAN header.

The examples for the "last_rsvd" pattern field are as below:

1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...

This flow will exactly match the last 8-bits to be 0x80.

2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
vxlan mask 0x80 / end ...

This flow will only match the MSB of the last 8-bits to be 1.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test-pmd/cmdline_flow.c                 | 10 ++++++++++
 app/test-pmd/util.c                         |  5 +++--
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  1 +
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 8fc0e1469d..58c6f8151c 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -205,6 +205,7 @@ enum index {
 	ITEM_SCTP_CKSUM,
 	ITEM_VXLAN,
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_E_TAG,
 	ITEM_E_TAG_GRP_ECID_B,
 	ITEM_NVGRE,
@@ -1127,6 +1128,7 @@ static const enum index item_sctp[] = {
 
 static const enum index item_vxlan[] = {
 	ITEM_VXLAN_VNI,
+	ITEM_VXLAN_LAST_RSVD,
 	ITEM_NEXT,
 	ZERO,
 };
@@ -2839,6 +2841,14 @@ static const struct token token_list[] = {
 			     item_param),
 		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan, vni)),
 	},
+	[ITEM_VXLAN_LAST_RSVD] = {
+		.name = "last_rsvd",
+		.help = "VXLAN last reserved bits",
+		.next = NEXT(item_vxlan, NEXT_ENTRY(COMMON_UNSIGNED),
+			     item_param),
+		.args = ARGS(ARGS_ENTRY_HTON(struct rte_flow_item_vxlan,
+					     rsvd1)),
+	},
 	[ITEM_E_TAG] = {
 		.name = "e_tag",
 		.help = "match E-Tag header",
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index a9e431a8b2..59626518d5 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -266,8 +266,9 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 				vx_vni = rte_be_to_cpu_32(vxlan_hdr->vx_vni);
 				MKDUMPSTR(print_buf, buf_size, cur_len,
 					  " - VXLAN packet: packet type =%d, "
-					  "Destination UDP port =%d, VNI = %d",
-					  packet_type, udp_port, vx_vni >> 8);
+					  "Destination UDP port =%d, VNI = %d, "
+					  "last_rsvd = %d", packet_type,
+					  udp_port, vx_vni >> 8, vx_vni & 0xff);
 			}
 		}
 		MKDUMPSTR(print_buf, buf_size, cur_len,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 33857acf54..4ca3103067 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3694,6 +3694,7 @@ This section lists supported pattern items and their attributes, if any.
 - ``vxlan``: match VXLAN header.
 
   - ``vni {unsigned}``: VXLAN identifier.
+  - ``last_rsvd {unsigned}``: VXLAN last reserved 8-bits.
 
 - ``e_tag``: match IEEE 802.1BR E-Tag header.
 
-- 
2.27.0


^ permalink raw reply	[flat|nested] 34+ messages in thread
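
[Editor's note] The two flows in the commit message above differ only in their mask: `is 0x80` matches the byte exactly, while `spec 0x80 / mask 0x80` checks only the MSB. A small self-contained sketch of that spec/mask semantics, together with the VNI/last_rsvd split printed by the util.c hunk (the `>> 8` shift and `0xff` mask come straight from the diff; the helper names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Generic rte_flow-style matching: a packet field matches when the
 * masked field value equals the masked spec value. */
static int masked_match(uint8_t field, uint8_t spec, uint8_t mask)
{
	return (field & mask) == (spec & mask);
}

/* The last 32-bit word of the VXLAN header carries the 24-bit VNI in
 * its upper bits and the reserved byte ("last_rsvd") in the lowest
 * 8 bits, as decoded in the util.c hunk above (host byte order). */
static uint32_t vxlan_vni(uint32_t vx_vni)
{
	return vx_vni >> 8;
}

static uint8_t vxlan_last_rsvd(uint32_t vx_vni)
{
	return vx_vni & 0xff;
}
```

So `last_rsvd is 0x80` corresponds to mask 0xff (exact byte), while `spec 0x80 / mask 0x80` accepts any value with the top bit set, e.g. 0x83.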

* Re: [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 11:40                   ` Raslan Darawsheh
  2021-07-13 11:49                     ` Rongwei Liu
@ 2021-07-13 12:11                     ` Rongwei Liu
  1 sibling, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 12:11 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi Raslan:
	V7 has been sent:
	1. The app/testpmd title was changed per your comments.
	2. UNSIGNED was updated to the newer COMMON_UNSIGNED to fix a compilation error.
	Thanks

BR
Rongwei

> -----Original Message-----
> From: Raslan Darawsheh <rasland@nvidia.com>
> Sent: Tuesday, July 13, 2021 7:41 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v6 1/2] net/mlx5: support matching on the reserved field
> of VXLAN
> 
> Hi,
> 
> > -----Original Message-----
> > From: Rongwei Liu <rongweil@nvidia.com>
> > Sent: Tuesday, July 13, 2021 1:50 PM
> > To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> > Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> > <shahafs@nvidia.com>
> > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> > Subject: [PATCH v6 1/2] net/mlx5: support matching on the reserved
> > field of VXLAN
> >
> > This adds matching on the reserved field of VXLAN header (the last
> > 8-bits). The capability from rdma-core is detected by creating a dummy
> > matcher using misc5 when the device is probed.
> >
> > For non-zero groups and FDB domain, the capability is detected from
> > rdma-core, meanwhile for NIC domain group zero it's relying on the
> > HCA_CAP from FW.
> >
> > Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > ---
> >  doc/guides/nics/mlx5.rst               |  11 +-
> >  doc/guides/rel_notes/release_21_08.rst |   6 +
> >  drivers/common/mlx5/mlx5_devx_cmds.c   |   3 +
> >  drivers/common/mlx5/mlx5_devx_cmds.h   |   6 +
> >  drivers/common/mlx5/mlx5_prm.h         |  41 +++++--
> >  drivers/net/mlx5/linux/mlx5_os.c       |  77 ++++++++++++
> >  drivers/net/mlx5/mlx5.h                |   2 +
> >  drivers/net/mlx5/mlx5_flow.c           |  26 +++-
> >  drivers/net/mlx5/mlx5_flow.h           |   4 +-
> >  drivers/net/mlx5/mlx5_flow_dv.c        | 160 +++++++++++++++++--------
> >  drivers/net/mlx5/mlx5_flow_verbs.c     |   3 +-
> >  drivers/vdpa/mlx5/mlx5_vdpa_steer.c    |   6 +-
> >  12 files changed, 280 insertions(+), 65 deletions(-)
> >
> > diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index
> > 8253b96e92..5842991d5d 100644
> > --- a/doc/guides/nics/mlx5.rst
> > +++ b/doc/guides/nics/mlx5.rst
> > @@ -195,8 +195,15 @@ Limitations
> >    size and ``txq_inline_min`` settings and may be from 2 (worst case
> > forced by maximal
> >    inline settings) to 58.
> >
> > -- Flows with a VXLAN Network Identifier equal (or ends to be equal)
> > -  to 0 are not supported.
> > +- Match on VXLAN supports the following fields only:
> > +
> > +     - VNI
> > +     - Last reserved 8-bits
> > +
> > +  Last reserved 8-bits matching is only supported When using DV flow
> > + engine (``dv_flow_en`` = 1).
> > +  Group zero's behavior may differ which depends on FW.
> > +  Matching value equals 0 (value & mask) is not supported.
> >
> >  - L3 VXLAN and VXLAN-GPE tunnels cannot be supported together with
> > MPLSoGRE and MPLSoUDP.
> >
> > diff --git a/doc/guides/rel_notes/release_21_08.rst
> > b/doc/guides/rel_notes/release_21_08.rst
> > index 6a902ef9ac..3fb17bbf77 100644
> > --- a/doc/guides/rel_notes/release_21_08.rst
> > +++ b/doc/guides/rel_notes/release_21_08.rst
> > @@ -117,6 +117,11 @@ New Features
> >    The experimental PMD power management API now supports managing
> >    multiple Ethernet Rx queues per lcore.
> >
> > +* **Updated Mellanox mlx5 driver.**
> > +
> > +  Updated the Mellanox mlx5 driver with new features and
> > + improvements,
> > including:
> > +
> > +  * Added support for matching on vxlan header last 8-bits reserved field.
> >
> I guess this need to be rebased which is what Andrew mentioned in his
> previous comment, Otherwise,
> Acked-by: Raslan Darawsheh <rasland@nvidia.com>

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN Rongwei Liu
@ 2021-07-13 12:54                           ` Raslan Darawsheh
  2021-07-13 15:34                             ` Raslan Darawsheh
  0 siblings, 1 reply; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 12:54 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li
  Cc: dev


> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 3:09 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v7 2/2] app/testpmd: support matching the reserved filed
> for VXLAN
> 
> Add a new testpmd pattern field 'last_rsvd' that supports the
> last 8-bits matching of VXLAN header.
> 
> The examples for the "last_rsvd" pattern field are as below:
> 
> 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
> 
> This flow will exactly match the last 8-bits to be 0x80.
> 
> 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80
> vxlan mask 0x80 / end ...
> 
> This flow will only match the MSB of the last 8-bits to be 1.
> 
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
In general, you should have kept my Ack from the previous version,
but that is a note for the future.
For this one, thank you:
Acked-by: Raslan Darawsheh <rasland@nvidia.com>

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
@ 2021-07-13 12:55                           ` Raslan Darawsheh
  2021-07-13 13:44                             ` Rongwei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 12:55 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev


> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, July 13, 2021 3:09 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v7 1/2] net/mlx5: support matching on the reserved field of
> VXLAN
> 
> This adds matching on the reserved field of VXLAN
> header (the last 8-bits). The capability from rdma-core
> is detected by creating a dummy matcher using misc5
> when the device is probed.
> 
> For non-zero groups and FDB domain, the capability is
> detected from rdma-core, meanwhile for NIC domain group
> zero it's relying on the HCA_CAP from FW.
> 
> Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Again, you should have kept my Ack from the previous version,

Thanks anyway,
Acked-by: Raslan Darawsheh <rasland@nvidia.com>

Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching
  2021-07-13 12:09                       ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
  2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN Rongwei Liu
@ 2021-07-13 13:09                         ` Andrew Rybchenko
  2 siblings, 0 replies; 34+ messages in thread
From: Andrew Rybchenko @ 2021-07-13 13:09 UTC (permalink / raw)
  To: Rongwei Liu, matan, viacheslavo, orika, thomas; +Cc: dev, rasland

On 7/13/21 3:09 PM, Rongwei Liu wrote:
> This update adds support for VXLAN the last 8-bits reserved
> field matching when creating sw steering rules.
> 
> Rongwei Liu (2):
>   net/mlx5: support matching on the reserved field of VXLAN
>   app/testpmd: support matching the reserved filed for VXLAN
> 
>  app/test-pmd/cmdline_flow.c                 |  10 ++
>  app/test-pmd/util.c                         |   5 +-
>  doc/guides/nics/mlx5.rst                    |  11 +-
>  doc/guides/rel_notes/release_21_08.rst      |   6 +
>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |   1 +
>  drivers/common/mlx5/mlx5_devx_cmds.c        |   3 +
>  drivers/common/mlx5/mlx5_devx_cmds.h        |   6 +
>  drivers/common/mlx5/mlx5_prm.h              |  41 ++++-
>  drivers/net/mlx5/linux/mlx5_os.c            |  77 ++++++++++
>  drivers/net/mlx5/mlx5.h                     |   2 +
>  drivers/net/mlx5/mlx5_flow.c                |  26 +++-
>  drivers/net/mlx5/mlx5_flow.h                |   4 +-
>  drivers/net/mlx5/mlx5_flow_dv.c             | 160 ++++++++++++++------
>  drivers/net/mlx5/mlx5_flow_verbs.c          |   3 +-
>  drivers/vdpa/mlx5/mlx5_vdpa_steer.c         |   6 +-
>  15 files changed, 294 insertions(+), 67 deletions(-)
> 

With release notes appropriately squashed

Applied, thanks.

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN
  2021-07-13 12:55                           ` Raslan Darawsheh
@ 2021-07-13 13:44                             ` Rongwei Liu
  0 siblings, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 13:44 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Shahaf Shuler
  Cc: dev

Hi Raslan:
	That's my first time, and I was not familiar with the process.
	I will keep it in mind from now on.
	Thanks.

BR
Rongwei

> -----Original Message-----
> From: Raslan Darawsheh <rasland@nvidia.com>
> Sent: Tuesday, July 13, 2021 8:56 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> <shahafs@nvidia.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v7 1/2] net/mlx5: support matching on the reserved field
> of VXLAN
> 
> 
> > -----Original Message-----
> > From: Rongwei Liu <rongweil@nvidia.com>
> > Sent: Tuesday, July 13, 2021 3:09 PM
> > To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> > Thomas Monjalon <thomas@monjalon.net>; Shahaf Shuler
> > <shahafs@nvidia.com>
> > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> > Subject: [PATCH v7 1/2] net/mlx5: support matching on the reserved
> > field of VXLAN
> >
> > This adds matching on the reserved field of VXLAN header (the last
> > 8-bits). The capability from rdma-core is detected by creating a dummy
> > matcher using misc5 when the device is probed.
> >
> > For non-zero groups and FDB domain, the capability is detected from
> > rdma-core, meanwhile for NIC domain group zero it's relying on the
> > HCA_CAP from FW.
> >
> > Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Again, you should have kept my Aack from previous version,
> 
> Thanks anyway,
> Acked-by: Raslan Darawsheh <rasland@nvidia.com>
> 
> Kindest regards,
> Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN
  2021-07-13 12:54                           ` Raslan Darawsheh
@ 2021-07-13 15:34                             ` Raslan Darawsheh
  2021-07-13 15:36                               ` Rongwei Liu
  0 siblings, 1 reply; 34+ messages in thread
From: Raslan Darawsheh @ 2021-07-13 15:34 UTC (permalink / raw)
  To: Rongwei Liu, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li, Andrew Rybchenko
  Cc: dev


> -----Original Message-----
> From: Raslan Darawsheh
> Sent: Tuesday, July 13, 2021 3:55 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam
> <orika@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Xiaoyun Li <xiaoyun.li@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v7 2/2] app/testpmd: support matching the reserved
> filed for VXLAN
@Andrew Rybchenko
I've just noticed there is a typo in the title when I pulled this to next-net-mlx from next-net:
Typo: filed -> field ?

Kindest regards,
Raslan Darawsheh
> 
> 
> > -----Original Message-----
> > From: Rongwei Liu <rongweil@nvidia.com>
> > Sent: Tuesday, July 13, 2021 3:09 PM
> > To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> > Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> > <xiaoyun.li@intel.com>
> > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> > Subject: [PATCH v7 2/2] app/testpmd: support matching the reserved
> > filed for VXLAN
> >
> > Add a new testpmd pattern field 'last_rsvd' that supports the last
> > 8-bits matching of VXLAN header.
> >
> > The examples for the "last_rsvd" pattern field are as below:
> >
> > 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
> >
> > This flow will exactly match the last 8-bits to be 0x80.
> >
> > 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80 vxlan mask
> > 0x80 / end ...
> >
> > This flow will only match the MSB of the last 8-bits to be 1.
> >
> > Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> You should have kept my Ack from previous version in general, But this is for
> future, For this thank you:
> Acked-by: Raslan Darawsheh <rasland@nvidia.com>
> 
> Kindest regards,
> Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

* Re: [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN
  2021-07-13 15:34                             ` Raslan Darawsheh
@ 2021-07-13 15:36                               ` Rongwei Liu
  0 siblings, 0 replies; 34+ messages in thread
From: Rongwei Liu @ 2021-07-13 15:36 UTC (permalink / raw)
  To: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Ori Kam,
	NBU-Contact-Thomas Monjalon, Xiaoyun Li, Andrew Rybchenko
  Cc: dev

Hi Raslan:
	Yes, you are right.
	I forgot to correct the typo when addressing your comment to change the title.

BR
Rongwei

> -----Original Message-----
> From: Raslan Darawsheh <rasland@nvidia.com>
> Sent: Tuesday, July 13, 2021 11:34 PM
> To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> <xiaoyun.li@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v7 2/2] app/testpmd: support matching the reserved
> filed for VXLAN
> 
> 
> > -----Original Message-----
> > From: Raslan Darawsheh
> > Sent: Tuesday, July 13, 2021 3:55 PM
> > To: Rongwei Liu <rongweil@nvidia.com>; Matan Azrad
> <matan@nvidia.com>;
> > Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> > <xiaoyun.li@intel.com>
> > Cc: dev@dpdk.org
> > Subject: RE: [PATCH v7 2/2] app/testpmd: support matching the reserved
> > filed for VXLAN
> @Andrew Rybchenko
> I've just noticed there is a typo in the title when I pulled this to next-net-mlx
> from next-net:
> Typo: filed -> field ?
> 
> Kindest regards,
> Raslan Darawsheh
> >
> >
> > > -----Original Message-----
> > > From: Rongwei Liu <rongweil@nvidia.com>
> > > Sent: Tuesday, July 13, 2021 3:09 PM
> > > To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> > > <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> > > Thomas Monjalon <thomas@monjalon.net>; Xiaoyun Li
> > > <xiaoyun.li@intel.com>
> > > Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> > > Subject: [PATCH v7 2/2] app/testpmd: support matching the reserved
> > > filed for VXLAN
> > >
> > > Add a new testpmd pattern field 'last_rsvd' that supports the last
> > > 8-bits matching of VXLAN header.
> > >
> > > The examples for the "last_rsvd" pattern field are as below:
> > >
> > > 1. ...pattern eth / ipv4 / udp / vxlan last_rsvd is 0x80 / end ...
> > >
> > > This flow will exactly match the last 8-bits to be 0x80.
> > >
> > > 2. ...pattern eth / ipv4 / udp / vxlan last_rsvd spec 0x80 vxlan
> > > mask
> > > 0x80 / end ...
> > >
> > > This flow will only match the MSB of the last 8-bits to be 1.
> > >
> > > Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
> > > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > You should have kept my Ack from previous version in general, But this
> > is for future, For this thank you:
> > Acked-by: Raslan Darawsheh <rasland@nvidia.com>
> >
> > Kindest regards,
> > Raslan Darawsheh

^ permalink raw reply	[flat|nested] 34+ messages in thread

end of thread, other threads:[~2021-07-13 15:36 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-07-05  9:50 [dpdk-dev] [PATCH v2 0/2] support VXLAN header last 8-bits reserved field matching rongwei liu
2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 1/2] drivers: add VXLAN header the last 8-bits matching support rongwei liu
2021-07-06 12:35   ` Thomas Monjalon
2021-07-07  8:09     ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
2021-07-07  8:09       ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
2021-07-13  8:33       ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
2021-07-13  9:55         ` [dpdk-dev] [PATCH v5 " Rongwei Liu
2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
2021-07-13 10:27             ` Raslan Darawsheh
2021-07-13 10:50               ` [dpdk-dev] [PATCH v6 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
2021-07-13 11:40                   ` Raslan Darawsheh
2021-07-13 11:49                     ` Rongwei Liu
2021-07-13 12:09                       ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
2021-07-13 12:55                           ` Raslan Darawsheh
2021-07-13 13:44                             ` Rongwei Liu
2021-07-13 12:09                         ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: support matching the reserved filed for VXLAN Rongwei Liu
2021-07-13 12:54                           ` Raslan Darawsheh
2021-07-13 15:34                             ` Raslan Darawsheh
2021-07-13 15:36                               ` Rongwei Liu
2021-07-13 13:09                         ` [dpdk-dev] [PATCH v7 0/2] support VXLAN header the last 8-bits matching Andrew Rybchenko
2021-07-13 12:11                     ` [dpdk-dev] [PATCH v6 1/2] net/mlx5: support matching on the reserved field of VXLAN Rongwei Liu
2021-07-13 10:50                 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: support VXLAN header last 8-bits matching Rongwei Liu
2021-07-13 11:37                   ` Raslan Darawsheh
2021-07-13 11:39                     ` Rongwei Liu
2021-07-13 10:52               ` [dpdk-dev] [PATCH v5 1/2] net/mlx5: add VXLAN header the last 8-bits matching support Rongwei Liu
2021-07-13  9:55           ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: support VXLAN the last 8-bits field matching Rongwei Liu
2021-07-13 10:02             ` Raslan Darawsheh
2021-07-13 10:06               ` Andrew Rybchenko
2021-07-13  9:56         ` [dpdk-dev] [PATCH v4 0/2] support VXLAN header the last 8-bits matching Rongwei Liu
2021-07-05  9:50 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: support VXLAN last 8-bits field matching rongwei liu
2021-07-06 12:28   ` Thomas Monjalon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).