DPDK patches and discussions
* [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support
@ 2021-04-28 17:59 Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item Gregory Etelson
                   ` (5 more replies)
  0 siblings, 6 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-28 17:59 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko

Support the integrity flow item in the MLX5 PMD.
The integrity flow item was described in
commit b10a421a1f3b ("ethdev: add packet integrity check flow rules").

Gregory Etelson (4):
  ethdev: fix integrity flow item
  net/mlx5: update PRM definitions
  net/mlx5: support integrity flow item
  doc: add MLX5 PMD integrity item limitations

 doc/guides/nics/mlx5.rst             |  15 ++
 drivers/common/mlx5/mlx5_devx_cmds.c |  31 +++-
 drivers/common/mlx5/mlx5_devx_cmds.h |   1 +
 drivers/common/mlx5/mlx5_prm.h       |  35 +++-
 drivers/net/mlx5/mlx5_flow.c         |  25 +++
 drivers/net/mlx5/mlx5_flow.h         |  26 +++
 drivers/net/mlx5/mlx5_flow_dv.c      | 258 +++++++++++++++++++++++++++
 lib/ethdev/rte_flow.c                |   1 +
 lib/ethdev/rte_flow.h                |   9 +
 9 files changed, 395 insertions(+), 6 deletions(-)

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
@ 2021-04-28 17:59 ` Gregory Etelson
  2021-04-28 18:06   ` Thomas Monjalon
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 2/4] net/mlx5: update PRM definitions Gregory Etelson
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 24+ messages in thread
From: Gregory Etelson @ 2021-04-28 17:59 UTC (permalink / raw)
  To: dev
  Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde

Add integrity item definition to the rte_flow_desc_item array.
The new entry allows RTE conv API to work with the new flow item.

Add bitmasks to the integrity item value.
The masks allow querying multiple integrity filters in a single
compare operation.
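
As a usage sketch (not part of the patch, assuming <rte_flow.h>), an
application can set several of the new bits through the item's `value`
union member so that they are checked with one 64-bit compare;
attributes and actions are omitted:

  struct rte_flow_item_integrity spec = { .level = 0 };
  struct rte_flow_item_integrity mask = { .level = 0 };

  /* Match packets whose L3 and L4 integrity checks both passed. */
  spec.value = RTE_FLOW_ITEM_INTEGRITY_L3_OK |
               RTE_FLOW_ITEM_INTEGRITY_L4_OK;
  mask.value = RTE_FLOW_ITEM_INTEGRITY_L3_OK |
               RTE_FLOW_ITEM_INTEGRITY_L4_OK;

  struct rte_flow_item item = {
      .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
      .spec = &spec,
      .mask = &mask,
  };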

Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 lib/ethdev/rte_flow.c | 1 +
 lib/ethdev/rte_flow.h | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index c7c7108933..8cb7a069c8 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
 	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
 	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
 	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
 };
 
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 94c8c1ccc8..cf72999cd9 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1738,6 +1738,15 @@ struct rte_flow_item_integrity {
 	};
 };
 
+#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       (1ULL << 0)
+#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        (1ULL << 1)
+#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        (1ULL << 2)
+#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        (1ULL << 3)
+#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    (1ULL << 4)
+#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK (1ULL << 5)
+#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   (1ULL << 6)
+#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    (1ULL << 7)
+
 #ifndef __cplusplus
 static const struct rte_flow_item_integrity
 rte_flow_item_integrity_mask = {
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 2/4] net/mlx5: update PRM definitions
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-28 17:59 ` Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 3/4] net/mlx5: support integrity flow item Gregory Etelson
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-28 17:59 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add integrity and IPv4 IHL bits to PRM file.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 31 ++++++++++++++++++++----
 drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
 drivers/common/mlx5/mlx5_prm.h       | 35 ++++++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 6c6f4391a1..3d3994e575 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -626,6 +626,29 @@ mlx5_devx_cmd_create_flex_parser(void *ctx,
 	return parse_flex_obj;
 }
 
+static int
+mlx5_devx_query_pkt_integrity_match(void *hcattr)
+{
+	return MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.inner_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.outer_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_checksum_ok);
+}
+
 /**
  * Query HCA attributes.
  * Using those attributes we can check on run time if the device
@@ -823,10 +846,10 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
-	attr->log_max_ft_sampler_num =
-			MLX5_GET(flow_table_nic_cap,
-			hcattr, flow_table_properties.log_max_ft_sampler_num);
-
+	attr->log_max_ft_sampler_num = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
 	memset(out, 0, sizeof(out));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index eee8fee107..b31a828383 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -142,6 +142,7 @@ struct mlx5_hca_attr {
 	uint32_t cqe_compression:1;
 	uint32_t mini_cqe_resp_flow_tag:1;
 	uint32_t mini_cqe_resp_l3_l4_tag:1;
+	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
 	int log_max_qp_sz;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c6d8060bb9..903faccd56 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -778,7 +778,12 @@ struct mlx5_ifc_fte_match_set_lyr_2_4_bits {
 	u8 tcp_flags[0x9];
 	u8 tcp_sport[0x10];
 	u8 tcp_dport[0x10];
-	u8 reserved_at_c0[0x18];
+	u8 reserved_at_c0[0x10];
+	u8 ipv4_ihl[0x4];
+	u8 l3_ok[0x1];
+	u8 l4_ok[0x1];
+	u8 ipv4_checksum_ok[0x1];
+	u8 l4_checksum_ok[0x1];
 	u8 ip_ttl_hoplimit[0x8];
 	u8 udp_sport[0x10];
 	u8 udp_dport[0x10];
@@ -1656,9 +1661,35 @@ struct mlx5_ifc_roce_caps_bits {
 	u8 reserved_at_20[0x7e0];
 };
 
+/*
+ * Table 1872 - Flow Table Fields Supported 2 Format
+ */
+struct mlx5_ifc_ft_fields_support_2_bits {
+	u8 reserved_at_0[0x14];
+	u8 inner_ipv4_ihl[0x1];
+	u8 outer_ipv4_ihl[0x1];
+	u8 psp_syndrome[0x1];
+	u8 inner_l3_ok[0x1];
+	u8 inner_l4_ok[0x1];
+	u8 outer_l3_ok[0x1];
+	u8 outer_l4_ok[0x1];
+	u8 psp_header[0x1];
+	u8 inner_ipv4_checksum_ok[0x1];
+	u8 inner_l4_checksum_ok[0x1];
+	u8 outer_ipv4_checksum_ok[0x1];
+	u8 outer_l4_checksum_ok[0x1];
+};
+
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8	   reserved_at_0[0x200];
-	struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_unused[5];
+	u8         reserved_at_1C0[0x200];
+	u8         header_modify_nic_receive[0x400];
+	struct mlx5_ifc_ft_fields_support_2_bits
+	       ft_field_support_2_nic_receive;
 };
 
 union mlx5_ifc_hca_cap_union_bits {
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 3/4] net/mlx5: support integrity flow item
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 2/4] net/mlx5: update PRM definitions Gregory Etelson
@ 2021-04-28 17:59 ` Gregory Etelson
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 4/4] doc: add MLX5 PMD integrity item limitations Gregory Etelson
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-28 17:59 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

MLX5 PMD supports the following integrity filters for outer and
inner network headers:
- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 reference outer headers.
`level` values greater than 1 reference inner headers.

Flow rule items supplied by the application must explicitly specify
the network headers referred to by the integrity item. For example:
flow create 0 ingress
  pattern
    integrity level is 0 value mask l3_ok value spec l3_ok /
    eth / ipv6 / end …

or

flow create 0 ingress
  pattern
    integrity level is 0 value mask l4_ok value spec 0 /
    eth / ipv4 proto is udp / end …
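
For reference, the first rule above could be built through the rte_flow
C API roughly as below (a hedged sketch: only the pattern is shown;
attributes, actions and error handling are omitted):

  struct rte_flow_item_integrity integ_spec = { .level = 0, .l3_ok = 1 };
  struct rte_flow_item_integrity integ_mask = { .l3_ok = 1 };
  struct rte_flow_item pattern[] = {
      {
          .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
          .spec = &integ_spec,
          .mask = &integ_mask,
      },
      { .type = RTE_FLOW_ITEM_TYPE_ETH },
      { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
      { .type = RTE_FLOW_ITEM_TYPE_END },
  };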

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    |  25 ++++
 drivers/net/mlx5/mlx5_flow.h    |  26 ++++
 drivers/net/mlx5/mlx5_flow_dv.c | 258 ++++++++++++++++++++++++++++++++
 3 files changed, 309 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 15ed5ec7a2..db9a251c68 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8083,6 +8083,31 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 	return ret;
 }
 
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		case RTE_FLOW_ITEM_TYPE_GRE:
+		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
+			return item;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
+				return item;
+			break;
+		}
+	}
+	return NULL;
+}
+
 #ifndef HAVE_MLX5DV_DR
 #define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
 #else
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..eb7035d259 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -145,6 +145,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
+/* INTEGRITY item bit */
+#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1010,6 +1013,20 @@ struct rte_flow {
 	(MLX5_RSS_HASH_IPV6 | IBV_RX_HASH_DST_PORT_TCP)
 #define MLX5_RSS_HASH_NONE 0ULL
 
+/*
+ * Define integrity bits supported by the PMD
+ */
+#define MLX5_DV_PKT_INTEGRITY_MASK \
+	(RTE_FLOW_ITEM_INTEGRITY_L3_OK | RTE_FLOW_ITEM_INTEGRITY_L4_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK)
+
+#define MLX5_ETHER_TYPE_FROM_HEADER(_s, _m, _itm, _prt) do { \
+	(_prt) = ((const struct _s *)(_itm)->mask)->_m;       \
+	(_prt) &= ((const struct _s *)(_itm)->spec)->_m;      \
+	(_prt) = rte_be_to_cpu_16((_prt));                    \
+} while (0)
+
 /* array of valid combinations of RX Hash fields for RSS */
 static const uint64_t mlx5_rss_hash_fields[] = {
 	MLX5_RSS_HASH_IPV4,
@@ -1282,6 +1299,13 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+static __rte_always_inline const struct rte_flow_item *
+mlx5_find_end_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++);
+	return item;
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
@@ -1433,6 +1457,8 @@ struct mlx5_flow_meter_sub_policy *mlx5_flow_meter_sub_policy_rss_prepare
 		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
 int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh);
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..2d4042e458 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6230,6 +6230,163 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static uint16_t
+mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint16_t l3_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			l3_protocol = RTE_ETHER_TYPE_IPV4;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			l3_protocol = RTE_ETHER_TYPE_IPV6;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
+							    type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
+							    inner_type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l3_ok:
+	*head = item;
+	return l3_protocol;
+}
+
+static uint8_t
+mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint8_t l4_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			l4_protocol = IPPROTO_TCP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			l4_protocol = IPPROTO_UDP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv4 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.next_proto_id &
+					      spec->hdr.next_proto_id;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv6 *mask, *spec;
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.proto & spec->hdr.proto;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l4_ok:
+	*head = item;
+	return l4_protocol;
+}
+
+static int
+flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
+				const struct rte_flow_item *rule_items,
+				const struct rte_flow_item *integrity_item,
+				struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
+	const struct rte_flow_item_integrity *mask = (typeof(mask))
+						     integrity_item->mask;
+	const struct rte_flow_item_integrity *spec = (typeof(spec))
+						     integrity_item->spec;
+	uint32_t protocol;
+
+	if (!priv->config.hca_attr.pkt_integrity_match)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "packet integrity integrity_item not supported");
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (mask->value && ((mask->value & ~MLX5_DV_PKT_INTEGRITY_MASK) != 0))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "unsupported integrity filter");
+	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
+	if (spec->level > 1) {
+		if (!tunnel_item)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing tunnel item");
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	if (mask->l3_ok || mask->ipv4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L3 protocol");
+	}
+	if (mask->l4_ok || mask->l4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L4 protocol");
+	}
+
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
  *
@@ -6321,6 +6478,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
 	const struct rte_eth_hairpin_conf *conf;
+	const struct rte_flow_item *rule_items = items;
 	bool def_policy = false;
 
 	if (items == NULL)
@@ -6644,6 +6802,18 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			if (item_flags & RTE_FLOW_ITEM_TYPE_INTEGRITY)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 NULL, "multiple integrity items not supported");
+			ret = flow_dv_validate_item_integrity(dev, rule_items,
+							      items, error);
+			if (ret < 0)
+				return ret;
+			last_item = MLX5_FLOW_ITEM_INTEGRITY;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -11119,6 +11289,90 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+static void
+flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l4_ok) {
+		/* application l4_ok filter aggregates all hardware l4 filters
+		 * therefore hw l4_checksum_ok must be implicitly added here.
+		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
+		if (value->l4_ok) {
+			/* application l4_ok = 1 matches sets both hw flags
+			 * l4_ok and l4_checksum_ok flags to 1.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
+		} else {
+			/* application l4_ok = 0 matches on hw flag
+			 * l4_checksum_ok = 0 only.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 0);
+		}
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 mask->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok & value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l3_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		if (value->l3_ok) {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok, 1);
+		} else {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 0);
+		}
+	} else if (mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_item_integrity(void *matcher, void *key,
+				 const struct rte_flow_item *item)
+{
+	const struct rte_flow_item_integrity *mask = item->mask;
+	const struct rte_flow_item_integrity *value = item->spec;
+	void *headers_m;
+	void *headers_v;
+
+	if (!value)
+		return;
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (value->level > 1) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v);
+	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -12027,6 +12281,10 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			/* No other protocol should follow eCPRI layer. */
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			flow_dv_translate_item_integrity(match_mask,
+							 match_value, items);
+			break;
 		default:
 			break;
 		}
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH 4/4] doc: add MLX5 PMD integrity item limitations
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
                   ` (2 preceding siblings ...)
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 3/4] net/mlx5: support integrity flow item Gregory Etelson
@ 2021-04-28 17:59 ` Gregory Etelson
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
  5 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-28 17:59 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add MLX5 PMD integrity item limitations.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b27a9a69f6..12b45a69b5 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -107,6 +107,7 @@ Features
 - 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
   flow group.
 - Flow metering, including meter policy API.
+- Flow integrity offload API.
 
 Limitations
 -----------
@@ -417,6 +418,20 @@ Limitations
      - yellow: must be empty.
      - RED: must be DROP.
 
+- Integrity:
+
+  - Integrity offload is enabled for **ConnectX-6** family.
+  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
+  - ``level`` value 0 references outer headers.
+  - Multiple integrity items not supported in a single flow rule.
+  - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
+    For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
+    TCP or UDP, must be in the rule pattern as well::
+
+      flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
+      or
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
 Statistics
 ----------
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-28 18:06   ` Thomas Monjalon
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2021-04-28 18:06 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, matan, orika, rasland, Viacheslav Ovsiienko, Ferruh Yigit,
	Andrew Rybchenko, Ajit Khaparde

28/04/2021 19:59, Gregory Etelson:
> Add integrity item definition to the rte_flow_desc_item array.
> The new entry allows RTE conv API to work with the new flow item.

What is RTE conv API?

> Add bitmasks to the integrity item value.
> The masks allow querying multiple integrity filters in a single
> compare operation.
[...]
> +#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       (1ULL << 0)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        (1ULL << 1)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        (1ULL << 2)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        (1ULL << 3)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    (1ULL << 4)
> +#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK (1ULL << 5)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   (1ULL << 6)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    (1ULL << 7)

Please use RTE_BIT macro,
and add a reference to these bits in a doxygen comment
where appropriate, thanks.



^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
                   ` (3 preceding siblings ...)
  2021-04-28 17:59 ` [dpdk-dev] [PATCH 4/4] doc: add MLX5 PMD integrity item limitations Gregory Etelson
@ 2021-04-29  6:16 ` Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
                     ` (3 more replies)
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
  5 siblings, 4 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29  6:16 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko

Support the integrity flow item in the MLX5 PMD.
The integrity flow item was described in
commit b10a421a1f3b ("ethdev: add packet integrity check flow rules").

v2:
Add MLX5 PMD integrity item support to the 21.05 release notes.
Use the RTE_BIT64() macro in the RTE_FLOW_ITEM_INTEGRITY_* definitions.

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Gregory Etelson (4):
  ethdev: fix integrity flow item
  net/mlx5: update PRM definitions
  net/mlx5: support integrity flow item
  doc: add MLX5 PMD integrity item support

 doc/guides/nics/mlx5.rst               |  15 ++
 doc/guides/rel_notes/release_21_02.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c   |  31 ++-
 drivers/common/mlx5/mlx5_devx_cmds.h   |   1 +
 drivers/common/mlx5/mlx5_prm.h         |  35 +++-
 drivers/net/mlx5/mlx5_flow.c           |  25 +++
 drivers/net/mlx5/mlx5_flow.h           |  26 +++
 drivers/net/mlx5/mlx5_flow_dv.c        | 258 +++++++++++++++++++++++++
 lib/ethdev/rte_flow.c                  |   1 +
 lib/ethdev/rte_flow.h                  |   9 +
 10 files changed, 396 insertions(+), 6 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
@ 2021-04-29  6:16   ` Gregory Etelson
  2021-04-29  7:57     ` Thomas Monjalon
  2021-04-29 10:13     ` Ori Kam
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: update PRM definitions Gregory Etelson
                     ` (2 subsequent siblings)
  3 siblings, 2 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29  6:16 UTC (permalink / raw)
  To: dev
  Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde

Add integrity item definition to the rte_flow_desc_item array.
The new entry allows building an RTE flow item from data
stored in the rte_flow_item_integrity type.

Add bitmasks to the integrity item value.
The masks allow querying multiple integrity filters in a single
compare operation.

Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 lib/ethdev/rte_flow.c | 1 +
 lib/ethdev/rte_flow.h | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index c7c7108933..8cb7a069c8 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
 	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
 	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
 	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
 };
 
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 94c8c1ccc8..147fdefcae 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -1738,6 +1738,15 @@ struct rte_flow_item_integrity {
 	};
 };
 
+#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       RTE_BIT64(0)
+#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        RTE_BIT64(1)
+#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        RTE_BIT64(2)
+#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        RTE_BIT64(3)
+#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    RTE_BIT64(4)
+#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK RTE_BIT64(5)
+#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   RTE_BIT64(6)
+#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    RTE_BIT64(7)
+
 #ifndef __cplusplus
 static const struct rte_flow_item_integrity
 rte_flow_item_integrity_mask = {
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 2/4] net/mlx5: update PRM definitions
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-29  6:16   ` Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: support integrity flow item Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
  3 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29  6:16 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add integrity and IPv4 IHL bits to PRM file.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 31 ++++++++++++++++++++----
 drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
 drivers/common/mlx5/mlx5_prm.h       | 35 ++++++++++++++++++++++++++--
 3 files changed, 61 insertions(+), 6 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 6c6f4391a1..3d3994e575 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -626,6 +626,29 @@ mlx5_devx_cmd_create_flex_parser(void *ctx,
 	return parse_flex_obj;
 }
 
+static int
+mlx5_devx_query_pkt_integrity_match(void *hcattr)
+{
+	return MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.inner_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.outer_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_checksum_ok);
+}
+
 /**
  * Query HCA attributes.
  * Using those attributes we can check on run time if the device
@@ -823,10 +846,10 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
-	attr->log_max_ft_sampler_num =
-			MLX5_GET(flow_table_nic_cap,
-			hcattr, flow_table_properties.log_max_ft_sampler_num);
-
+	attr->log_max_ft_sampler_num = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
 	memset(out, 0, sizeof(out));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index eee8fee107..b31a828383 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -142,6 +142,7 @@ struct mlx5_hca_attr {
 	uint32_t cqe_compression:1;
 	uint32_t mini_cqe_resp_flow_tag:1;
 	uint32_t mini_cqe_resp_l3_l4_tag:1;
+	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
 	int log_max_qp_sz;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c6d8060bb9..903faccd56 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -778,7 +778,12 @@ struct mlx5_ifc_fte_match_set_lyr_2_4_bits {
 	u8 tcp_flags[0x9];
 	u8 tcp_sport[0x10];
 	u8 tcp_dport[0x10];
-	u8 reserved_at_c0[0x18];
+	u8 reserved_at_c0[0x10];
+	u8 ipv4_ihl[0x4];
+	u8 l3_ok[0x1];
+	u8 l4_ok[0x1];
+	u8 ipv4_checksum_ok[0x1];
+	u8 l4_checksum_ok[0x1];
 	u8 ip_ttl_hoplimit[0x8];
 	u8 udp_sport[0x10];
 	u8 udp_dport[0x10];
@@ -1656,9 +1661,35 @@ struct mlx5_ifc_roce_caps_bits {
 	u8 reserved_at_20[0x7e0];
 };
 
+/*
+ * Table 1872 - Flow Table Fields Supported 2 Format
+ */
+struct mlx5_ifc_ft_fields_support_2_bits {
+	u8 reserved_at_0[0x14];
+	u8 inner_ipv4_ihl[0x1];
+	u8 outer_ipv4_ihl[0x1];
+	u8 psp_syndrome[0x1];
+	u8 inner_l3_ok[0x1];
+	u8 inner_l4_ok[0x1];
+	u8 outer_l3_ok[0x1];
+	u8 outer_l4_ok[0x1];
+	u8 psp_header[0x1];
+	u8 inner_ipv4_checksum_ok[0x1];
+	u8 inner_l4_checksum_ok[0x1];
+	u8 outer_ipv4_checksum_ok[0x1];
+	u8 outer_l4_checksum_ok[0x1];
+};
+
 struct mlx5_ifc_flow_table_nic_cap_bits {
 	u8	   reserved_at_0[0x200];
-	struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_unused[5];
+	u8         reserved_at_1C0[0x200];
+	u8         header_modify_nic_receive[0x400];
+	struct mlx5_ifc_ft_fields_support_2_bits
+	       ft_field_support_2_nic_receive;
 };
 
 union mlx5_ifc_hca_cap_union_bits {
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 3/4] net/mlx5: support integrity flow item
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: update PRM definitions Gregory Etelson
@ 2021-04-29  6:16   ` Gregory Etelson
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
  3 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29  6:16 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

MLX5 PMD supports the following integrity filters for outer and
inner network headers:
- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 reference outer headers.
`level` values greater than 1 reference inner headers.

Flow rule items supplied by the application must explicitly specify
the network headers referred to by the integrity item. For example:
flow create 0 ingress
  pattern
    integrity level is 0 value mask l3_ok value spec l3_ok /
    eth / ipv6 / end …

or

flow create 0 ingress
  pattern
    integrity level is 0 value mask l4_ok value spec 0 /
    eth / ipv4 proto is udp / end …

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    |  25 ++++
 drivers/net/mlx5/mlx5_flow.h    |  26 ++++
 drivers/net/mlx5/mlx5_flow_dv.c | 258 ++++++++++++++++++++++++++++++++
 3 files changed, 309 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 15ed5ec7a2..db9a251c68 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8083,6 +8083,31 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 	return ret;
 }
 
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		case RTE_FLOW_ITEM_TYPE_GRE:
+		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
+			return item;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
+				return item;
+			break;
+		}
+	}
+	return NULL;
+}
+
 #ifndef HAVE_MLX5DV_DR
 #define MLX5_DOMAIN_SYNC_FLOW ((1 << 0) | (1 << 1))
 #else
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..eb7035d259 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -145,6 +145,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
+/* INTEGRITY item bit */
+#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1010,6 +1013,20 @@ struct rte_flow {
 	(MLX5_RSS_HASH_IPV6 | IBV_RX_HASH_DST_PORT_TCP)
 #define MLX5_RSS_HASH_NONE 0ULL
 
+/*
+ * Define integrity bits supported by the PMD
+ */
+#define MLX5_DV_PKT_INTEGRITY_MASK \
+	(RTE_FLOW_ITEM_INTEGRITY_L3_OK | RTE_FLOW_ITEM_INTEGRITY_L4_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK | \
+	 RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK)
+
+#define MLX5_ETHER_TYPE_FROM_HEADER(_s, _m, _itm, _prt) do { \
+	(_prt) = ((const struct _s *)(_itm)->mask)->_m;       \
+	(_prt) &= ((const struct _s *)(_itm)->spec)->_m;      \
+	(_prt) = rte_be_to_cpu_16((_prt));                    \
+} while (0)
+
 /* array of valid combinations of RX Hash fields for RSS */
 static const uint64_t mlx5_rss_hash_fields[] = {
 	MLX5_RSS_HASH_IPV4,
@@ -1282,6 +1299,13 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+static __rte_always_inline const struct rte_flow_item *
+mlx5_find_end_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++);
+	return item;
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
@@ -1433,6 +1457,8 @@ struct mlx5_flow_meter_sub_policy *mlx5_flow_meter_sub_policy_rss_prepare
 		struct mlx5_flow_rss_desc *rss_desc[MLX5_MTR_RTE_COLORS]);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
+const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item);
 void mlx5_release_tunnel_hub(struct mlx5_dev_ctx_shared *sh, uint16_t port_id);
 int mlx5_alloc_tunnel_hub(struct mlx5_dev_ctx_shared *sh);
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..2d4042e458 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6230,6 +6230,163 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static uint16_t
+mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint16_t l3_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			l3_protocol = RTE_ETHER_TYPE_IPV4;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			l3_protocol = RTE_ETHER_TYPE_IPV6;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
+							    type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
+							    inner_type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l3_ok:
+	*head = item;
+	return l3_protocol;
+}
+
+static uint8_t
+mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint8_t l4_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			l4_protocol = IPPROTO_TCP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			l4_protocol = IPPROTO_UDP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv4 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.next_proto_id &
+					      spec->hdr.next_proto_id;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv6 *mask, *spec;
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.proto & spec->hdr.proto;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		}
+	}
+
+	return 0;
+
+l4_ok:
+	*head = item;
+	return l4_protocol;
+}
+
+static int
+flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
+				const struct rte_flow_item *rule_items,
+				const struct rte_flow_item *integrity_item,
+				struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
+	const struct rte_flow_item_integrity *mask = (typeof(mask))
+						     integrity_item->mask;
+	const struct rte_flow_item_integrity *spec = (typeof(spec))
+						     integrity_item->spec;
+	uint32_t protocol;
+
+	if (!priv->config.hca_attr.pkt_integrity_match)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "packet integrity integrity_item not supported");
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (mask->value && ((mask->value & ~MLX5_DV_PKT_INTEGRITY_MASK) != 0))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "unsupported integrity filter");
+	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
+	if (spec->level > 1) {
+		if (!tunnel_item)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing tunnel item");
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	if (mask->l3_ok || mask->ipv4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L3 protocol");
+	}
+	if (mask->l4_ok || mask->l4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L4 protocol");
+	}
+
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
  *
@@ -6321,6 +6478,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
 	const struct rte_eth_hairpin_conf *conf;
+	const struct rte_flow_item *rule_items = items;
 	bool def_policy = false;
 
 	if (items == NULL)
@@ -6644,6 +6802,18 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			if (item_flags & RTE_FLOW_ITEM_TYPE_INTEGRITY)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 NULL, "multiple integrity items not supported");
+			ret = flow_dv_validate_item_integrity(dev, rule_items,
+							      items, error);
+			if (ret < 0)
+				return ret;
+			last_item = MLX5_FLOW_ITEM_INTEGRITY;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -11119,6 +11289,90 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+static void
+flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l4_ok) {
+		/* application l4_ok filter aggregates all hardware l4 filters
+		 * therefore hw l4_checksum_ok must be implicitly added here.
+		 */
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok, 1);
+		if (value->l4_ok) {
+			/* application l4_ok = 1 matches sets both hw flags
+			 * l4_ok and l4_checksum_ok flags to 1.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok, 1);
+		} else {
+			/* application l4_ok = 0 matches on hw flag
+			 * l4_checksum_ok = 0 only.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 0);
+		}
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 mask->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok & value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l3_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		if (value->l3_ok) {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok, 1);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok, 1);
+		} else {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 0);
+		}
+	} else if (mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_item_integrity(void *matcher, void *key,
+				 const struct rte_flow_item *item)
+{
+	const struct rte_flow_item_integrity *mask = item->mask;
+	const struct rte_flow_item_integrity *value = item->spec;
+	void *headers_m;
+	void *headers_v;
+
+	if (!value)
+		return;
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (value->level > 1) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v);
+	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -12027,6 +12281,10 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			/* No other protocol should follow eCPRI layer. */
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			flow_dv_translate_item_integrity(match_mask,
+							 match_value, items);
+			break;
 		default:
 			break;
 		}
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v2 4/4] doc: add MLX5 PMD integrity item support
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
                     ` (2 preceding siblings ...)
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: support integrity flow item Gregory Etelson
@ 2021-04-29  6:16   ` Gregory Etelson
  3 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29  6:16 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add MLX5 PMD integrity item support to 21.05 release notes.

Add MLX5 PMD integrity item limitations to the PMD records.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst               | 15 +++++++++++++++
 doc/guides/rel_notes/release_21_02.rst |  1 +
 2 files changed, 16 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index b27a9a69f6..12b45a69b5 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -107,6 +107,7 @@ Features
 - 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
   flow group.
 - Flow metering, including meter policy API.
+- Flow integrity offload API.
 
 Limitations
 -----------
@@ -417,6 +418,20 @@ Limitations
      - yellow: must be empty.
      - RED: must be DROP.
 
+- Integrity:
+
+  - Integrity offload is enabled for **ConnectX-6** family.
+  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
+  - ``level`` value 0 references outer headers.
+  - Multiple integrity items not supported in a single flow rule.
+  - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
+    For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
+    TCP or UDP, must be in the rule pattern as well::
+
+      flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
+      or
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 1813fe767a..ce27879f08 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -138,6 +138,7 @@ New Features
     egress flow groups greater than 0 and for any transfer flow group.
   * Added support for the Tx mbuf fast free offload.
   * Added support for flow modify field action.
+  * Added support for flow integrity item.
 
 * **Updated the Pensando ionic driver.**
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-29  7:57     ` Thomas Monjalon
  2021-04-29 10:13     ` Ori Kam
  1 sibling, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2021-04-29  7:57 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dev, matan, orika, rasland, Viacheslav Ovsiienko, Ferruh Yigit,
	Andrew Rybchenko, Ajit Khaparde, asafp

29/04/2021 08:16, Gregory Etelson:
> Add integrity item definition to the rte_flow_desc_item array.
> The new entry allows building an RTE flow item from data
> stored in the rte_flow_item_integrity type.
> 
> Add bitmasks to the integrity item value.
> The masks allow querying multiple integrity filters in a single
> compare operation.
> 
> Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
> 
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> +#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       RTE_BIT64(0)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        RTE_BIT64(1)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        RTE_BIT64(2)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        RTE_BIT64(3)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    RTE_BIT64(4)
> +#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK RTE_BIT64(5)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   RTE_BIT64(6)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    RTE_BIT64(7)

I still have the same comment as in v1: we are missing
an API comment to reference the bits RTE_FLOW_ITEM_INTEGRITY_*
where they should be used.
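
(A hedged sketch of the kind of Doxygen reference being asked for; the
wording and placement are illustrative, not the actual change:)

  /**
   * RTE_FLOW_ITEM_TYPE_INTEGRITY
   *
   * Match packet integrity check results.
   * The bits tested through the @p value field are defined by the
   * RTE_FLOW_ITEM_INTEGRITY_* flags.
   */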



^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item
  2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
  2021-04-29  7:57     ` Thomas Monjalon
@ 2021-04-29 10:13     ` Ori Kam
  2021-04-29 11:37       ` Thomas Monjalon
  1 sibling, 1 reply; 24+ messages in thread
From: Ori Kam @ 2021-04-29 10:13 UTC (permalink / raw)
  To: Gregory Etelson, dev
  Cc: Matan Azrad, Raslan Darawsheh, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
	Ajit Khaparde

Hi Gregory,

> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Thursday, April 29, 2021 9:17 AM
> Subject: [PATCH v2 1/4] ethdev: fix integrity flow item
> 
> Add integrity item definition to the rte_flow_desc_item array.
> The new entry allows building an RTE flow item from data stored in
> the rte_flow_item_integrity type.
> 
> Add bitmasks to the integrity item value.
> The masks allow querying multiple integrity filters in a single compare
> operation.
> 
> Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
> 
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  lib/ethdev/rte_flow.c | 1 +
>  lib/ethdev/rte_flow.h | 9 +++++++++
>  2 files changed, 10 insertions(+)
> 
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index
> c7c7108933..8cb7a069c8 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_item[] = {
>  	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
>  	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
>  	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct
> rte_flow_item_geneve_opt)),
> +	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
>  	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),  };
> 
This fix is correct. 

> diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> 94c8c1ccc8..147fdefcae 100644
> --- a/lib/ethdev/rte_flow.h
> +++ b/lib/ethdev/rte_flow.h
> @@ -1738,6 +1738,15 @@ struct rte_flow_item_integrity {
>  	};
>  };
> 
> +#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       RTE_BIT64(0)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        RTE_BIT64(1)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        RTE_BIT64(2)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        RTE_BIT64(3)
> +#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    RTE_BIT64(4)
> +#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK RTE_BIT64(5)
> +#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   RTE_BIT64(6)
> +#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    RTE_BIT64(7)
> +

I don't think we need those flags; they give two options for the same API.
I suggest that we remove the value from the struct.

In any case, I think this should be in a different thread than the above fix.
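
To illustrate the duplication (a sketch only, based on the v2 definitions
quoted above), the same mask can be written in two ways:

  /* option 1: structure bit-fields */
  struct rte_flow_item_integrity m1 = { .l3_ok = 1, .l4_ok = 1 };

  /* option 2: the unioned value plus the new macros */
  struct rte_flow_item_integrity m2 = {
          .value = RTE_FLOW_ITEM_INTEGRITY_L3_OK |
                   RTE_FLOW_ITEM_INTEGRITY_L4_OK,
  };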

>  #ifndef __cplusplus
>  static const struct rte_flow_item_integrity  rte_flow_item_integrity_mask =
> {
> --
> 2.31.1

Best,
Ori

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item
  2021-04-29 10:13     ` Ori Kam
@ 2021-04-29 11:37       ` Thomas Monjalon
  0 siblings, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2021-04-29 11:37 UTC (permalink / raw)
  To: Gregory Etelson, Ori Kam
  Cc: dev, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko,
	Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde

29/04/2021 12:13, Ori Kam:
> From: Gregory Etelson <getelson@nvidia.com>
> > 
> > Add integrity item definition to the rte_flow_desc_item array.
> > The new entry allows to build RTE flow item from a data stored in
> > rte_flow_item_integrity type.
> > 
> > Add bitmasks to the integrity item value.
> > The masks allow to query multiple integrity filters in a single compare
> > operation.
> > 
> > Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
> > 
> > Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> > ---
> >  lib/ethdev/rte_flow.c | 1 +
> >  lib/ethdev/rte_flow.h | 9 +++++++++
> >  2 files changed, 10 insertions(+)
> > 
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c index
> > c7c7108933..8cb7a069c8 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data
> > rte_flow_desc_item[] = {
> >  	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
> >  	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
> >  	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct
> > rte_flow_item_geneve_opt)),
> > +	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
> >  	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),  };
> > 
> This fix is correct. 
> 
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> > +#define RTE_FLOW_ITEM_INTEGRITY_PKT_OK       RTE_BIT64(0)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L2_OK        RTE_BIT64(1)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L3_OK        RTE_BIT64(2)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L4_OK        RTE_BIT64(3)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L2_CRC_OK    RTE_BIT64(4)
> > +#define RTE_FLOW_ITEM_INTEGRITY_IPV4_CSUM_OK RTE_BIT64(5)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L4_CSUM_OK   RTE_BIT64(6)
> > +#define RTE_FLOW_ITEM_INTEGRITY_L3_LEN_OK    RTE_BIT64(7)
> > +
> 
> I don't think that we need those flags, this means two option for the same API,
> I suggest that we remove the value from the struct.

To make it clear, these flags were for use with
rte_flow_item_integrity.value, but it seems we can just remove
the struct member "value" which was unioned with some bitfields.

> In any case I think this should be in a different thread then the above fix.

I am OK with such a fix; it looks better to remove the union,
which leads to duplicating the API.
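
Concretely, that direction would leave roughly this (a sketch of the idea
only, not a patch; field names taken from the current header):

  struct rte_flow_item_integrity {
          uint32_t level; /* packet encapsulation level */
          uint64_t packet_ok:1;
          uint64_t l2_ok:1;
          uint64_t l3_ok:1;
          uint64_t l4_ok:1;
          uint64_t l2_crc_ok:1;
          uint64_t ipv4_csum_ok:1;
          uint64_t l4_csum_ok:1;
          uint64_t l3_len_ok:1;
          uint64_t reserved:56;
  };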




^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow item support
  2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
                   ` (4 preceding siblings ...)
  2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
@ 2021-04-29 18:36 ` Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item Gregory Etelson
                     ` (4 more replies)
  5 siblings, 5 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29 18:36 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko

v2:
Add MLX5 PMD integrity item support to 21.05 release notes.
Use RTE_BIT64() macro in RTE_FLOW_ITEM_INTEGRITY_* definition.

v3:
Remove RTE_FLOW_ITEM_INTEGRITY_* bit masks.

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Gregory Etelson (4):
  ethdev: fix integrity flow item
  net/mlx5: update PRM definitions
  net/mlx5: support integrity flow item
  doc: add MLX5 PMD integrity item support

 doc/guides/nics/mlx5.rst               |  15 ++
 doc/guides/rel_notes/release_21_02.rst |   1 +
 drivers/common/mlx5/mlx5_devx_cmds.c   |  31 ++-
 drivers/common/mlx5/mlx5_devx_cmds.h   |   1 +
 drivers/common/mlx5/mlx5_prm.h         |  37 ++-
 drivers/net/mlx5/mlx5_flow.h           |  29 +++
 drivers/net/mlx5/mlx5_flow_dv.c        | 311 +++++++++++++++++++++++++
 lib/ethdev/rte_flow.c                  |   1 +
 8 files changed, 419 insertions(+), 7 deletions(-)

-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
@ 2021-04-29 18:36   ` Gregory Etelson
  2021-04-29 21:19     ` Ajit Khaparde
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 2/4] net/mlx5: update PRM definitions Gregory Etelson
                     ` (3 subsequent siblings)
  4 siblings, 1 reply; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29 18:36 UTC (permalink / raw)
  To: dev
  Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko,
	Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde

Add integrity item definition to the rte_flow_desc_item array.
The new entry allows building an RTE flow item from data
stored in the rte_flow_item_integrity type.

Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 lib/ethdev/rte_flow.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
index c7c7108933..8cb7a069c8 100644
--- a/lib/ethdev/rte_flow.c
+++ b/lib/ethdev/rte_flow.c
@@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
 	MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
 	MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
 	MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
+	MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
 	MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
 };
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 2/4] net/mlx5: update PRM definitions
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-29 18:36   ` Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 3/4] net/mlx5: support integrity flow item Gregory Etelson
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29 18:36 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add integrity and IPv4 IHL bits to the PRM file.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 31 ++++++++++++++++++++---
 drivers/common/mlx5/mlx5_devx_cmds.h |  1 +
 drivers/common/mlx5/mlx5_prm.h       | 37 +++++++++++++++++++++++++---
 3 files changed, 62 insertions(+), 7 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 79fff6457c..1b54c05313 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -626,6 +626,29 @@ mlx5_devx_cmd_create_flex_parser(void *ctx,
 	return parse_flex_obj;
 }
 
+static int
+mlx5_devx_query_pkt_integrity_match(void *hcattr)
+{
+	return MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l3_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.inner_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.inner_l4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive
+				.outer_ipv4_checksum_ok) &&
+	       MLX5_GET(flow_table_nic_cap, hcattr,
+			ft_field_support_2_nic_receive.outer_l4_checksum_ok);
+}
+
 /**
  * Query HCA attributes.
  * Using those attributes we can check on run time if the device
@@ -823,10 +846,10 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		return -1;
 	}
 	hcattr = MLX5_ADDR_OF(query_hca_cap_out, out, capability);
-	attr->log_max_ft_sampler_num =
-			MLX5_GET(flow_table_nic_cap,
-			hcattr, flow_table_properties.log_max_ft_sampler_num);
-
+	attr->log_max_ft_sampler_num = MLX5_GET
+		(flow_table_nic_cap, hcattr,
+		 flow_table_properties_nic_receive.log_max_ft_sampler_num);
+	attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr);
 	/* Query HCA offloads for Ethernet protocol. */
 	memset(in, 0, sizeof(in));
 	memset(out, 0, sizeof(out));
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 870bdb6b30..5681e03fee 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -143,6 +143,7 @@ struct mlx5_hca_attr {
 	uint32_t cqe_compression:1;
 	uint32_t mini_cqe_resp_flow_tag:1;
 	uint32_t mini_cqe_resp_l3_l4_tag:1;
+	uint32_t pkt_integrity_match:1; /* 1 if HW supports integrity item */
 	struct mlx5_hca_qos_attr qos;
 	struct mlx5_hca_vdpa_attr vdpa;
 	int log_max_qp_sz;
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index efa5ae67bf..330101233a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -778,7 +778,12 @@ struct mlx5_ifc_fte_match_set_lyr_2_4_bits {
 	u8 tcp_flags[0x9];
 	u8 tcp_sport[0x10];
 	u8 tcp_dport[0x10];
-	u8 reserved_at_c0[0x18];
+	u8 reserved_at_c0[0x10];
+	u8 ipv4_ihl[0x4];
+	u8 l3_ok[0x1];
+	u8 l4_ok[0x1];
+	u8 ipv4_checksum_ok[0x1];
+	u8 l4_checksum_ok[0x1];
 	u8 ip_ttl_hoplimit[0x8];
 	u8 udp_sport[0x10];
 	u8 udp_dport[0x10];
@@ -1656,9 +1661,35 @@ struct mlx5_ifc_roce_caps_bits {
 	u8 reserved_at_20[0x7e0];
 };
 
+/*
+ * Table 1872 - Flow Table Fields Supported 2 Format
+ */
+struct mlx5_ifc_ft_fields_support_2_bits {
+	u8 reserved_at_0[0x14];
+	u8 inner_ipv4_ihl[0x1];
+	u8 outer_ipv4_ihl[0x1];
+	u8 psp_syndrome[0x1];
+	u8 inner_l3_ok[0x1];
+	u8 inner_l4_ok[0x1];
+	u8 outer_l3_ok[0x1];
+	u8 outer_l4_ok[0x1];
+	u8 psp_header[0x1];
+	u8 inner_ipv4_checksum_ok[0x1];
+	u8 inner_l4_checksum_ok[0x1];
+	u8 outer_ipv4_checksum_ok[0x1];
+	u8 outer_l4_checksum_ok[0x1];
+};
+
 struct mlx5_ifc_flow_table_nic_cap_bits {
-	u8	   reserved_at_0[0x200];
-	struct mlx5_ifc_flow_table_prop_layout_bits flow_table_properties;
+	u8 reserved_at_0[0x200];
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_nic_receive;
+	struct mlx5_ifc_flow_table_prop_layout_bits
+	       flow_table_properties_unused[5];
+	u8 reserved_at_1C0[0x200];
+	u8 header_modify_nic_receive[0x400];
+	struct mlx5_ifc_ft_fields_support_2_bits
+	       ft_field_support_2_nic_receive;
 };
 
 union mlx5_ifc_hca_cap_union_bits {
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 3/4] net/mlx5: support integrity flow item
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 2/4] net/mlx5: update PRM definitions Gregory Etelson
@ 2021-04-29 18:36   ` Gregory Etelson
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
  2021-05-04 15:42   ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Ferruh Yigit
  4 siblings, 0 replies; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29 18:36 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

MLX5 PMD supports the following integrity filters for outer and
inner network headers:
- l3_ok
- l4_ok
- ipv4_csum_ok
- l4_csum_ok

`level` values 0 and 1 reference outer headers.
`level` values greater than 1 reference inner headers.

Flow rule items supplied by the application must explicitly specify
the network headers referred to by the integrity item. For example:
flow create 0 ingress
  pattern
    integrity level is 0 value mask l3_ok value spec l3_ok /
    eth / ipv6 / end …

or

flow create 0 ingress
  pattern
    integrity level is 0 value mask l4_ok value spec 0 /
    eth / ipv4 proto is udp / end …
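
For reference, the second rule above expressed directly through the
rte_flow API would look roughly like the sketch below. The helper name,
port handling and the queue action are assumptions for illustration;
only the pattern is relevant to the integrity item.

  #include <stdint.h>
  #include <netinet/in.h>   /* IPPROTO_UDP */
  #include <rte_flow.h>

  /* hypothetical helper: steer packets with a failed outer L4 check */
  static int
  add_l4_bad_rule(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_flow_attr attr = { .ingress = 1 };
          /* match packets whose outer L4 checks failed: l4_ok == 0 */
          struct rte_flow_item_integrity integ_spec = { .level = 0, .l4_ok = 0 };
          struct rte_flow_item_integrity integ_mask = { .l4_ok = 1 };
          /* the referenced L4 header must be explicit: IPv4 carrying UDP */
          struct rte_flow_item_ipv4 ipv4_spec = { .hdr.next_proto_id = IPPROTO_UDP };
          struct rte_flow_item_ipv4 ipv4_mask = { .hdr.next_proto_id = 0xff };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
                    .spec = &integ_spec, .mask = &integ_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_ETH },
                  { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                    .spec = &ipv4_spec, .mask = &ipv4_mask },
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_queue queue = { .index = queue_id };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };
          struct rte_flow_error err;

          return rte_flow_create(port_id, &attr, pattern, actions, &err) ?
                 0 : -1;
  }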

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  29 +++
 drivers/net/mlx5/mlx5_flow_dv.c | 311 ++++++++++++++++++++++++++++++++
 2 files changed, 340 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..6b3bcf3f46 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -145,6 +145,9 @@ enum mlx5_feature_name {
 #define MLX5_FLOW_LAYER_GENEVE_OPT (UINT64_C(1) << 32)
 #define MLX5_FLOW_LAYER_GTP_PSC (UINT64_C(1) << 33)
 
+/* INTEGRITY item bit */
+#define MLX5_FLOW_ITEM_INTEGRITY (UINT64_C(1) << 34)
+
 /* Outer Masks. */
 #define MLX5_FLOW_LAYER_OUTER_L3 \
 	(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1010,6 +1013,14 @@ struct rte_flow {
 	(MLX5_RSS_HASH_IPV6 | IBV_RX_HASH_DST_PORT_TCP)
 #define MLX5_RSS_HASH_NONE 0ULL
 
+
+/* extract next protocol type from Ethernet & VLAN headers */
+#define MLX5_ETHER_TYPE_FROM_HEADER(_s, _m, _itm, _prt) do { \
+	(_prt) = ((const struct _s *)(_itm)->mask)->_m;       \
+	(_prt) &= ((const struct _s *)(_itm)->spec)->_m;      \
+	(_prt) = rte_be_to_cpu_16((_prt));                    \
+} while (0)
+
 /* array of valid combinations of RX Hash fields for RSS */
 static const uint64_t mlx5_rss_hash_fields[] = {
 	MLX5_RSS_HASH_IPV4,
@@ -1282,6 +1293,24 @@ mlx5_aso_meter_by_idx(struct mlx5_priv *priv, uint32_t idx)
 	return &pool->mtrs[idx % MLX5_ASO_MTRS_PER_POOL];
 }
 
+static __rte_always_inline const struct rte_flow_item *
+mlx5_find_end_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++);
+	return item;
+}
+
+static __rte_always_inline bool
+mlx5_validate_integrity_item(const struct rte_flow_item_integrity *item)
+{
+	struct rte_flow_item_integrity test = *item;
+	test.l3_ok = 0;
+	test.l4_ok = 0;
+	test.ipv4_csum_ok = 0;
+	test.l4_csum_ok = 0;
+	return (test.value == 0);
+}
+
 int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
 			     const struct mlx5_flow_tunnel *tunnel,
 			     uint32_t group, uint32_t *table,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..6d094d7d0e 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -268,6 +268,31 @@ struct field_modify_info modify_tcp[] = {
 	{0, 0, 0},
 };
 
+static const struct rte_flow_item *
+mlx5_flow_find_tunnel_item(const struct rte_flow_item *item)
+{
+	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+		case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		case RTE_FLOW_ITEM_TYPE_GRE:
+		case RTE_FLOW_ITEM_TYPE_MPLS:
+		case RTE_FLOW_ITEM_TYPE_NVGRE:
+		case RTE_FLOW_ITEM_TYPE_GENEVE:
+			return item;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item[1].type == RTE_FLOW_ITEM_TYPE_IPV4 ||
+			    item[1].type == RTE_FLOW_ITEM_TYPE_IPV6)
+				return item;
+			break;
+		}
+	}
+	return NULL;
+}
+
 static void
 mlx5_flow_tunnel_ip_check(const struct rte_flow_item *item __rte_unused,
 			  uint8_t next_protocol, uint64_t *item_flags,
@@ -6230,6 +6255,158 @@ flow_dv_validate_attributes(struct rte_eth_dev *dev,
 	return ret;
 }
 
+static uint16_t
+mlx5_flow_locate_proto_l3(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint16_t l3_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			l3_protocol = RTE_ETHER_TYPE_IPV4;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			l3_protocol = RTE_ETHER_TYPE_IPV6;
+			goto l3_ok;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_eth,
+							    type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_VLAN:
+			if (item->mask && item->spec) {
+				MLX5_ETHER_TYPE_FROM_HEADER(rte_flow_item_vlan,
+							    inner_type, item,
+							    l3_protocol);
+				if (l3_protocol == RTE_ETHER_TYPE_IPV4 ||
+				    l3_protocol == RTE_ETHER_TYPE_IPV6)
+					goto l3_ok;
+			}
+			break;
+		}
+	}
+	return 0;
+l3_ok:
+	*head = item;
+	return l3_protocol;
+}
+
+static uint8_t
+mlx5_flow_locate_proto_l4(const struct rte_flow_item **head,
+			  const struct rte_flow_item *end)
+{
+	const struct rte_flow_item *item = *head;
+	uint8_t l4_protocol;
+
+	for (; item != end; item++) {
+		switch (item->type) {
+		default:
+			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			l4_protocol = IPPROTO_TCP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			l4_protocol = IPPROTO_UDP;
+			goto l4_ok;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv4 *mask, *spec;
+
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.next_proto_id &
+					      spec->hdr.next_proto_id;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			if (item->mask && item->spec) {
+				const struct rte_flow_item_ipv6 *mask, *spec;
+				mask = (typeof(mask))item->mask;
+				spec = (typeof(spec))item->spec;
+				l4_protocol = mask->hdr.proto & spec->hdr.proto;
+				if (l4_protocol == IPPROTO_TCP ||
+				    l4_protocol == IPPROTO_UDP)
+					goto l4_ok;
+			}
+			break;
+		}
+	}
+	return 0;
+l4_ok:
+	*head = item;
+	return l4_protocol;
+}
+
+static int
+flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
+				const struct rte_flow_item *rule_items,
+				const struct rte_flow_item *integrity_item,
+				struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_item *tunnel_item, *end_item, *item = rule_items;
+	const struct rte_flow_item_integrity *mask = (typeof(mask))
+						     integrity_item->mask;
+	const struct rte_flow_item_integrity *spec = (typeof(spec))
+						     integrity_item->spec;
+	uint32_t protocol;
+
+	if (!priv->config.hca_attr.pkt_integrity_match)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "packet integrity item not supported");
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (!mlx5_validate_integrity_item(mask))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  integrity_item,
+					  "unsupported integrity filter");
+	tunnel_item = mlx5_flow_find_tunnel_item(rule_items);
+	if (spec->level > 1) {
+		if (!tunnel_item)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing tunnel item");
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	if (mask->l3_ok || mask->ipv4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l3(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L3 protocol");
+	}
+	if (mask->l4_ok || mask->l4_csum_ok) {
+		protocol = mlx5_flow_locate_proto_l4(&item, end_item);
+		if (!protocol)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  integrity_item,
+						  "missing L4 protocol");
+	}
+	return 0;
+}
+
 /**
  * Internal validation function. For validating both actions and items.
  *
@@ -6321,6 +6498,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		.fdb_def_rule = !!priv->fdb_def_rule,
 	};
 	const struct rte_eth_hairpin_conf *conf;
+	const struct rte_flow_item *rule_items = items;
 	bool def_policy = false;
 
 	if (items == NULL)
@@ -6644,6 +6822,18 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 				return ret;
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			if (item_flags & MLX5_FLOW_ITEM_INTEGRITY)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM,
+					 NULL, "multiple integrity items not supported");
+			ret = flow_dv_validate_item_integrity(dev, rule_items,
+							      items, error);
+			if (ret < 0)
+				return ret;
+			last_item = MLX5_FLOW_ITEM_INTEGRITY;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
@@ -11119,6 +11309,121 @@ flow_dv_translate_create_aso_age(struct rte_eth_dev *dev,
 	return age_idx;
 }
 
+static void
+flow_dv_translate_integrity_l4(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v)
+{
+	if (mask->l4_ok) {
+		/* application l4_ok filter aggregates all hardware l4 filters
+		 * therefore hw l4_checksum_ok must be implicitly added here.
+		 */
+		struct rte_flow_item_integrity local_item;
+
+		local_item.l4_csum_ok = 1;
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 local_item.l4_csum_ok);
+		if (value->l4_ok) {
+			/* application l4_ok = 1 matches sets both hw flags
+			 * l4_ok and l4_checksum_ok flags to 1.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, local_item.l4_csum_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_ok,
+				 mask->l4_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_ok,
+				 value->l4_ok);
+		} else {
+			/* application l4_ok = 0 matches on hw flag
+			 * l4_checksum_ok = 0 only.
+			 */
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 l4_checksum_ok, 0);
+		}
+	} else if (mask->l4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, l4_checksum_ok,
+			 mask->l4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, l4_checksum_ok,
+			 value->l4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_integrity_l3(const struct rte_flow_item_integrity *mask,
+			       const struct rte_flow_item_integrity *value,
+			       void *headers_m, void *headers_v,
+			       bool is_ipv4)
+{
+	if (mask->l3_ok) {
+		/* application l3_ok filter aggregates all hardware l3 filters
+		 * therefore hw ipv4_checksum_ok must be implicitly added here.
+		 */
+		struct rte_flow_item_integrity local_item;
+
+		local_item.ipv4_csum_ok = !!is_ipv4;
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 local_item.ipv4_csum_ok);
+		if (value->l3_ok) {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, local_item.ipv4_csum_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_m, l3_ok,
+				 mask->l3_ok);
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v, l3_ok,
+				 value->l3_ok);
+		} else {
+			MLX5_SET(fte_match_set_lyr_2_4, headers_v,
+				 ipv4_checksum_ok, 0);
+		}
+	} else if (mask->ipv4_csum_ok) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_checksum_ok,
+			 mask->ipv4_csum_ok);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_checksum_ok,
+			 value->ipv4_csum_ok);
+	}
+}
+
+static void
+flow_dv_translate_item_integrity(void *matcher, void *key,
+				 const struct rte_flow_item *head_item,
+				 const struct rte_flow_item *integrity_item)
+{
+	const struct rte_flow_item_integrity *mask = integrity_item->mask;
+	const struct rte_flow_item_integrity *value = integrity_item->spec;
+	const struct rte_flow_item *tunnel_item, *end_item, *item;
+	void *headers_m;
+	void *headers_v;
+	uint32_t l3_protocol;
+
+	if (!value)
+		return;
+	if (!mask)
+		mask = &rte_flow_item_integrity_mask;
+	if (value->level > 1) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	tunnel_item = mlx5_flow_find_tunnel_item(head_item);
+	if (value->level > 1) {
+		/* tunnel item was verified during the item validation */
+		item = tunnel_item;
+		end_item = mlx5_find_end_item(tunnel_item);
+	} else {
+		item = head_item;
+		end_item = tunnel_item ? tunnel_item :
+			   mlx5_find_end_item(integrity_item);
+	}
+	l3_protocol = mask->l3_ok ?
+		      mlx5_flow_locate_proto_l3(&item, end_item) : 0;
+	flow_dv_translate_integrity_l3(mask, value, headers_m, headers_v,
+				       l3_protocol == RTE_ETHER_TYPE_IPV4);
+	flow_dv_translate_integrity_l4(mask, value, headers_m, headers_v);
+}
+
 /**
  * Fill the flow with DV spec, lock free
  * (mutex should be acquired by caller).
@@ -11199,6 +11504,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
 		.skip_scale = dev_flow->skip_scale &
 			(1 << MLX5_SCALE_FLOW_GROUP_BIT),
 	};
+	const struct rte_flow_item *head_item = items;
 
 	if (!wks)
 		return rte_flow_error_set(error, ENOMEM,
@@ -12027,6 +12333,11 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			/* No other protocol should follow eCPRI layer. */
 			last_item = MLX5_FLOW_LAYER_ECPRI;
 			break;
+		case RTE_FLOW_ITEM_TYPE_INTEGRITY:
+			flow_dv_translate_item_integrity(match_mask,
+							 match_value,
+							 head_item, items);
+			break;
 		default:
 			break;
 		}
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
                     ` (2 preceding siblings ...)
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 3/4] net/mlx5: support integrity flow item Gregory Etelson
@ 2021-04-29 18:36   ` Gregory Etelson
  2021-05-04 15:21     ` Ferruh Yigit
  2021-05-04 15:42   ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Ferruh Yigit
  4 siblings, 1 reply; 24+ messages in thread
From: Gregory Etelson @ 2021-04-29 18:36 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

Add MLX5 PMD integrity item support to the 21.05 release notes.

Add MLX5 PMD integrity item limitations to the MLX5 PMD guide.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 doc/guides/nics/mlx5.rst               | 15 +++++++++++++++
 doc/guides/rel_notes/release_21_02.rst |  1 +
 2 files changed, 16 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 2bb4f18a08..cbf16ad598 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -107,6 +107,7 @@ Features
 - 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
   flow group.
 - Flow metering, including meter policy API.
+- Flow integrity offload API.
 
 Limitations
 -----------
@@ -418,6 +419,20 @@ Limitations
      - RED: must be DROP.
   - meter profile packet mode is supported.
 
+- Integrity:
+
+  - Integrity offload is enabled for **ConnectX-6** family.
+  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
+  - ``level`` value 0 references outer headers.
+  - Multiple integrity items not supported in a single flow rule.
+  - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
+    For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
+    TCP or UDP, must be in the rule pattern as well::
+
+      flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
+      or
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
 Statistics
 ----------
 
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 1813fe767a..ce27879f08 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -138,6 +138,7 @@ New Features
     egress flow groups greater than 0 and for any transfer flow group.
   * Added support for the Tx mbuf fast free offload.
   * Added support for flow modify field action.
+  * Added support for flow integrity item.
 
 * **Updated the Pensando ionic driver.**
 
-- 
2.31.1


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item Gregory Etelson
@ 2021-04-29 21:19     ` Ajit Khaparde
  2021-05-02  5:54       ` Ori Kam
  0 siblings, 1 reply; 24+ messages in thread
From: Ajit Khaparde @ 2021-04-29 21:19 UTC (permalink / raw)
  To: Gregory Etelson
  Cc: dpdk-dev, Matan Azrad, Ori Kam, Raslan Darawsheh,
	Viacheslav Ovsiienko, Thomas Monjalon, Ferruh Yigit,
	Andrew Rybchenko

On Thu, Apr 29, 2021 at 11:37 AM Gregory Etelson <getelson@nvidia.com> wrote:
>
> Add integrity item definition to the rte_flow_desc_item array.
> The new entry allows to build RTE flow item from a data
> stored in rte_flow_item_integrity type.
>
> Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

> ---
>  lib/ethdev/rte_flow.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> index c7c7108933..8cb7a069c8 100644
> --- a/lib/ethdev/rte_flow.c
> +++ b/lib/ethdev/rte_flow.c
> @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data rte_flow_desc_item[] = {
>         MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
>         MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
>         MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct rte_flow_item_geneve_opt)),
> +       MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
>         MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
>  };
>
> --
> 2.31.1
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item
  2021-04-29 21:19     ` Ajit Khaparde
@ 2021-05-02  5:54       ` Ori Kam
  0 siblings, 0 replies; 24+ messages in thread
From: Ori Kam @ 2021-05-02  5:54 UTC (permalink / raw)
  To: Ajit Khaparde, Gregory Etelson
  Cc: dpdk-dev, Matan Azrad, Raslan Darawsheh, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko

Hi

> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Friday, April 30, 2021 12:20 AM
> To: Gregory Etelson <getelson@nvidia.com>
> Subject: Re: [PATCH v3 1/4] ethdev: fix integrity flow item
> 
> On Thu, Apr 29, 2021 at 11:37 AM Gregory Etelson <getelson@nvidia.com>
> wrote:
> >
> > Add integrity item definition to the rte_flow_desc_item array.
> > The new entry allows to build RTE flow item from a data
> > stored in rte_flow_item_integrity type.
> >
> > Fixes: b10a421a1f3b ("ethdev: add packet integrity check flow rules")
> >
> > Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> > Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> 
> > ---
> >  lib/ethdev/rte_flow.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/lib/ethdev/rte_flow.c b/lib/ethdev/rte_flow.c
> > index c7c7108933..8cb7a069c8 100644
> > --- a/lib/ethdev/rte_flow.c
> > +++ b/lib/ethdev/rte_flow.c
> > @@ -98,6 +98,7 @@ static const struct rte_flow_desc_data
> rte_flow_desc_item[] = {
> >         MK_FLOW_ITEM(PFCP, sizeof(struct rte_flow_item_pfcp)),
> >         MK_FLOW_ITEM(ECPRI, sizeof(struct rte_flow_item_ecpri)),
> >         MK_FLOW_ITEM(GENEVE_OPT, sizeof(struct
> rte_flow_item_geneve_opt)),
> > +       MK_FLOW_ITEM(INTEGRITY, sizeof(struct rte_flow_item_integrity)),
> >         MK_FLOW_ITEM(CONNTRACK, sizeof(uint32_t)),
> >  };
> >
> > --
> > 2.31.1
> >

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
@ 2021-05-04 15:21     ` Ferruh Yigit
  2021-05-04 15:26       ` Ferruh Yigit
  0 siblings, 1 reply; 24+ messages in thread
From: Ferruh Yigit @ 2021-05-04 15:21 UTC (permalink / raw)
  To: Gregory Etelson, dev
  Cc: matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

On 4/29/2021 7:36 PM, Gregory Etelson wrote:
> Add MLX5 PMD integrity item support to 21.05 release notes.
> 
> Add MLX5 PMD integrity item limitations to the PMD records.
> 

Hi Gregory,

Doc updates are expected to be part of the actual patch; it seems we have
missed that this time.

Can you please add the fixes tag for the patch that added the feature?

> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
>  doc/guides/nics/mlx5.rst               | 15 +++++++++++++++
>  doc/guides/rel_notes/release_21_02.rst |  1 +
>  2 files changed, 16 insertions(+)
> 
> diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
> index 2bb4f18a08..cbf16ad598 100644
> --- a/doc/guides/nics/mlx5.rst
> +++ b/doc/guides/nics/mlx5.rst
> @@ -107,6 +107,7 @@ Features
>  - 21844 flow priorities for ingress or egress flow groups greater than 0 and for any transfer
>    flow group.
>  - Flow metering, including meter policy API.
> +- Flow integrity offload API.
>  
>  Limitations
>  -----------
> @@ -418,6 +419,20 @@ Limitations
>       - RED: must be DROP.
>    - meter profile packet mode is supported.
>  
> +- Integrity:
> +
> +  - Integrity offload is enabled for **ConnectX-6** family.
> +  - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
> +  - ``level`` value 0 references outer headers.
> +  - Multiple integrity items not supported in a single flow rule.
> +  - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
> +    For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
> +    TCP or UDP, must be in the rule pattern as well::
> +
> +      flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
> +      or
> +      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
> +
>  Statistics
>  ----------
>  
> diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
> index 1813fe767a..ce27879f08 100644
> --- a/doc/guides/rel_notes/release_21_02.rst
> +++ b/doc/guides/rel_notes/release_21_02.rst
> @@ -138,6 +138,7 @@ New Features
>      egress flow groups greater than 0 and for any transfer flow group.
>    * Added support for the Tx mbuf fast free offload.
>    * Added support for flow modify field action.
> +  * Added support for flow integrity item.
>  
>  * **Updated the Pensando ionic driver.**
>  
> 


^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support
  2021-05-04 15:21     ` Ferruh Yigit
@ 2021-05-04 15:26       ` Ferruh Yigit
  0 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2021-05-04 15:26 UTC (permalink / raw)
  To: Gregory Etelson, dev
  Cc: matan, orika, rasland, Viacheslav Ovsiienko, Shahaf Shuler

On 5/4/2021 4:21 PM, Ferruh Yigit wrote:
> On 4/29/2021 7:36 PM, Gregory Etelson wrote:
>> Add MLX5 PMD integrity item support to 21.05 release notes.
>>
>> Add MLX5 PMD integrity item limitations to the PMD records.
>>
> 
> Hi Gregory,
> 
> It is expected to have doc updates with the actual patch, it seems we have
> missed it for this time.
> 
> Can you please add the fixes tag for the patch that added the feature?
> 

Ahh, it is introduced in the same set; I will squash this patch into it.

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow item support
  2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
                     ` (3 preceding siblings ...)
  2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
@ 2021-05-04 15:42   ` Ferruh Yigit
  4 siblings, 0 replies; 24+ messages in thread
From: Ferruh Yigit @ 2021-05-04 15:42 UTC (permalink / raw)
  To: Gregory Etelson, dev; +Cc: matan, orika, rasland, Viacheslav Ovsiienko

On 4/29/2021 7:36 PM, Gregory Etelson wrote:
> v2:
> Add MLX5 PMD integrity item support to 21.05 release notes.
> Use RTE_BIT64() macro in RTE_FLOW_ITEM_INTEGRITY_* definition.
> 
> v3:
> Remove RTE_FLOW_ITEM_INTEGRITY_* bit masks.
> 
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> 
> Gregory Etelson (4):
>   ethdev: fix integrity flow item
>   net/mlx5: update PRM definitions
>   net/mlx5: support integrity flow item
>   doc: add MLX5 PMD integrity item support
> 

Series applied to dpdk-next-net/main, thanks.

4/4 squashed to 3/4.

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2021-05-04 15:42 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-28 17:59 [dpdk-dev] [PATCH 0/4] net/mlx5: add integrity flow item support Gregory Etelson
2021-04-28 17:59 ` [dpdk-dev] [PATCH 1/4] ethdev: fix integrity flow item Gregory Etelson
2021-04-28 18:06   ` Thomas Monjalon
2021-04-28 17:59 ` [dpdk-dev] [PATCH 2/4] net/mlx5: update PRM definitions Gregory Etelson
2021-04-28 17:59 ` [dpdk-dev] [PATCH 3/4] net/mlx5: support integrity flow item Gregory Etelson
2021-04-28 17:59 ` [dpdk-dev] [PATCH 4/4] doc: add MLX5 PMD integrity item limitations Gregory Etelson
2021-04-29  6:16 ` [dpdk-dev] [PATCH v2 0/4] net/mlx5: add integrity flow item support Gregory Etelson
2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 1/4] ethdev: fix integrity flow item Gregory Etelson
2021-04-29  7:57     ` Thomas Monjalon
2021-04-29 10:13     ` Ori Kam
2021-04-29 11:37       ` Thomas Monjalon
2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: update PRM definitions Gregory Etelson
2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: support integrity flow item Gregory Etelson
2021-04-29  6:16   ` [dpdk-dev] [PATCH v2 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
2021-04-29 18:36 ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Gregory Etelson
2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 1/4] ethdev: fix integrity flow item Gregory Etelson
2021-04-29 21:19     ` Ajit Khaparde
2021-05-02  5:54       ` Ori Kam
2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 2/4] net/mlx5: update PRM definitions Gregory Etelson
2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 3/4] net/mlx5: support integrity flow item Gregory Etelson
2021-04-29 18:36   ` [dpdk-dev] [PATCH v3 4/4] doc: add MLX5 PMD integrity item support Gregory Etelson
2021-05-04 15:21     ` Ferruh Yigit
2021-05-04 15:26       ` Ferruh Yigit
2021-05-04 15:42   ` [dpdk-dev] [PATCH v3 0/4] net/mlx5: add integrity flow " Ferruh Yigit
