patches for DPDK stable branches
* [dpdk-stable] [PATCH 2/4] net/ice: add redirect support for VSI list rule
       [not found] <20200605074031.16231-1-wei.zhao1@intel.com>
@ 2020-06-05  7:40 ` Wei Zhao
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 3/4] net/ice: add check for NVGRE protocol Wei Zhao
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-05  7:40 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, stable, Wei Zhao

This patch enables redirecting switch rules of the VSI list type.

Fixes: 397b4b3c5095 ("net/ice: enable flow redirect on switch")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index a5dd1f7ab..fdb1eb755 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1662,6 +1662,9 @@ ice_switch_redirect(struct ice_adapter *ad,
 	uint16_t lkups_cnt;
 	int ret;
 
+	if (rdata->vsi_handle != rd->vsi_handle)
+		return 0;
+
 	sw = hw->switch_info;
 	if (!sw->recp_list[rdata->rid].recp_created)
 		return -EINVAL;
@@ -1673,25 +1676,30 @@ ice_switch_redirect(struct ice_adapter *ad,
 	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
 			    list_entry) {
 		rinfo = list_itr->rule_info;
-		if (rinfo.fltr_rule_id == rdata->rule_id &&
+		if ((rinfo.fltr_rule_id == rdata->rule_id &&
 		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI &&
-		    rinfo.sw_act.vsi_handle == rd->vsi_handle) {
+		    rinfo.sw_act.vsi_handle == rd->vsi_handle) ||
+		    (rinfo.fltr_rule_id == rdata->rule_id &&
+		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST)){
 			lkups_cnt = list_itr->lkups_cnt;
 			lkups_dp = (struct ice_adv_lkup_elem *)
 				ice_memdup(hw, list_itr->lkups,
 					   sizeof(*list_itr->lkups) *
 					   lkups_cnt, ICE_NONDMA_TO_NONDMA);
+
 			if (!lkups_dp) {
 				PMD_DRV_LOG(ERR, "Failed to allocate memory.");
 				return -EINVAL;
 			}
 
+			if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST)
+				rinfo.sw_act.vsi_handle = rd->vsi_handle;
 			break;
 		}
 	}
 
 	if (!lkups_dp)
-		return 0;
+		return -EINVAL;
 
 	/* Remove the old rule */
 	ret = ice_rem_adv_rule(hw, list_itr->lkups,
-- 
2.19.1
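The widened condition in the hunk above matches a rule in two cases that share the rule-id check. A minimal standalone sketch of that predicate, with hypothetical, simplified stand-ins for the driver's structures and enums:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for the driver's action enum and rule info. */
enum fltr_act { FWD_TO_VSI, FWD_TO_VSI_LIST, FWD_TO_Q };

struct rule_info {
	uint32_t fltr_rule_id;
	enum fltr_act act;
	uint16_t vsi_handle;
};

/* A rule is picked up for redirect either when it forwards to the exact
 * VSI being redirected, or when it forwards to a VSI list (the list may
 * contain that VSI, so the rule must be rebuilt). */
static bool redirect_match(const struct rule_info *ri, uint32_t rule_id,
			   uint16_t vsi_handle)
{
	return (ri->fltr_rule_id == rule_id &&
		ri->act == FWD_TO_VSI &&
		ri->vsi_handle == vsi_handle) ||
	       (ri->fltr_rule_id == rule_id &&
		ri->act == FWD_TO_VSI_LIST);
}
```

Note that for the VSI-list case the VSI handle is not compared: the list may hold several VSIs, so the redirect applies based on the rule id alone.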


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH 3/4] net/ice: add check for NVGRE protocol
       [not found] <20200605074031.16231-1-wei.zhao1@intel.com>
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
@ 2020-06-05  7:40 ` Wei Zhao
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-05  7:40 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, stable, Wei Zhao

This patch adds a check for the protocol type of IPv4 packets;
the tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index fdb1eb755..be86b6bdf 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x2F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1
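The check added above is a masked comparison: the flow is treated as NVGRE only when the bits the user actually masks in equal IP protocol 47 (0x2F). A small sketch of that logic (the helper name is illustrative, not the driver's):

```c
#include <stdbool.h>
#include <stdint.h>

#define IPV4_PROTO_NVGRE 0x2F /* IP protocol 47: GRE, carried by NVGRE */

/* Only bits covered by the mask take part in the comparison, so an
 * all-zero mask (protocol field not matched at all) never selects NVGRE. */
static bool proto_is_nvgre(uint8_t next_proto_spec, uint8_t next_proto_mask)
{
	return (next_proto_spec & next_proto_mask) == IPV4_PROTO_NVGRE;
}
```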



* [dpdk-stable] [PATCH 4/4] net/ice: support switch flow for specific L4 type
       [not found] <20200605074031.16231-1-wei.zhao1@intel.com>
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 3/4] net/ice: add check for NVGRE protocol Wei Zhao
@ 2020-06-05  7:40 ` Wei Zhao
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-05  7:40 UTC (permalink / raw)
  To: dev; +Cc: qi.z.zhang, stable, Wei Zhao

This patch adds more specific tunnel types for IPv4/IPv6 packets.
It enables the TCP/UDP layer of IPv4/IPv6 as L4 payload, but without
L4 dst/src port numbers as input set, for the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index be86b6bdf..aa99f26b0 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = 2;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (!pppoe_patt_valid) {
+		if (tunnel_valid == 1)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == 2)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1
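The fallback chain added in the last hunk above is a priority order: an explicit tunnel item wins, then the most specific L3/L4 combination. A compact sketch (enum values and the helper are illustrative):

```c
#include <stdbool.h>

enum sw_tun { NON_TUN, TUN_VXLAN, TUN_NVGRE,
	      SW_IPV4_TCP, SW_IPV4_UDP, SW_IPV6_TCP, SW_IPV6_UDP };

/* tunnel_valid: 0 = no tunnel item, 1 = VXLAN item seen, 2 = NVGRE item
 * seen, mirroring the overloaded counter introduced by the patch. */
static enum sw_tun pick_tun_type(int tunnel_valid,
				 bool ipv4, bool ipv6, bool tcp, bool udp)
{
	if (tunnel_valid == 1)
		return TUN_VXLAN;
	if (tunnel_valid == 2)
		return TUN_NVGRE;
	if (ipv4 && tcp)
		return SW_IPV4_TCP;
	if (ipv4 && udp)
		return SW_IPV4_UDP;
	if (ipv6 && tcp)
		return SW_IPV6_TCP;
	if (ipv6 && udp)
		return SW_IPV6_UDP;
	return NON_TUN; /* default set earlier in the parser */
}
```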



* [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch
       [not found] <20200605074031.16231-1-wei.zhao1@intel.com>
                   ` (2 preceding siblings ...)
  2020-06-05  7:40 ` [dpdk-stable] [PATCH 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-17  6:14 ` Wei Zhao
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 1/4] net/ice: add support " Wei Zhao
                     ` (5 more replies)
  3 siblings, 6 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-17  6:14 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang

1. Add more support to the switch parser for PPPoE packets.
2. Add redirect support for VSI list rules.
3. Add a check for the NVGRE protocol.
4. Support flows for specific L4 types.

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix a bug in the patch adding redirect support for VSI list rule.
add information to the release notes.

Wei Zhao (4):
  net/ice: add support more PPPoE packet type for switch
  net/ice: add redirect support for VSI list rule
  net/ice: add check for NVGRE protocol
  net/ice: support switch flow for specific L4 type

 doc/guides/rel_notes/release_20_08.rst |   6 +
 drivers/net/ice/ice_switch_filter.c    | 161 +++++++++++++++++++++----
 2 files changed, 142 insertions(+), 25 deletions(-)

-- 
2.19.1



* [dpdk-stable] [PATCH v2 1/4] net/ice: add support more PPPoE packet type for switch
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-17  6:14   ` " Wei Zhao
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-17  6:14 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets.
It enables parsing the TCP/UDP L4 layer and the IPv4/IPv6 L3 layer of
the PPPoE payload, so we can use L4 dst/src ports and L3 IP addresses
as input set for PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   6 ++
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 106 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 86d240213..d2193b0a6 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -55,6 +55,12 @@ New Features
      This section is a comment. Do not overwrite or remove it.
      Also, make sure to start the actual text at the margin.
      =========================================================
+* **Updated the Intel ice driver.**
+
+  Updated the Intel ice driver with new features and improvements, including:
+
+  * Add support more PPPoE packet type for switch filter
+
 
 * **Updated Mellanox mlx5 driver.**
 
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 20e8187d3..a5dd1f7ab 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1
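The two selection rules in the patch above, the masked PPP protocol-id compare and the L3/L4 fallback when no protocol item constrains the payload, can be sketched as follows. Helper names are illustrative, and the protocol ids are compared in host byte order here for simplicity (the driver compares big-endian values):

```c
#include <stdbool.h>
#include <stdint.h>

#define PPP_IPV4_PROTO 0x0021 /* PPP protocol id for IPv4 */
#define PPP_IPV6_PROTO 0x0057 /* PPP protocol id for IPv6 */

enum pppoe_tun { TUN_PPPOE, TUN_PPPOE_PAY,
		 TUN_PPPOE_IPV4, TUN_PPPOE_IPV4_TCP, TUN_PPPOE_IPV4_UDP,
		 TUN_PPPOE_IPV6, TUN_PPPOE_IPV6_TCP, TUN_PPPOE_IPV6_UDP };

/* With a pppoe_proto item present: any masked-in payload protocol other
 * than IPv4/IPv6 selects the PPPoE "pay" profile. */
static enum pppoe_tun from_proto(uint16_t spec, uint16_t mask)
{
	uint16_t p = spec & mask;

	if (p != PPP_IPV4_PROTO && p != PPP_IPV6_PROTO)
		return TUN_PPPOE_PAY;
	return TUN_PPPOE;
}

/* Without a protocol item: fall back to the most specific tunnel type
 * implied by the other pattern items, most specific first. */
static enum pppoe_tun from_items(bool v4, bool v6, bool tcp, bool udp)
{
	if (v6 && udp)
		return TUN_PPPOE_IPV6_UDP;
	if (v6 && tcp)
		return TUN_PPPOE_IPV6_TCP;
	if (v4 && udp)
		return TUN_PPPOE_IPV4_UDP;
	if (v4 && tcp)
		return TUN_PPPOE_IPV4_TCP;
	if (v6)
		return TUN_PPPOE_IPV6;
	if (v4)
		return TUN_PPPOE_IPV4;
	return TUN_PPPOE;
}
```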



* [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 1/4] net/ice: add support " Wei Zhao
@ 2020-06-17  6:14   ` Wei Zhao
  2020-06-22 15:25     ` Zhang, Qi Z
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol Wei Zhao
                     ` (3 subsequent siblings)
  5 siblings, 1 reply; 44+ messages in thread
From: Wei Zhao @ 2020-06-17  6:14 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, Wei Zhao

This patch enables redirecting switch rules of the VSI list type.

Fixes: 397b4b3c5095 ("net/ice: enable flow redirect on switch")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index a5dd1f7ab..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -1662,6 +1662,9 @@ ice_switch_redirect(struct ice_adapter *ad,
 	uint16_t lkups_cnt;
 	int ret;
 
+	if (rdata->vsi_handle != rd->vsi_handle)
+		return 0;
+
 	sw = hw->switch_info;
 	if (!sw->recp_list[rdata->rid].recp_created)
 		return -EINVAL;
@@ -1673,25 +1676,32 @@ ice_switch_redirect(struct ice_adapter *ad,
 	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
 			    list_entry) {
 		rinfo = list_itr->rule_info;
-		if (rinfo.fltr_rule_id == rdata->rule_id &&
+		if ((rinfo.fltr_rule_id == rdata->rule_id &&
 		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI &&
-		    rinfo.sw_act.vsi_handle == rd->vsi_handle) {
+		    rinfo.sw_act.vsi_handle == rd->vsi_handle) ||
+		    (rinfo.fltr_rule_id == rdata->rule_id &&
+		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST)){
 			lkups_cnt = list_itr->lkups_cnt;
 			lkups_dp = (struct ice_adv_lkup_elem *)
 				ice_memdup(hw, list_itr->lkups,
 					   sizeof(*list_itr->lkups) *
 					   lkups_cnt, ICE_NONDMA_TO_NONDMA);
+
 			if (!lkups_dp) {
 				PMD_DRV_LOG(ERR, "Failed to allocate memory.");
 				return -EINVAL;
 			}
 
+			if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST) {
+				rinfo.sw_act.vsi_handle = rd->vsi_handle;
+				rinfo.sw_act.fltr_act = ICE_FWD_TO_VSI;
+			}
 			break;
 		}
 	}
 
 	if (!lkups_dp)
-		return 0;
+		return -EINVAL;
 
 	/* Remove the old rule */
 	ret = ice_rem_adv_rule(hw, list_itr->lkups,
-- 
2.19.1



* [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 1/4] net/ice: add support " Wei Zhao
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
@ 2020-06-17  6:14   ` Wei Zhao
  2020-06-22 15:49     ` Zhang, Qi Z
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 44+ messages in thread
From: Wei Zhao @ 2020-06-17  6:14 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, Wei Zhao

This patch adds a check for the protocol type of IPv4 packets;
the tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..3b38195d6 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x2F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1



* [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
                     ` (2 preceding siblings ...)
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol Wei Zhao
@ 2020-06-17  6:14   ` Wei Zhao
  2020-06-22 15:36     ` Zhang, Qi Z
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  5 siblings, 1 reply; 44+ messages in thread
From: Wei Zhao @ 2020-06-17  6:14 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, Wei Zhao

This patch adds more specific tunnel types for IPv4/IPv6 packets.
It enables the TCP/UDP layer of IPv4/IPv6 as L4 payload, but without
L4 dst/src port numbers as input set, for the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3b38195d6..f4fd8ff33 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = 2;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (!pppoe_patt_valid) {
+		if (tunnel_valid == 1)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == 2)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1



* Re: [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
@ 2020-06-22 15:25     ` Zhang, Qi Z
  0 siblings, 0 replies; 44+ messages in thread
From: Zhang, Qi Z @ 2020-06-22 15:25 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable



> -----Original Message-----
> From: Zhao1, Wei <wei.zhao1@intel.com>
> Sent: Wednesday, June 17, 2020 2:14 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: [PATCH v2 2/4] net/ice: add redirect support for VSI list rule
> 
> This patch enable redirect switch rule of vsi list type.
> 
> Fixes: 397b4b3c5095 ("net/ice: enable flow redirect on switch")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
>  drivers/net/ice/ice_switch_filter.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_switch_filter.c
> b/drivers/net/ice/ice_switch_filter.c
> index a5dd1f7ab..3c0c36bce 100644
> --- a/drivers/net/ice/ice_switch_filter.c
> +++ b/drivers/net/ice/ice_switch_filter.c
> @@ -1662,6 +1662,9 @@ ice_switch_redirect(struct ice_adapter *ad,
>  	uint16_t lkups_cnt;
>  	int ret;
> 
> +	if (rdata->vsi_handle != rd->vsi_handle)
> +		return 0;
> +
>  	sw = hw->switch_info;
>  	if (!sw->recp_list[rdata->rid].recp_created)
>  		return -EINVAL;
> @@ -1673,25 +1676,32 @@ ice_switch_redirect(struct ice_adapter *ad,
>  	LIST_FOR_EACH_ENTRY(list_itr, list_head, ice_adv_fltr_mgmt_list_entry,
>  			    list_entry) {
>  		rinfo = list_itr->rule_info;
> -		if (rinfo.fltr_rule_id == rdata->rule_id &&
> +		if ((rinfo.fltr_rule_id == rdata->rule_id &&
>  		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI &&
> -		    rinfo.sw_act.vsi_handle == rd->vsi_handle) {
> +		    rinfo.sw_act.vsi_handle == rd->vsi_handle) ||
> +		    (rinfo.fltr_rule_id == rdata->rule_id &&
> +		    rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST)){
>  			lkups_cnt = list_itr->lkups_cnt;
>  			lkups_dp = (struct ice_adv_lkup_elem *)
>  				ice_memdup(hw, list_itr->lkups,
>  					   sizeof(*list_itr->lkups) *
>  					   lkups_cnt, ICE_NONDMA_TO_NONDMA);
> +

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel after:

1. removing the redundant empty line above, and
2. rewording the commit log and title as below.

Title: redirect switch rule with to-VSI-list action

Support redirecting a switch rule if its action is to a VSI list.

Thanks
Qi

>  			if (!lkups_dp) {
>  				PMD_DRV_LOG(ERR, "Failed to allocate memory.");
>  				return -EINVAL;
>  			}
> 
> +			if (rinfo.sw_act.fltr_act == ICE_FWD_TO_VSI_LIST) {
> +				rinfo.sw_act.vsi_handle = rd->vsi_handle;
> +				rinfo.sw_act.fltr_act = ICE_FWD_TO_VSI;
> +			}
>  			break;
>  		}
>  	}
> 
>  	if (!lkups_dp)
> -		return 0;
> +		return -EINVAL;
> 
>  	/* Remove the old rule */
>  	ret = ice_rem_adv_rule(hw, list_itr->lkups,
> --
> 2.19.1



* Re: [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-22 15:36     ` Zhang, Qi Z
  2020-06-23  1:12       ` Zhao1, Wei
  0 siblings, 1 reply; 44+ messages in thread
From: Zhang, Qi Z @ 2020-06-22 15:36 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable



> -----Original Message-----
> From: Zhao1, Wei <wei.zhao1@intel.com>
> Sent: Wednesday, June 17, 2020 2:14 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
> 
> This patch add more specific tunnel type for ipv4/ipv6 packet, it enable
> tcp/udp layer of ipv4/ipv6 as L4 payload but without
> L4 dst/src port number as input set for the switch filter rule.
> 
> Fixes: 47d460d63233 ("net/ice: rework switch filter")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
>  drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
>  1 file changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_switch_filter.c
> b/drivers/net/ice/ice_switch_filter.c
> index 3b38195d6..f4fd8ff33 100644
> --- a/drivers/net/ice/ice_switch_filter.c
> +++ b/drivers/net/ice/ice_switch_filter.c
> @@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
>  	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
>  	uint64_t input_set = ICE_INSET_NONE;
> +	uint16_t tunnel_valid = 0;

Why not use vxlan_valid and nvgre_valid, to keep the naming consistent with the other variables?
Can we use a bitmap?

>  	bool pppoe_elem_valid = 0;
>  	bool pppoe_patt_valid = 0;
>  	bool pppoe_prot_valid = 0;
>  	bool profile_rule = 0;
> -	bool tunnel_valid = 0;
>  	bool ipv6_valiad = 0;
>  	bool ipv4_valiad = 0;
>  	bool udp_valiad = 0;
> @@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  					   "Invalid NVGRE item");
>  				return 0;
>  			}
> -			tunnel_valid = 1;
> +			tunnel_valid = 2;
>  			if (nvgre_spec && nvgre_mask) {
>  				list[t].type = ICE_NVGRE;
>  				if (nvgre_mask->tni[0] ||
> @@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  			*tun_type = ICE_SW_TUN_PPPOE;
>  	}
> 
> +	if (!pppoe_patt_valid) {
> +		if (tunnel_valid == 1)
> +			*tun_type = ICE_SW_TUN_VXLAN;
> +		else if (tunnel_valid == 2)
> +			*tun_type = ICE_SW_TUN_NVGRE;
> +		else if (ipv4_valiad && tcp_valiad)
> +			*tun_type = ICE_SW_IPV4_TCP;
> +		else if (ipv4_valiad && udp_valiad)
> +			*tun_type = ICE_SW_IPV4_UDP;
> +		else if (ipv6_valiad && tcp_valiad)
> +			*tun_type = ICE_SW_IPV6_TCP;
> +		else if (ipv6_valiad && udp_valiad)
> +			*tun_type = ICE_SW_IPV6_UDP;
> +	}
> +
>  	*lkups_num = t;
> 
>  	return input_set;
> @@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct
> ice_adapter *ad,
> 
>  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
>  		item_num++;
> -		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> -			tun_type = ICE_SW_TUN_VXLAN;
> -		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> -			tun_type = ICE_SW_TUN_NVGRE;
>  		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
>  			const struct rte_flow_item_eth *eth_mask;
>  			if (item->mask)
> --
> 2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol Wei Zhao
@ 2020-06-22 15:49     ` Zhang, Qi Z
  2020-06-23  1:11       ` Zhao1, Wei
  0 siblings, 1 reply; 44+ messages in thread
From: Zhang, Qi Z @ 2020-06-22 15:49 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable



> -----Original Message-----
> From: Zhao1, Wei <wei.zhao1@intel.com>
> Sent: Wednesday, June 17, 2020 2:14 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> <wei.zhao1@intel.com>
> Subject: [PATCH v2 3/4] net/ice: add check for NVGRE protocol

fix tunnel type for switch rule

> 
> This patch adds a check for the protocol type of IPv4 packets; the tunnel
> type needs to be updated when NVGRE is in the payload.

The patch changes the default tunnel type to ICE_NON_TUN and only changes it to
ICE_SW_TUN_AND_NON_TUN to hint the switch engine when the GRE proto is matched in an IPv4 header.

> 
> Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
>  drivers/net/ice/ice_switch_filter.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ice/ice_switch_filter.c
> b/drivers/net/ice/ice_switch_filter.c
> index 3c0c36bce..3b38195d6 100644
> --- a/drivers/net/ice/ice_switch_filter.c
> +++ b/drivers/net/ice/ice_switch_filter.c
> @@ -28,6 +28,7 @@
>  #define MAX_QGRP_NUM_TYPE 7
>  #define ICE_PPP_IPV4_PROTO	0x0021
>  #define ICE_PPP_IPV6_PROTO	0x0057
> +#define ICE_IPV4_PROTO_NVGRE	0x2F
To keep the naming consistent:
#define ICE_IPV4_NVGRE_PROTO 0x002F 

> 
>  #define ICE_SW_INSET_ETHER ( \
>  	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@ -632,6
> +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
>  					list[t].m_u.ipv4_hdr.protocol =
>  						ipv4_mask->hdr.next_proto_id;
>  				}
> +				if ((ipv4_spec->hdr.next_proto_id &
> +					ipv4_mask->hdr.next_proto_id) ==
> +					ICE_IPV4_PROTO_NVGRE)
> +					*tun_type = ICE_SW_TUN_AND_NON_TUN;
>  				if (ipv4_mask->hdr.type_of_service) {
>  					list[t].h_u.ipv4_hdr.tos =
>  						ipv4_spec->hdr.type_of_service;
> @@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter
> *ad,
>  	const struct rte_flow_item *item = pattern;
>  	uint16_t item_num = 0;
>  	enum ice_sw_tunnel_type tun_type =
> -		ICE_SW_TUN_AND_NON_TUN;
> +			ICE_NON_TUN;
>  	struct ice_pattern_match_item *pattern_match_item = NULL;
> 
>  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> --
> 2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol
  2020-06-22 15:49     ` Zhang, Qi Z
@ 2020-06-23  1:11       ` Zhao1, Wei
  0 siblings, 0 replies; 44+ messages in thread
From: Zhao1, Wei @ 2020-06-23  1:11 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: stable



> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Monday, June 22, 2020 11:50 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: RE: [PATCH v2 3/4] net/ice: add check for NVGRE protocol
> 
> 
> 
> > -----Original Message-----
> > From: Zhao1, Wei <wei.zhao1@intel.com>
> > Sent: Wednesday, June 17, 2020 2:14 PM
> > To: dev@dpdk.org
> > Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> > <wei.zhao1@intel.com>
> > Subject: [PATCH v2 3/4] net/ice: add check for NVGRE protocol
> 
> fix tunnel type for switch rule
> 
> >
> > This patch adds a check for the protocol type of IPv4 packets; the
> > tunnel type needs to be updated when NVGRE is in the payload.
> 
> The patch changes the default tunnel type to ICE_NON_TUN and only changes it to
> ICE_SW_TUN_AND_NON_TUN to hint the switch engine when the GRE proto is matched in
> an IPv4 header.
> 

OK, will update in v3.
> >
> > Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >  drivers/net/ice/ice_switch_filter.c | 7 ++++++-
> >  1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/ice/ice_switch_filter.c
> > b/drivers/net/ice/ice_switch_filter.c
> > index 3c0c36bce..3b38195d6 100644
> > --- a/drivers/net/ice/ice_switch_filter.c
> > +++ b/drivers/net/ice/ice_switch_filter.c
> > @@ -28,6 +28,7 @@
> >  #define MAX_QGRP_NUM_TYPE 7
> >  #define ICE_PPP_IPV4_PROTO	0x0021
> >  #define ICE_PPP_IPV6_PROTO	0x0057
> > +#define ICE_IPV4_PROTO_NVGRE	0x2F
> To keep the naming consistent:
> #define ICE_IPV4_NVGRE_PROTO 0x002F
> 

OK, will update in v3.

> >
> >  #define ICE_SW_INSET_ETHER ( \
> >  	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@
> -632,6
> > +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
> >  					list[t].m_u.ipv4_hdr.protocol =
> >  						ipv4_mask->hdr.next_proto_id;
> >  				}
> > +				if ((ipv4_spec->hdr.next_proto_id &
> > +					ipv4_mask->hdr.next_proto_id) ==
> > +					ICE_IPV4_PROTO_NVGRE)
> > +					*tun_type = ICE_SW_TUN_AND_NON_TUN;
> >  				if (ipv4_mask->hdr.type_of_service) {
> >  					list[t].h_u.ipv4_hdr.tos =
> >  						ipv4_spec->hdr.type_of_service; @@ -1526,7
> +1531,7 @@
> > ice_switch_parse_pattern_action(struct ice_adapter *ad,
> >  	const struct rte_flow_item *item = pattern;
> >  	uint16_t item_num = 0;
> >  	enum ice_sw_tunnel_type tun_type =
> > -		ICE_SW_TUN_AND_NON_TUN;
> > +			ICE_NON_TUN;
> >  	struct ice_pattern_match_item *pattern_match_item = NULL;
> >
> >  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> > --
> > 2.19.1
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
  2020-06-22 15:36     ` Zhang, Qi Z
@ 2020-06-23  1:12       ` Zhao1, Wei
  0 siblings, 0 replies; 44+ messages in thread
From: Zhao1, Wei @ 2020-06-23  1:12 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: stable



> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Monday, June 22, 2020 11:36 PM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org
> Subject: RE: [PATCH v2 4/4] net/ice: support switch flow for specific L4 type
> 
> 
> 
> > -----Original Message-----
> > From: Zhao1, Wei <wei.zhao1@intel.com>
> > Sent: Wednesday, June 17, 2020 2:14 PM
> > To: dev@dpdk.org
> > Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Zhao1, Wei
> > <wei.zhao1@intel.com>
> > Subject: [PATCH v2 4/4] net/ice: support switch flow for specific L4
> > type
> >
> > This patch adds more specific tunnel types for ipv4/ipv6 packets. It
> > enables the tcp/udp layer of ipv4/ipv6 as L4 payload, but without the
> > L4 dst/src port numbers as input set for the switch filter rule.
> >
> > Fixes: 47d460d63233 ("net/ice: rework switch filter")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >  drivers/net/ice/ice_switch_filter.c | 23 +++++++++++++++++------
> >  1 file changed, 17 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_switch_filter.c
> > b/drivers/net/ice/ice_switch_filter.c
> > index 3b38195d6..f4fd8ff33 100644
> > --- a/drivers/net/ice/ice_switch_filter.c
> > +++ b/drivers/net/ice/ice_switch_filter.c
> > @@ -471,11 +471,11 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
> >  	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
> >  	uint64_t input_set = ICE_INSET_NONE;
> > +	uint16_t tunnel_valid = 0;
> 
> Why not use vxlan_valid and nvgre_valid, to keep the naming consistent with the
> other variables?
> Can we use a bitmap?

OK, will update in v3.

> 
> >  	bool pppoe_elem_valid = 0;
> >  	bool pppoe_patt_valid = 0;
> >  	bool pppoe_prot_valid = 0;
> >  	bool profile_rule = 0;
> > -	bool tunnel_valid = 0;
> >  	bool ipv6_valiad = 0;
> >  	bool ipv4_valiad = 0;
> >  	bool udp_valiad = 0;
> > @@ -960,7 +960,7 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  					   "Invalid NVGRE item");
> >  				return 0;
> >  			}
> > -			tunnel_valid = 1;
> > +			tunnel_valid = 2;
> >  			if (nvgre_spec && nvgre_mask) {
> >  				list[t].type = ICE_NVGRE;
> >  				if (nvgre_mask->tni[0] ||
> > @@ -1325,6 +1325,21 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  			*tun_type = ICE_SW_TUN_PPPOE;
> >  	}
> >
> > +	if (!pppoe_patt_valid) {
> > +		if (tunnel_valid == 1)
> > +			*tun_type = ICE_SW_TUN_VXLAN;
> > +		else if (tunnel_valid == 2)
> > +			*tun_type = ICE_SW_TUN_NVGRE;
> > +		else if (ipv4_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV4_TCP;
> > +		else if (ipv4_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV4_UDP;
> > +		else if (ipv6_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV6_TCP;
> > +		else if (ipv6_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV6_UDP;
> > +	}
> > +
> >  	*lkups_num = t;
> >
> >  	return input_set;
> > @@ -1536,10 +1551,6 @@ ice_switch_parse_pattern_action(struct
> > ice_adapter *ad,
> >
> >  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> >  		item_num++;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> > -			tun_type = ICE_SW_TUN_VXLAN;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> > -			tun_type = ICE_SW_TUN_NVGRE;
> >  		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> >  			const struct rte_flow_item_eth *eth_mask;
> >  			if (item->mask)
> > --
> > 2.19.1
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
                     ` (3 preceding siblings ...)
  2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-28  3:21   ` Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
                       ` (3 more replies)
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  5 siblings, 4 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  3:21 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu

1. Add more support to the switch parser for PPPoE packets.
2. Add a check for the NVGRE protocol.
3. Support switch flow for specific L4 types.
4. Add an input set byte number check.

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix a bug in the patch adding redirect support for the VSI list rule.
add information to the release notes.

v3:
add an input set byte number check.
update code per code-style review comments.

Wei Zhao (4):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check

 doc/guides/rel_notes/release_20_08.rst |   2 +
 drivers/net/ice/ice_switch_filter.c    | 190 +++++++++++++++++++++----
 2 files changed, 167 insertions(+), 25 deletions(-)

-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 1/4] net/ice: add support more PPPoE packet type for switch
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-28  3:21     ` " Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  3:21 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets.
It enables the tcp/udp L4 layer and ipv4/ipv6 L3 layer parsers for the
PPPoE payload, so the L4 dst/src ports and L3 IP addresses can be used
as input set for PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   2 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 102 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 3c40424cc..79ef218b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -86,6 +86,8 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet type for switch filter.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5ccd020c5..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
@ 2020-06-28  3:21     ` Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  3:21 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check for the protocol type of IPv4 packets; the
tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..c607e8d17 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
@ 2020-06-28  3:21     ` Wei Zhao
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  3:21 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more specific tunnel types for ipv4/ipv6 packets.
It enables the tcp/udp layer of ipv4/ipv6 as L4 payload, but without
the L4 dst/src port numbers as input set for the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..c1ea74c73 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -29,6 +29,8 @@
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
+#define ICE_TUN_VXLAN_VALID	0x0001
+#define ICE_TUN_NVGRE_VALID	0x0002
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_VXLAN_VALID;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
 				if (vxlan_mask->vni[0] ||
@@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_NVGRE_VALID;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
                       ` (2 preceding siblings ...)
  2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-28  3:21     ` Wei Zhao
  3 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  3:21 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the total input set byte number, as the
hardware limits the total input set to 32 bytes.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c1ea74c73..a4d7fcb14 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -25,7 +25,8 @@
 #include "ice_generic_flow.h"
 
 
-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -473,6 +474,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t feild_vec_byte = 0;
 	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
@@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->src_addr[j] =
 						eth_mask->src.addr_bytes[j];
 						i = 1;
+						feild_vec_byte++;
 					}
 					if (eth_mask->dst.addr_bytes[j]) {
 						h->dst_addr[j] =
@@ -547,6 +550,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->dst_addr[j] =
 						eth_mask->dst.addr_bytes[j];
 						i = 1;
+						feild_vec_byte++;
 					}
 				}
 				if (i)
@@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						eth_spec->type;
 					list[t].m_u.ethertype.ethtype_id =
 						eth_mask->type;
+					feild_vec_byte += 2;
 					t++;
 				}
 			}
@@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.src_addr;
 					list[t].m_u.ipv4_hdr.src_addr =
 						ipv4_mask->hdr.src_addr;
+					feild_vec_byte += 2;
 				}
 				if (ipv4_mask->hdr.dst_addr) {
 					list[t].h_u.ipv4_hdr.dst_addr =
 						ipv4_spec->hdr.dst_addr;
 					list[t].m_u.ipv4_hdr.dst_addr =
 						ipv4_mask->hdr.dst_addr;
+					feild_vec_byte += 2;
 				}
 				if (ipv4_mask->hdr.time_to_live) {
 					list[t].h_u.ipv4_hdr.time_to_live =
 						ipv4_spec->hdr.time_to_live;
 					list[t].m_u.ipv4_hdr.time_to_live =
 						ipv4_mask->hdr.time_to_live;
+					feild_vec_byte++;
 				}
 				if (ipv4_mask->hdr.next_proto_id) {
 					list[t].h_u.ipv4_hdr.protocol =
 						ipv4_spec->hdr.next_proto_id;
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
+					feild_vec_byte++;
 				}
 				if ((ipv4_spec->hdr.next_proto_id &
 					ipv4_mask->hdr.next_proto_id) ==
@@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.type_of_service;
 					list[t].m_u.ipv4_hdr.tos =
 						ipv4_mask->hdr.type_of_service;
+					feild_vec_byte++;
 				}
 				t++;
 			}
@@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 						s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+						feild_vec_byte++;
 					}
 					if (ipv6_mask->hdr.dst_addr[j]) {
 						f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 						s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+						feild_vec_byte++;
 					}
 				}
 				if (ipv6_mask->hdr.proto) {
@@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.proto;
 					s->next_hdr =
 						ipv6_mask->hdr.proto;
+					feild_vec_byte++;
 				}
 				if (ipv6_mask->hdr.hop_limits) {
 					f->hop_limit =
 						ipv6_spec->hdr.hop_limits;
 					s->hop_limit =
 						ipv6_mask->hdr.hop_limits;
+					feild_vec_byte++;
 				}
 				if (ipv6_mask->hdr.vtc_flow &
 						rte_cpu_to_be_32
@@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 							RTE_IPV6_HDR_TC_MASK) >>
 							RTE_IPV6_HDR_TC_SHIFT;
 					s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+					feild_vec_byte += 4;
 				}
 				t++;
 			}
@@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						udp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						udp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (udp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						udp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						udp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
-						t++;
+				t++;
 			}
 			break;
 
@@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						tcp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						tcp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (tcp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						tcp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						tcp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						sctp_spec->hdr.src_port;
 					list[t].m_u.sctp_hdr.src_port =
 						sctp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (sctp_mask->hdr.dst_port) {
 					list[t].h_u.sctp_hdr.dst_port =
 						sctp_spec->hdr.dst_port;
 					list[t].m_u.sctp_hdr.dst_port =
 						sctp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -978,6 +1000,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -1006,6 +1029,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
 					input_set |= ICE_INSET_VLAN_OUTER;
+					feild_vec_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
 					list[t].h_u.vlan_hdr.type =
@@ -1013,6 +1037,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.type =
 						vlan_mask->inner_type;
 					input_set |= ICE_INSET_ETHERTYPE;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -1053,6 +1078,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					feild_vec_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1085,7 +1111,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					feild_vec_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1142,6 +1168,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1198,6 +1225,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1237,6 +1265,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1342,6 +1371,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
+	if (feild_vec_byte >= MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return -ENOTSUP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch
  2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
                     ` (4 preceding siblings ...)
  2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-28  5:01   ` Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
                       ` (4 more replies)
  5 siblings, 5 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:01 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu

1. add more support for the switch parser of PPPoE packets.
2. add a check for the NVGRE protocol.
3. support flows for a specific L4 type.
4. add an input set byte number check.

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix a bug in the patch "add redirect support for VSI list rule".
add information to the release notes.

v3:
add input set byte number check.
update code per code-style review comments.

Wei Zhao (4):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check

 doc/guides/rel_notes/release_20_08.rst |   2 +
 drivers/net/ice/ice_switch_filter.c    | 190 +++++++++++++++++++++----
 2 files changed, 167 insertions(+), 25 deletions(-)

-- 
2.19.1



* [dpdk-stable] [PATCH v3 1/4] net/ice: add support more PPPoE packet type for switch
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-28  5:01     ` " Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:01 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets.
It enables parsing of the TCP/UDP L4 layer and the IPv4/IPv6 L3 layer
in the PPPoE payload, so the L4 dst/src port and the L3 IP address can
be used as input set for PPPoE-related switch filter rules.
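
When the pattern carries no explicit PPP protocol ID, the patch picks the
tunnel profile from which inner L3/L4 items were present. A minimal sketch
of that fallback, assuming illustrative enum names (the driver itself uses
ICE_SW_TUN_PPPOE_* values):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the driver's ICE_SW_TUN_PPPOE_* values. */
enum pppoe_tun {
	PPPOE_PAY, PPPOE_IPV4, PPPOE_IPV4_TCP, PPPOE_IPV4_UDP,
	PPPOE_IPV6, PPPOE_IPV6_TCP, PPPOE_IPV6_UDP,
};

/* Pick the PPPoE tunnel profile from the inner L3/L4 items seen
 * while walking the flow pattern; most specific match wins. */
static enum pppoe_tun
pppoe_tun_fallback(bool ipv4_valid, bool ipv6_valid,
		   bool tcp_valid, bool udp_valid)
{
	if (ipv6_valid && udp_valid)
		return PPPOE_IPV6_UDP;
	if (ipv6_valid && tcp_valid)
		return PPPOE_IPV6_TCP;
	if (ipv4_valid && udp_valid)
		return PPPOE_IPV4_UDP;
	if (ipv4_valid && tcp_valid)
		return PPPOE_IPV4_TCP;
	if (ipv6_valid)
		return PPPOE_IPV6;
	if (ipv4_valid)
		return PPPOE_IPV4;
	return PPPOE_PAY;
}
```

This mirrors the if/else chain the patch appends at the end of
ice_switch_inset_get(), guarded by pppoe_patt_valid && !pppoe_prot_valid.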

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   2 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 102 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 3c40424cc..79ef218b9 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -86,6 +86,8 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet type for switch filter.
+
 
 Removed Items
 -------------
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5ccd020c5..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1



* [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
@ 2020-06-28  5:01     ` Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:01 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check for the protocol type of IPv4 packets:
the tunnel type needs to be updated when NVGRE is in the payload.
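
The check boils down to testing the masked IPv4 next-protocol field
against GRE (0x2F, the protocol number NVGRE rides on). A sketch under
illustrative names (the driver sets *tun_type = ICE_SW_TUN_AND_NON_TUN
when this returns true):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* IP protocol number for GRE, which NVGRE encapsulation uses. */
#define IPV4_PROTO_NVGRE 0x2F

/* True when the rule's masked next_proto_id field matches GRE,
 * i.e. the IPv4 payload may carry an NVGRE tunnel. */
static bool
ipv4_payload_is_nvgre(uint8_t proto_spec, uint8_t proto_mask)
{
	return (proto_spec & proto_mask) == IPV4_PROTO_NVGRE;
}
```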

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..c607e8d17 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1



* [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
@ 2020-06-28  5:01     ` Wei Zhao
  2020-06-29  1:55       ` Zhang, Qi Z
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
  4 siblings, 1 reply; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:01 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more specific tunnel types for IPv4/IPv6 packets.
It enables the TCP/UDP layer of IPv4/IPv6 as L4 payload, but without
the L4 dst/src port number as input set, for switch filter rules.
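
The selection the patch moves into ice_switch_inset_get() can be
sketched as below; names are illustrative stand-ins for the driver's
ICE_SW_* tunnel types and ICE_TUN_*_VALID flags:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TUN_VXLAN_VALID 0x0001
#define TUN_NVGRE_VALID 0x0002

/* Hypothetical stand-ins for the ice_sw_tunnel_type values used. */
enum sw_tun {
	NON_TUN, SW_TUN_VXLAN, SW_TUN_NVGRE,
	SW_IPV4_TCP, SW_IPV4_UDP, SW_IPV6_TCP, SW_IPV6_UDP,
};

/* Refine the default NON_TUN type: tunnels take precedence, then a
 * plain IPv4/IPv6 + TCP/UDP pattern selects the specific L4 type. */
static enum sw_tun
refine_tun_type(uint16_t tunnel_valid, bool ipv4_valid, bool ipv6_valid,
		bool tcp_valid, bool udp_valid)
{
	if (tunnel_valid == TUN_VXLAN_VALID)
		return SW_TUN_VXLAN;
	if (tunnel_valid == TUN_NVGRE_VALID)
		return SW_TUN_NVGRE;
	if (ipv4_valid && tcp_valid)
		return SW_IPV4_TCP;
	if (ipv4_valid && udp_valid)
		return SW_IPV4_UDP;
	if (ipv6_valid && tcp_valid)
		return SW_IPV6_TCP;
	if (ipv6_valid && udp_valid)
		return SW_IPV6_UDP;
	return NON_TUN;
}
```

This is why the VXLAN/NVGRE shortcut in ice_switch_parse_pattern_action()
can be removed: the parser now derives the type from the full pattern.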

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..c1ea74c73 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -29,6 +29,8 @@
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
+#define ICE_TUN_VXLAN_VALID	0x0001
+#define ICE_TUN_NVGRE_VALID	0x0002
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_VXLAN_VALID;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
 				if (vxlan_mask->vni[0] ||
@@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_NVGRE_VALID;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1



* [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
                       ` (2 preceding siblings ...)
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-28  5:01     ` Wei Zhao
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:01 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the total number of input set bytes,
as the hardware limits the input set to 32 bytes in total.
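
The idea is to count one byte for every masked byte of the pattern and
reject the rule once the count reaches the 32-byte field-vector budget.
A rough sketch, with hypothetical helper names (the patch itself bumps a
counter inline at each masked field and rejects with -ENOTSUP):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_INPUT_SET_BYTE 32

/* Count how many bytes of a field's mask are non-zero; each one
 * consumes a byte of the switch filter's input set budget. */
static int
count_masked_bytes(const uint8_t *mask, size_t len)
{
	int n = 0;
	size_t i;

	for (i = 0; i < len; i++)
		if (mask[i])
			n++;
	return n;
}

/* The patch rejects the rule when the running total reaches the
 * budget (it uses ">= MAX_INPUT_SET_BYTE" as the failure test). */
static int
input_set_fits(int field_vec_byte)
{
	return field_vec_byte < MAX_INPUT_SET_BYTE;
}
```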

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c1ea74c73..a4d7fcb14 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -25,7 +25,8 @@
 #include "ice_generic_flow.h"
 
 
-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -473,6 +474,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t feild_vec_byte = 0;
 	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
@@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->src_addr[j] =
 						eth_mask->src.addr_bytes[j];
 						i = 1;
+						feild_vec_byte++;
 					}
 					if (eth_mask->dst.addr_bytes[j]) {
 						h->dst_addr[j] =
@@ -547,6 +550,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->dst_addr[j] =
 						eth_mask->dst.addr_bytes[j];
 						i = 1;
+						feild_vec_byte++;
 					}
 				}
 				if (i)
@@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						eth_spec->type;
 					list[t].m_u.ethertype.ethtype_id =
 						eth_mask->type;
+					feild_vec_byte += 2;
 					t++;
 				}
 			}
@@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.src_addr;
 					list[t].m_u.ipv4_hdr.src_addr =
 						ipv4_mask->hdr.src_addr;
+					feild_vec_byte += 2;
 				}
 				if (ipv4_mask->hdr.dst_addr) {
 					list[t].h_u.ipv4_hdr.dst_addr =
 						ipv4_spec->hdr.dst_addr;
 					list[t].m_u.ipv4_hdr.dst_addr =
 						ipv4_mask->hdr.dst_addr;
+					feild_vec_byte += 2;
 				}
 				if (ipv4_mask->hdr.time_to_live) {
 					list[t].h_u.ipv4_hdr.time_to_live =
 						ipv4_spec->hdr.time_to_live;
 					list[t].m_u.ipv4_hdr.time_to_live =
 						ipv4_mask->hdr.time_to_live;
+					feild_vec_byte++;
 				}
 				if (ipv4_mask->hdr.next_proto_id) {
 					list[t].h_u.ipv4_hdr.protocol =
 						ipv4_spec->hdr.next_proto_id;
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
+					feild_vec_byte++;
 				}
 				if ((ipv4_spec->hdr.next_proto_id &
 					ipv4_mask->hdr.next_proto_id) ==
@@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.type_of_service;
 					list[t].m_u.ipv4_hdr.tos =
 						ipv4_mask->hdr.type_of_service;
+					feild_vec_byte++;
 				}
 				t++;
 			}
@@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 						s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+						feild_vec_byte++;
 					}
 					if (ipv6_mask->hdr.dst_addr[j]) {
 						f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 						s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+						feild_vec_byte++;
 					}
 				}
 				if (ipv6_mask->hdr.proto) {
@@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.proto;
 					s->next_hdr =
 						ipv6_mask->hdr.proto;
+					feild_vec_byte++;
 				}
 				if (ipv6_mask->hdr.hop_limits) {
 					f->hop_limit =
 						ipv6_spec->hdr.hop_limits;
 					s->hop_limit =
 						ipv6_mask->hdr.hop_limits;
+					feild_vec_byte++;
 				}
 				if (ipv6_mask->hdr.vtc_flow &
 						rte_cpu_to_be_32
@@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 							RTE_IPV6_HDR_TC_MASK) >>
 							RTE_IPV6_HDR_TC_SHIFT;
 					s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+					feild_vec_byte += 4;
 				}
 				t++;
 			}
@@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						udp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						udp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (udp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						udp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						udp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
-						t++;
+				t++;
 			}
 			break;
 
@@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						tcp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						tcp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (tcp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						tcp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						tcp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						sctp_spec->hdr.src_port;
 					list[t].m_u.sctp_hdr.src_port =
 						sctp_mask->hdr.src_port;
+					feild_vec_byte += 2;
 				}
 				if (sctp_mask->hdr.dst_port) {
 					list[t].h_u.sctp_hdr.dst_port =
 						sctp_spec->hdr.dst_port;
 					list[t].m_u.sctp_hdr.dst_port =
 						sctp_mask->hdr.dst_port;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -978,6 +1000,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -1006,6 +1029,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
 					input_set |= ICE_INSET_VLAN_OUTER;
+					feild_vec_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
 					list[t].h_u.vlan_hdr.type =
@@ -1013,6 +1037,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.type =
 						vlan_mask->inner_type;
 					input_set |= ICE_INSET_ETHERTYPE;
+					feild_vec_byte += 2;
 				}
 				t++;
 			}
@@ -1053,6 +1078,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					feild_vec_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1085,7 +1111,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					feild_vec_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1142,6 +1168,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1198,6 +1225,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1237,6 +1265,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				feild_vec_byte += 4;
 				t++;
 			}
 
@@ -1342,6 +1371,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
+	if (feild_vec_byte >= MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return -ENOTSUP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
-- 
2.19.1



* [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch
  2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
                       ` (3 preceding siblings ...)
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
@ 2020-06-28  5:28     ` Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 1/4] net/ice: add support " Wei Zhao
                         ` (4 more replies)
  4 siblings, 5 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:28 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu

1. add more support to the switch parser for PPPoE packets
2. add a check for the NVGRE protocol
3. support switch flows for a specific L4 type
4. add an input set byte number check

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix bug in patch "add redirect support for VSI list rule".
add information to the release notes.

v3:
add input set byte number check
code updates per code-style review comments

v4:
fix typo in patch

Wei Zhao (4):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check

 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 190 +++++++++++++++++++++----
 2 files changed, 166 insertions(+), 25 deletions(-)

-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v4 1/4] net/ice: add support more PPPoE packet type for switch
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-28  5:28       ` " Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
                         ` (3 subsequent siblings)
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:28 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets:
it enables parsing of the TCP/UDP L4 layer and the IPv4/IPv6 L3 layer
in the PPPoE payload, so the L4 dst/src ports and L3 IP addresses can
be used as input set for PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 101 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 3c40424cc..90b58a027 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -86,6 +86,7 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet type for switch filter.
 
 Removed Items
 -------------
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5ccd020c5..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v4 2/4] net/ice: fix tunnel type for switch rule
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 1/4] net/ice: add support " Wei Zhao
@ 2020-06-28  5:28       ` Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
                         ` (2 subsequent siblings)
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:28 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the protocol type of IPv4 packets;
the tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..c607e8d17 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v4 3/4] net/ice: support switch flow for specific L4 type
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 1/4] net/ice: add support " Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
@ 2020-06-28  5:28       ` Wei Zhao
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 4/4] net/ice: add input set byte number check Wei Zhao
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:28 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more specific tunnel types for IPv4/IPv6 packets.
It enables a TCP/UDP layer over IPv4/IPv6 as L4 payload, but without
the L4 dst/src port numbers in the input set, for switch filter rules.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..c1ea74c73 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -29,6 +29,8 @@
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
+#define ICE_TUN_VXLAN_VALID	0x0001
+#define ICE_TUN_NVGRE_VALID	0x0002
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
-	bool tunnel_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_VXLAN_VALID;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
 				if (vxlan_mask->vni[0] ||
@@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
-			tunnel_valid = 1;
+			tunnel_valid = ICE_TUN_NVGRE_VALID;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
 				if (nvgre_mask->tni[0] ||
@@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v4 4/4] net/ice: add input set byte number check
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
                         ` (2 preceding siblings ...)
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-28  5:28       ` Wei Zhao
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
  4 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-28  5:28 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the total input set byte number,
as the hardware limits the total to 32 bytes.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c1ea74c73..d399c5a2e 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -25,7 +25,8 @@
 #include "ice_generic_flow.h"
 
 
-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -473,6 +474,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t input_set_byte = 0;
 	uint16_t tunnel_valid = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
@@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->src_addr[j] =
 						eth_mask->src.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 					if (eth_mask->dst.addr_bytes[j]) {
 						h->dst_addr[j] =
@@ -547,6 +550,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->dst_addr[j] =
 						eth_mask->dst.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 				}
 				if (i)
@@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						eth_spec->type;
 					list[t].m_u.ethertype.ethtype_id =
 						eth_mask->type;
+					input_set_byte += 2;
 					t++;
 				}
 			}
@@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.src_addr;
 					list[t].m_u.ipv4_hdr.src_addr =
 						ipv4_mask->hdr.src_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.dst_addr) {
 					list[t].h_u.ipv4_hdr.dst_addr =
 						ipv4_spec->hdr.dst_addr;
 					list[t].m_u.ipv4_hdr.dst_addr =
 						ipv4_mask->hdr.dst_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.time_to_live) {
 					list[t].h_u.ipv4_hdr.time_to_live =
 						ipv4_spec->hdr.time_to_live;
 					list[t].m_u.ipv4_hdr.time_to_live =
 						ipv4_mask->hdr.time_to_live;
+					input_set_byte++;
 				}
 				if (ipv4_mask->hdr.next_proto_id) {
 					list[t].h_u.ipv4_hdr.protocol =
 						ipv4_spec->hdr.next_proto_id;
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
+					input_set_byte++;
 				}
 				if ((ipv4_spec->hdr.next_proto_id &
 					ipv4_mask->hdr.next_proto_id) ==
@@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.type_of_service;
 					list[t].m_u.ipv4_hdr.tos =
 						ipv4_mask->hdr.type_of_service;
+					input_set_byte++;
 				}
 				t++;
 			}
@@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 						s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+						input_set_byte++;
 					}
 					if (ipv6_mask->hdr.dst_addr[j]) {
 						f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 						s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+						input_set_byte++;
 					}
 				}
 				if (ipv6_mask->hdr.proto) {
@@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.proto;
 					s->next_hdr =
 						ipv6_mask->hdr.proto;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.hop_limits) {
 					f->hop_limit =
 						ipv6_spec->hdr.hop_limits;
 					s->hop_limit =
 						ipv6_mask->hdr.hop_limits;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.vtc_flow &
 						rte_cpu_to_be_32
@@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 							RTE_IPV6_HDR_TC_MASK) >>
 							RTE_IPV6_HDR_TC_SHIFT;
 					s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+					input_set_byte += 4;
 				}
 				t++;
 			}
@@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						udp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						udp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (udp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						udp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						udp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
-						t++;
+				t++;
 			}
 			break;
 
@@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						tcp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						tcp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (tcp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						tcp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						tcp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						sctp_spec->hdr.src_port;
 					list[t].m_u.sctp_hdr.src_port =
 						sctp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (sctp_mask->hdr.dst_port) {
 					list[t].h_u.sctp_hdr.dst_port =
 						sctp_spec->hdr.dst_port;
 					list[t].m_u.sctp_hdr.dst_port =
 						sctp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -978,6 +1000,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1006,6 +1029,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
 					input_set |= ICE_INSET_VLAN_OUTER;
+					input_set_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
 					list[t].h_u.vlan_hdr.type =
@@ -1013,6 +1037,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.type =
 						vlan_mask->inner_type;
 					input_set |= ICE_INSET_ETHERTYPE;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1053,6 +1078,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					input_set_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1085,7 +1111,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					input_set_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1142,6 +1168,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1198,6 +1225,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1237,6 +1265,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1342,6 +1371,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
+	if (input_set_byte > MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return -ENOTSUP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
  2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-29  1:55       ` Zhang, Qi Z
  2020-06-29  2:01         ` Zhao1, Wei
  0 siblings, 1 reply; 44+ messages in thread
From: Zhang, Qi Z @ 2020-06-29  1:55 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable, Lu, Nannan



> -----Original Message-----
> From: Zhao1, Wei <wei.zhao1@intel.com>
> Sent: Sunday, June 28, 2020 1:02 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Nannan
> <nannan.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> Subject: [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
> 
> This patch add more specific tunnel type for ipv4/ipv6 packet, it enable
> tcp/udp layer of ipv4/ipv6 as L4 payload but without
> L4 dst/src port number as input set for the switch filter rule.
> 
> Fixes: 47d460d63233 ("net/ice: rework switch filter")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> ---
>  drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
>  1 file changed, 20 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/ice/ice_switch_filter.c
> b/drivers/net/ice/ice_switch_filter.c
> index c607e8d17..c1ea74c73 100644
> --- a/drivers/net/ice/ice_switch_filter.c
> +++ b/drivers/net/ice/ice_switch_filter.c
> @@ -29,6 +29,8 @@
>  #define ICE_PPP_IPV4_PROTO	0x0021
>  #define ICE_PPP_IPV6_PROTO	0x0057
>  #define ICE_IPV4_PROTO_NVGRE	0x002F
> +#define ICE_TUN_VXLAN_VALID	0x0001
> +#define ICE_TUN_NVGRE_VALID	0x0002

Why not apply the same pattern as the other valid flags?
I mean use vxlan_valid and nvgre_valid.
It could be tunnel_valid = vxlan_valid | nvgre_valid.

> 
>  #define ICE_SW_INSET_ETHER ( \
>  	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@
> -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
>  	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
>  	uint64_t input_set = ICE_INSET_NONE;
> +	uint16_t tunnel_valid = 0;
>  	bool pppoe_elem_valid = 0;
>  	bool pppoe_patt_valid = 0;
>  	bool pppoe_prot_valid = 0;
>  	bool profile_rule = 0;
> -	bool tunnel_valid = 0;
>  	bool ipv6_valiad = 0;
>  	bool ipv4_valiad = 0;
>  	bool udp_valiad = 0;
> @@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  				return 0;
>  			}
> 
> -			tunnel_valid = 1;
> +			tunnel_valid = ICE_TUN_VXLAN_VALID;
>  			if (vxlan_spec && vxlan_mask) {
>  				list[t].type = ICE_VXLAN;
>  				if (vxlan_mask->vni[0] ||
> @@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  					   "Invalid NVGRE item");
>  				return 0;
>  			}
> -			tunnel_valid = 1;
> +			tunnel_valid = ICE_TUN_NVGRE_VALID;
>  			if (nvgre_spec && nvgre_mask) {
>  				list[t].type = ICE_NVGRE;
>  				if (nvgre_mask->tni[0] ||
> @@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item
> pattern[],
>  			*tun_type = ICE_SW_TUN_PPPOE;
>  	}
> 
> +	if (*tun_type == ICE_NON_TUN) {
> +		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
> +			*tun_type = ICE_SW_TUN_VXLAN;
> +		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
> +			*tun_type = ICE_SW_TUN_NVGRE;
> +		else if (ipv4_valiad && tcp_valiad)
> +			*tun_type = ICE_SW_IPV4_TCP;
> +		else if (ipv4_valiad && udp_valiad)
> +			*tun_type = ICE_SW_IPV4_UDP;
> +		else if (ipv6_valiad && tcp_valiad)
> +			*tun_type = ICE_SW_IPV6_TCP;
> +		else if (ipv6_valiad && udp_valiad)
> +			*tun_type = ICE_SW_IPV6_UDP;
> +	}
> +
>  	*lkups_num = t;
> 
>  	return input_set;
> @@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct
> ice_adapter *ad,
> 
>  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
>  		item_num++;
> -		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> -			tun_type = ICE_SW_TUN_VXLAN;
> -		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> -			tun_type = ICE_SW_TUN_NVGRE;
>  		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
>  			const struct rte_flow_item_eth *eth_mask;
>  			if (item->mask)
> --
> 2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
  2020-06-29  1:55       ` Zhang, Qi Z
@ 2020-06-29  2:01         ` Zhao1, Wei
  0 siblings, 0 replies; 44+ messages in thread
From: Zhao1, Wei @ 2020-06-29  2:01 UTC (permalink / raw)
  To: Zhang, Qi Z, dev; +Cc: stable, Lu, Nannan

Hi, 

> -----Original Message-----
> From: Zhang, Qi Z <qi.z.zhang@intel.com>
> Sent: Monday, June 29, 2020 9:56 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: stable@dpdk.org; Lu, Nannan <nannan.lu@intel.com>
> Subject: RE: [PATCH v3 3/4] net/ice: support switch flow for specific L4 type
> 
> 
> 
> > -----Original Message-----
> > From: Zhao1, Wei <wei.zhao1@intel.com>
> > Sent: Sunday, June 28, 2020 1:02 PM
> > To: dev@dpdk.org
> > Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Nannan
> > <nannan.lu@intel.com>; Zhao1, Wei <wei.zhao1@intel.com>
> > Subject: [PATCH v3 3/4] net/ice: support switch flow for specific L4
> > type
> >
> > This patch add more specific tunnel type for ipv4/ipv6 packet, it
> > enable tcp/udp layer of ipv4/ipv6 as L4 payload but without
> > L4 dst/src port number as input set for the switch filter rule.
> >
> > Fixes: 47d460d63233 ("net/ice: rework switch filter")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > ---
> >  drivers/net/ice/ice_switch_filter.c | 27 ++++++++++++++++++++-------
> >  1 file changed, 20 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/net/ice/ice_switch_filter.c
> > b/drivers/net/ice/ice_switch_filter.c
> > index c607e8d17..c1ea74c73 100644
> > --- a/drivers/net/ice/ice_switch_filter.c
> > +++ b/drivers/net/ice/ice_switch_filter.c
> > @@ -29,6 +29,8 @@
> >  #define ICE_PPP_IPV4_PROTO	0x0021
> >  #define ICE_PPP_IPV6_PROTO	0x0057
> >  #define ICE_IPV4_PROTO_NVGRE	0x002F
> > +#define ICE_TUN_VXLAN_VALID	0x0001
> > +#define ICE_TUN_NVGRE_VALID	0x0002
> 
> Why not apply the same pattern with other valid flag?
> I mean use vxlan_valid and nvgre_valid.
> Could be tunnel_valid = vxlan_valid | nvgre_valid.

Because we will extend this to GTP-U and other kinds of packets, there would be more and more xxx_valid variables.
I think we can follow the rte layer and use bit defines for the kinds of tunnel packets.
Defining that many separate valid flags is too complex.
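The bit-define approach argued for above can be sketched standalone; the values mirror the patch, while `TUN_GTPU_VALID` and the helper name are hypothetical illustrations, not the driver's real identifiers:

```c
#include <stdint.h>

/* Each tunnel kind is one bit in a single tunnel_valid word, so a
 * future GTP-U flag is just a new define instead of yet another
 * bool variable. */
#define TUN_VXLAN_VALID	0x0001
#define TUN_NVGRE_VALID	0x0002
#define TUN_GTPU_VALID	0x0004	/* hypothetical future extension */

/* Any nonzero value means some tunnel item was seen. */
static int tunnel_seen(uint16_t tunnel_valid)
{
	return tunnel_valid != 0;
}
```

The combined word still answers the single "is this a tunnel rule?" question, while each bit identifies which tunnel was matched.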

> 
> >
> >  #define ICE_SW_INSET_ETHER ( \
> >  	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) @@
> > -471,11 +473,11 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
> >  	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
> >  	uint64_t input_set = ICE_INSET_NONE;
> > +	uint16_t tunnel_valid = 0;
> >  	bool pppoe_elem_valid = 0;
> >  	bool pppoe_patt_valid = 0;
> >  	bool pppoe_prot_valid = 0;
> >  	bool profile_rule = 0;
> > -	bool tunnel_valid = 0;
> >  	bool ipv6_valiad = 0;
> >  	bool ipv4_valiad = 0;
> >  	bool udp_valiad = 0;
> > @@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  				return 0;
> >  			}
> >
> > -			tunnel_valid = 1;
> > +			tunnel_valid = ICE_TUN_VXLAN_VALID;
> >  			if (vxlan_spec && vxlan_mask) {
> >  				list[t].type = ICE_VXLAN;
> >  				if (vxlan_mask->vni[0] ||
> > @@ -960,7 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  					   "Invalid NVGRE item");
> >  				return 0;
> >  			}
> > -			tunnel_valid = 1;
> > +			tunnel_valid = ICE_TUN_NVGRE_VALID;
> >  			if (nvgre_spec && nvgre_mask) {
> >  				list[t].type = ICE_NVGRE;
> >  				if (nvgre_mask->tni[0] ||
> > @@ -1325,6 +1327,21 @@ ice_switch_inset_get(const struct rte_flow_item
> > pattern[],
> >  			*tun_type = ICE_SW_TUN_PPPOE;
> >  	}
> >
> > +	if (*tun_type == ICE_NON_TUN) {
> > +		if (tunnel_valid == ICE_TUN_VXLAN_VALID)
> > +			*tun_type = ICE_SW_TUN_VXLAN;
> > +		else if (tunnel_valid == ICE_TUN_NVGRE_VALID)
> > +			*tun_type = ICE_SW_TUN_NVGRE;
> > +		else if (ipv4_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV4_TCP;
> > +		else if (ipv4_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV4_UDP;
> > +		else if (ipv6_valiad && tcp_valiad)
> > +			*tun_type = ICE_SW_IPV6_TCP;
> > +		else if (ipv6_valiad && udp_valiad)
> > +			*tun_type = ICE_SW_IPV6_UDP;
> > +	}
> > +
> >  	*lkups_num = t;
> >
> >  	return input_set;
> > @@ -1536,10 +1553,6 @@ ice_switch_parse_pattern_action(struct
> > ice_adapter *ad,
> >
> >  	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> >  		item_num++;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> > -			tun_type = ICE_SW_TUN_VXLAN;
> > -		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> > -			tun_type = ICE_SW_TUN_NVGRE;
> >  		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> >  			const struct rte_flow_item_eth *eth_mask;
> >  			if (item->mask)
> > --
> > 2.19.1
> 



* [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch
  2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
                         ` (3 preceding siblings ...)
  2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 4/4] net/ice: add input set byte number check Wei Zhao
@ 2020-06-29  5:10       ` Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 1/5] net/ice: add support " Wei Zhao
                           ` (6 more replies)
  4 siblings, 7 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu

1. add more support to the switch parser for PPPoE packets
2. add a check for the NVGRE protocol
3. support switch flow for specific L4 types
4. add an input set byte number check
5. fix typos

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix a bug in the "add redirect support for VSI list rule" patch
add information to the release notes

v3:
add input set byte number check
update code per code-style review comments

v4:
fix typo in patch

v5:
add more valid flags

Wei Zhao (5):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check
  net/ice: fix typo

 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 241 ++++++++++++++++++++-----
 2 files changed, 192 insertions(+), 50 deletions(-)

-- 
2.19.1



* [dpdk-stable] [PATCH v5 1/5] net/ice: add support more PPPoE packet type for switch
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
@ 2020-06-29  5:10         ` " Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
                           ` (5 subsequent siblings)
  6 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets.
It enables parsing of the tcp/udp L4 layer and the ipv4/ipv6 L3 layer
in the PPPoE payload, so the L4 dst/src port and L3 IP address can be
used as input set for PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 101 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 3c40424cc..90b58a027 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -86,6 +86,7 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet type for switch filter.
 
 Removed Items
 -------------
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5ccd020c5..3c0c36bce 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,6 +26,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -95,6 +97,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -154,10 +168,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -166,6 +176,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -254,10 +288,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -266,6 +296,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -416,13 +470,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -752,6 +809,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -969,6 +1027,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -989,7 +1048,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1010,7 +1069,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1019,9 +1078,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1232,6 +1303,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1447,9 +1535,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1

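The PPP protocol-ID test this patch adds can be sketched in isolation. The constants match the patch's `ICE_PPP_IPV4_PROTO`/`ICE_PPP_IPV6_PROTO`; the helper name is illustrative and, unlike the driver (which compares big-endian values via `CPU_TO_BE16`), this sketch assumes host byte order:

```c
#include <stdint.h>

/* PPP protocol IDs carried in the PPPoE session payload. */
#define PPP_IPV4_PROTO	0x0021
#define PPP_IPV6_PROTO	0x0057

/* When the masked proto_id is neither IPv4 nor IPv6, the rule matches
 * other PPPoE payload (PPPOE_PAY tunnel type); otherwise a plain
 * PPPoE data rule is used. */
static int is_pppoe_pay(uint16_t proto_spec, uint16_t proto_mask)
{
	uint16_t proto = proto_spec & proto_mask;

	return proto != PPP_IPV4_PROTO && proto != PPP_IPV6_PROTO;
}
```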


* [dpdk-stable] [PATCH v5 2/5] net/ice: fix tunnel type for switch rule
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 1/5] net/ice: add support " Wei Zhao
@ 2020-06-29  5:10         ` Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
                           ` (4 subsequent siblings)
  6 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check for the protocol type of IPv4 packets:
the tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 3c0c36bce..c607e8d17 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -28,6 +28,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -632,6 +633,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1526,7 +1531,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1



* [dpdk-stable] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 1/5] net/ice: add support " Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
@ 2020-06-29  5:10         ` Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 4/5] net/ice: add input set byte number check Wei Zhao
                           ` (3 subsequent siblings)
  6 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more specific tunnel types for ipv4/ipv6 packets.
It enables the tcp/udp layer of ipv4/ipv6 as L4 payload without the
L4 dst/src port number in the input set of the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..7d1cd98f5 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -474,8 +474,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
-	bool profile_rule = 0;
 	bool tunnel_valid = 0;
+	bool profile_rule = 0;
+	bool nvgre_valid = 0;
+	bool vxlan_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -923,7 +925,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid VXLAN item");
 				return 0;
 			}
-
+			vxlan_valid = 1;
 			tunnel_valid = 1;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
@@ -960,6 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
+			nvgre_valid = 1;
 			tunnel_valid = 1;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
@@ -1325,6 +1328,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (vxlan_valid)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (nvgre_valid)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1554,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1

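The fallback selection this patch adds to `ice_switch_inset_get()` can be sketched standalone: when no tunnel item set a type, the L3/L4 valid flags pick a specific type. The enum values and function name below are illustrative stand-ins for the driver's `ICE_SW_*` types:

```c
/* Illustrative stand-ins for the driver's ice_sw_tunnel_type values. */
enum tun_type {
	NON_TUN, SW_TUN_VXLAN, SW_TUN_NVGRE,
	SW_IPV4_TCP, SW_IPV4_UDP, SW_IPV6_TCP, SW_IPV6_UDP
};

/* Mirror of the if/else chain in the hunk above: tunnel items win,
 * then an L3+L4 combination selects a specific non-tunnel type. */
static enum tun_type
pick_tun_type(int vxlan_valid, int nvgre_valid,
	      int ipv4_valid, int ipv6_valid,
	      int tcp_valid, int udp_valid)
{
	if (vxlan_valid)
		return SW_TUN_VXLAN;
	if (nvgre_valid)
		return SW_TUN_NVGRE;
	if (ipv4_valid && tcp_valid)
		return SW_IPV4_TCP;
	if (ipv4_valid && udp_valid)
		return SW_IPV4_UDP;
	if (ipv6_valid && tcp_valid)
		return SW_IPV6_TCP;
	if (ipv6_valid && udp_valid)
		return SW_IPV6_UDP;
	return NON_TUN;
}
```

This is also why the per-item VXLAN/NVGRE assignments in `ice_switch_parse_pattern_action()` could be removed: the decision now happens once, after the whole pattern is parsed.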


* [dpdk-stable] [PATCH v5 4/5] net/ice: add input set byte number check
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
                           ` (2 preceding siblings ...)
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-06-29  5:10         ` Wei Zhao
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 5/5] net/ice: fix typo Wei Zhao
                           ` (2 subsequent siblings)
  6 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the total number of input set bytes,
as the hardware limits the total input set to 32 bytes.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 7d1cd98f5..5054555c2 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -25,7 +25,8 @@
 #include "ice_generic_flow.h"
 
 
-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -471,6 +472,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t input_set_byte = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
@@ -540,6 +542,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->src_addr[j] =
 						eth_mask->src.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 					if (eth_mask->dst.addr_bytes[j]) {
 						h->dst_addr[j] =
@@ -547,6 +550,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->dst_addr[j] =
 						eth_mask->dst.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 				}
 				if (i)
@@ -557,6 +561,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						eth_spec->type;
 					list[t].m_u.ethertype.ethtype_id =
 						eth_mask->type;
+					input_set_byte += 2;
 					t++;
 				}
 			}
@@ -616,24 +621,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.src_addr;
 					list[t].m_u.ipv4_hdr.src_addr =
 						ipv4_mask->hdr.src_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.dst_addr) {
 					list[t].h_u.ipv4_hdr.dst_addr =
 						ipv4_spec->hdr.dst_addr;
 					list[t].m_u.ipv4_hdr.dst_addr =
 						ipv4_mask->hdr.dst_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.time_to_live) {
 					list[t].h_u.ipv4_hdr.time_to_live =
 						ipv4_spec->hdr.time_to_live;
 					list[t].m_u.ipv4_hdr.time_to_live =
 						ipv4_mask->hdr.time_to_live;
+					input_set_byte++;
 				}
 				if (ipv4_mask->hdr.next_proto_id) {
 					list[t].h_u.ipv4_hdr.protocol =
 						ipv4_spec->hdr.next_proto_id;
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
+					input_set_byte++;
 				}
 				if ((ipv4_spec->hdr.next_proto_id &
 					ipv4_mask->hdr.next_proto_id) ==
@@ -644,6 +653,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.type_of_service;
 					list[t].m_u.ipv4_hdr.tos =
 						ipv4_mask->hdr.type_of_service;
+					input_set_byte++;
 				}
 				t++;
 			}
@@ -721,12 +731,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 						s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+						input_set_byte++;
 					}
 					if (ipv6_mask->hdr.dst_addr[j]) {
 						f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 						s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+						input_set_byte++;
 					}
 				}
 				if (ipv6_mask->hdr.proto) {
@@ -734,12 +746,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.proto;
 					s->next_hdr =
 						ipv6_mask->hdr.proto;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.hop_limits) {
 					f->hop_limit =
 						ipv6_spec->hdr.hop_limits;
 					s->hop_limit =
 						ipv6_mask->hdr.hop_limits;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.vtc_flow &
 						rte_cpu_to_be_32
@@ -757,6 +771,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 							RTE_IPV6_HDR_TC_MASK) >>
 							RTE_IPV6_HDR_TC_SHIFT;
 					s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+					input_set_byte += 4;
 				}
 				t++;
 			}
@@ -802,14 +817,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						udp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						udp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (udp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						udp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						udp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
-						t++;
+				t++;
 			}
 			break;
 
@@ -854,12 +871,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						tcp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						tcp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (tcp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						tcp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						tcp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -899,12 +918,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						sctp_spec->hdr.src_port;
 					list[t].m_u.sctp_hdr.src_port =
 						sctp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (sctp_mask->hdr.dst_port) {
 					list[t].h_u.sctp_hdr.dst_port =
 						sctp_spec->hdr.dst_port;
 					list[t].m_u.sctp_hdr.dst_port =
 						sctp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -942,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -979,6 +1001,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1007,6 +1030,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
 					input_set |= ICE_INSET_VLAN_OUTER;
+					input_set_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
 					list[t].h_u.vlan_hdr.type =
@@ -1014,6 +1038,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.type =
 						vlan_mask->inner_type;
 					input_set |= ICE_INSET_ETHERTYPE;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1054,6 +1079,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					input_set_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1086,7 +1112,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					input_set_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1143,6 +1169,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1199,6 +1226,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1238,6 +1266,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1343,6 +1372,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
+	if (input_set_byte > MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return -ENOTSUP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
-- 
2.19.1

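The 32-byte budget the patch enforces can be sketched as a standalone check. The limit matches the patch's `MAX_INPUT_SET_BYTE`; the helper is hypothetical, since the driver instead increments `input_set_byte` inline as each field is matched:

```c
#include <stddef.h>
#include <stdint.h>

/* Hardware limit on the total width of matched fields in one rule. */
#define MAX_INPUT_SET_BYTE	32

/* Sum the byte width of every matched field and accept the rule only
 * if the total fits the hardware profile. */
static int input_set_fits(const uint8_t *field_bytes, size_t n)
{
	uint16_t total = 0;
	size_t i;

	for (i = 0; i < n; i++)
		total += field_bytes[i];
	return total <= MAX_INPUT_SET_BYTE;
}
```

For example, DMAC + SMAC + ethertype + IPv4 src/dst (6+6+2+4+4 = 22 bytes) fits, while IPv6 src/dst plus L4 ports (16+16+2+2 = 36 bytes) exceeds the budget and the rule is rejected with `-ENOTSUP`.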


* [dpdk-stable] [PATCH v5 5/5] net/ice: fix typo
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
                           ` (3 preceding siblings ...)
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 4/5] net/ice: add input set byte number check Wei Zhao
@ 2020-06-29  5:10         ` Wei Zhao
  2020-07-03  2:47         ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Lu, Nannan
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
  6 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-06-29  5:10 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

fix typos of the word "valid".

Fixes: 8f5d8e74fb38 ("net/ice: support flow for AH ESP and L2TP")
Fixes: 66ff8851792f ("net/ice: support ESP/AH/L2TP")
Fixes: 45b53ed3701d ("net/ice: support IPv6 NAT-T")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 76 ++++++++++++++---------------
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 5054555c2..267af5a54 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -480,10 +480,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
 	bool vxlan_valid = 0;
-	bool ipv6_valiad = 0;
-	bool ipv4_valiad = 0;
-	bool udp_valiad = 0;
-	bool tcp_valiad = 0;
+	bool ipv6_valid = 0;
+	bool ipv4_valid = 0;
+	bool udp_valid = 0;
+	bool tcp_valid = 0;
 	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
@@ -570,7 +570,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ipv4_spec = item->spec;
 			ipv4_mask = item->mask;
-			ipv4_valiad = 1;
+			ipv4_valid = 1;
 			if (ipv4_spec && ipv4_mask) {
 				/* Check IPv4 mask and update input set */
 				if (ipv4_mask->hdr.version_ihl ||
@@ -662,7 +662,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ipv6_spec = item->spec;
 			ipv6_mask = item->mask;
-			ipv6_valiad = 1;
+			ipv6_valid = 1;
 			if (ipv6_spec && ipv6_mask) {
 				if (ipv6_mask->hdr.payload_len) {
 					rte_flow_error_set(error, EINVAL,
@@ -780,7 +780,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			udp_spec = item->spec;
 			udp_mask = item->mask;
-			udp_valiad = 1;
+			udp_valid = 1;
 			if (udp_spec && udp_mask) {
 				/* Check UDP mask and update input set*/
 				if (udp_mask->hdr.dgram_len ||
@@ -833,7 +833,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
-			tcp_valiad = 1;
+			tcp_valid = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -1151,16 +1151,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 
 			if (!esp_spec && !esp_mask && !input_set) {
 				profile_rule = 1;
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_PROFID_IPV6_ESP;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (esp_spec && esp_mask &&
 						esp_mask->hdr.spi){
-				if (udp_valiad)
+				if (udp_valid)
 					list[t].type = ICE_NAT_T;
 				else
 					list[t].type = ICE_ESP;
@@ -1174,13 +1174,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type = ICE_SW_TUN_IPV6_NAT_T;
-				else if (ipv4_valiad && udp_valiad)
+				else if (ipv4_valid && udp_valid)
 					*tun_type = ICE_SW_TUN_IPV4_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_IPV6_ESP;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type = ICE_SW_TUN_IPV4_ESP;
 			}
 			break;
@@ -1211,12 +1211,12 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 
 			if (!ah_spec && !ah_mask && !input_set) {
 				profile_rule = 1;
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_PROFID_IPV6_AH;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (ah_spec && ah_mask &&
 						ah_mask->spi){
@@ -1231,11 +1231,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (udp_valiad)
+				if (udp_valid)
 					return 0;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_IPV6_AH;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type = ICE_SW_TUN_IPV4_AH;
 			}
 			break;
@@ -1253,10 +1253,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!l2tp_spec && !l2tp_mask && !input_set) {
-				if (ipv6_valiad)
+				if (ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (l2tp_spec && l2tp_mask &&
 						l2tp_mask->session_id){
@@ -1271,10 +1271,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (ipv6_valiad)
+				if (ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_IPV6_L2TPV3;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type =
 					ICE_SW_TUN_IPV4_L2TPV3;
 			}
@@ -1308,7 +1308,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				}
 				if (pfcp_mask->s_field &&
 					pfcp_spec->s_field == 0x01 &&
-					ipv6_valiad)
+					ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_PFCP_SESSION;
 				else if (pfcp_mask->s_field &&
@@ -1317,7 +1317,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					ICE_SW_TUN_PROFID_IPV4_PFCP_SESSION;
 				else if (pfcp_mask->s_field &&
 					!pfcp_spec->s_field &&
-					ipv6_valiad)
+					ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_PFCP_NODE;
 				else if (pfcp_mask->s_field &&
@@ -1341,17 +1341,17 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	}
 
 	if (pppoe_patt_valid && !pppoe_prot_valid) {
-		if (ipv6_valiad && udp_valiad)
+		if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
-		else if (ipv6_valiad && tcp_valiad)
+		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
-		else if (ipv4_valiad && udp_valiad)
+		else if (ipv4_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
-		else if (ipv4_valiad && tcp_valiad)
+		else if (ipv4_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
-		else if (ipv6_valiad)
+		else if (ipv6_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
-		else if (ipv4_valiad)
+		else if (ipv4_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
 		else
 			*tun_type = ICE_SW_TUN_PPPOE;
@@ -1362,13 +1362,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_VXLAN;
 		else if (nvgre_valid)
 			*tun_type = ICE_SW_TUN_NVGRE;
-		else if (ipv4_valiad && tcp_valiad)
+		else if (ipv4_valid && tcp_valid)
 			*tun_type = ICE_SW_IPV4_TCP;
-		else if (ipv4_valiad && udp_valiad)
+		else if (ipv4_valid && udp_valid)
 			*tun_type = ICE_SW_IPV4_UDP;
-		else if (ipv6_valiad && tcp_valiad)
+		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_IPV6_TCP;
-		else if (ipv6_valiad && udp_valiad)
+		else if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
                           ` (4 preceding siblings ...)
  2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 5/5] net/ice: fix typo Wei Zhao
@ 2020-07-03  2:47         ` Lu, Nannan
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
  6 siblings, 0 replies; 44+ messages in thread
From: Lu, Nannan @ 2020-07-03  2:47 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable, Zhang, Qi Z

-----Original Message-----
From: Zhao1, Wei <wei.zhao1@intel.com> 
Sent: Monday, June 29, 2020 1:10 PM
To: dev@dpdk.org
Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Nannan <nannan.lu@intel.com>
Subject: [PATCH v5 0/5] enable more PPPoE packet type for switch

1. add more support for the switch parser of PPPoE packets
2. add a check for the NVGRE protocol
3. support flows for specific L4 types
4. add an input set byte number check
5. fix typo

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix bug in patch "add redirect support for VSI list rule".
add information in release note.

v3:
add input set byte number check
update code per code-style review comments

v4:
fix typo in patch

v5:
add more valid flag

Wei Zhao (5):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check
  net/ice: fix typo

 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 241 ++++++++++++++++++++-----
 2 files changed, 192 insertions(+), 50 deletions(-)

Tested-by: Nannan Lu <nannan.lu@intel.com>

-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v6 0/5] enable more PPPoE packet type for switch
  2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
                           ` (5 preceding siblings ...)
  2020-07-03  2:47         ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Lu, Nannan
@ 2020-07-03  6:19         ` " Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 1/5] net/ice: add support more PPPoE packet " Wei Zhao
                             ` (5 more replies)
  6 siblings, 6 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu

1. add more support for the switch parser of PPPoE packets
2. add a check for the NVGRE protocol
3. support flows for specific L4 types
4. add an input set byte number check
5. fix typo

This patchset is based on:
[1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update

Depends-on: series-10300

v2:
fix bug in patch "add redirect support for VSI list rule".
add information in release note.

v3:
add input set byte number check
update code per code-style review comments

v4:
fix typo in patch

v5:
add more valid flag

v6:
rebase for code merge

Wei Zhao (5):
  net/ice: add support more PPPoE packet type for switch
  net/ice: fix tunnel type for switch rule
  net/ice: support switch flow for specific L4 type
  net/ice: add input set byte number check
  net/ice: fix typo

 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 241 ++++++++++++++++++++-----
 2 files changed, 192 insertions(+), 50 deletions(-)

Tested-by: Nannan Lu <nannan.lu@intel.com>

-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread

* [dpdk-stable] [PATCH v6 1/5] net/ice: add support more PPPoE packet type for switch
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
@ 2020-07-03  6:19           ` " Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
                             ` (4 subsequent siblings)
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more support to the switch parser for PPPoE packets:
it enables TCP/UDP L4 layer and IPv4/IPv6 L3 layer parsing for the
PPPoE payload, so the L4 dst/src ports and L3 IP addresses can be used
as input set for PPPoE-related switch filter rules.

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/ice/ice_switch_filter.c    | 115 +++++++++++++++++++++----
 2 files changed, 101 insertions(+), 15 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 5cbc4ce14..f4b858727 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -90,6 +90,7 @@ New Features
   Updated the Intel ice driver with new features and improvements, including:
 
   * Added support for DCF datapath configuration.
+  * Added support for more PPPoE packet type for switch filter.
 
 * **Added support for BPF_ABS/BPF_IND load instructions.**
 
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index cc0af23ad..12a015f87 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -27,6 +27,8 @@
 
 
 #define MAX_QGRP_NUM_TYPE 7
+#define ICE_PPP_IPV4_PROTO	0x0021
+#define ICE_PPP_IPV6_PROTO	0x0057
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -96,6 +98,18 @@
 	ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
 	ICE_INSET_DMAC | ICE_INSET_ETHERTYPE | ICE_INSET_PPPOE_SESSION | \
 	ICE_INSET_PPPOE_PROTO)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV4_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV4_UDP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6 ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_TCP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_TCP)
+#define ICE_SW_INSET_MAC_PPPOE_IPV6_UDP ( \
+	ICE_SW_INSET_MAC_PPPOE | ICE_SW_INSET_MAC_IPV6_UDP)
 #define ICE_SW_INSET_MAC_IPV4_ESP ( \
 	ICE_SW_INSET_MAC_IPV4 | ICE_INSET_ESP_SPI)
 #define ICE_SW_INSET_MAC_IPV6_ESP ( \
@@ -155,10 +169,6 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -167,6 +177,30 @@ ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -255,10 +289,6 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
 			ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
-	{pattern_eth_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
-	{pattern_eth_vlan_pppoed,
-			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_pppoes,
 			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes,
@@ -267,6 +297,30 @@ ice_pattern_match_item ice_switch_pattern_perm[] = {
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
 	{pattern_eth_vlan_pppoes_proto,
 			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv4_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV4_UDP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_tcp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_TCP, ICE_INSET_NONE},
+	{pattern_eth_vlan_pppoes_ipv6_udp,
+			ICE_SW_INSET_MAC_PPPOE_IPV6_UDP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_esp,
 			ICE_SW_INSET_MAC_IPV4_ESP, ICE_INSET_NONE},
 	{pattern_eth_ipv4_udp_esp,
@@ -417,13 +471,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
-	uint16_t j, t = 0;
+	bool pppoe_elem_valid = 0;
+	bool pppoe_patt_valid = 0;
+	bool pppoe_prot_valid = 0;
 	bool profile_rule = 0;
 	bool tunnel_valid = 0;
-	bool pppoe_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
+	bool tcp_valiad = 0;
+	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
 			RTE_FLOW_ITEM_TYPE_END; item++) {
@@ -753,6 +810,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
+			tcp_valiad = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -970,6 +1028,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					"Invalid pppoe item");
 				return 0;
 			}
+			pppoe_patt_valid = 1;
 			if (pppoe_spec && pppoe_mask) {
 				/* Check pppoe mask and update input set */
 				if (pppoe_mask->length ||
@@ -990,7 +1049,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					input_set |= ICE_INSET_PPPOE_SESSION;
 				}
 				t++;
-				pppoe_valid = 1;
+				pppoe_elem_valid = 1;
 			}
 			break;
 
@@ -1011,7 +1070,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				return 0;
 			}
 			if (pppoe_proto_spec && pppoe_proto_mask) {
-				if (pppoe_valid)
+				if (pppoe_elem_valid)
 					t--;
 				list[t].type = ICE_PPPOE;
 				if (pppoe_proto_mask->proto_id) {
@@ -1020,9 +1079,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
+
+					pppoe_prot_valid = 1;
 				}
+				if ((pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV4_PROTO) &&
+					(pppoe_proto_mask->proto_id &
+					pppoe_proto_spec->proto_id) !=
+					    CPU_TO_BE16(ICE_PPP_IPV6_PROTO))
+					*tun_type = ICE_SW_TUN_PPPOE_PAY;
+				else
+					*tun_type = ICE_SW_TUN_PPPOE;
 				t++;
 			}
+
 			break;
 
 		case RTE_FLOW_ITEM_TYPE_ESP:
@@ -1233,6 +1304,23 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}
 
+	if (pppoe_patt_valid && !pppoe_prot_valid) {
+		if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
+		else if (ipv6_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
+		else if (ipv4_valiad)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
+		else
+			*tun_type = ICE_SW_TUN_PPPOE;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1453,9 +1541,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			tun_type = ICE_SW_TUN_VXLAN;
 		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
 			tun_type = ICE_SW_TUN_NVGRE;
-		if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
-				item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
-			tun_type = ICE_SW_TUN_PPPOE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread
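
As an illustration (not part of the patch), a rule of the kind enabled
above could be created from testpmd roughly as follows; the port id,
addresses, L4 ports and queue index are hypothetical:

```
testpmd> flow create 0 ingress pattern eth / pppoes / ipv4 src is 192.168.1.1 / udp src is 25 dst is 23 / end actions queue index 2 / end
```

PPPoE session traffic is then steered by its inner IPv4 source address
and UDP ports, which this patch adds to the supported input set.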

* [dpdk-stable] [PATCH v6 2/5] net/ice: fix tunnel type for switch rule
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 1/5] net/ice: add support more PPPoE packet " Wei Zhao
@ 2020-07-03  6:19           ` Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
                             ` (3 subsequent siblings)
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the protocol type of IPv4 packets:
the tunnel type needs to be updated when NVGRE is in the payload.

Fixes: 6bc7628c5e0b ("net/ice: change default tunnel type")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 12a015f87..dae0d470b 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -29,6 +29,7 @@
 #define MAX_QGRP_NUM_TYPE 7
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
+#define ICE_IPV4_PROTO_NVGRE	0x002F
 
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
@@ -633,6 +634,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
 				}
+				if ((ipv4_spec->hdr.next_proto_id &
+					ipv4_mask->hdr.next_proto_id) ==
+					ICE_IPV4_PROTO_NVGRE)
+					*tun_type = ICE_SW_TUN_AND_NON_TUN;
 				if (ipv4_mask->hdr.type_of_service) {
 					list[t].h_u.ipv4_hdr.tos =
 						ipv4_spec->hdr.type_of_service;
@@ -1532,7 +1537,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
 	enum ice_sw_tunnel_type tun_type =
-		ICE_SW_TUN_AND_NON_TUN;
+			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread
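
The essence of the added check can be modeled outside the driver as
follows; this is a simplified sketch in plain Python (not the driver's
API), with the constant value taken from the patch:

```python
ICE_IPV4_PROTO_NVGRE = 0x2F  # IP protocol number for GRE, added by the patch

def ipv4_tunnel_type(next_proto_spec: int, next_proto_mask: int,
                     tun_type: str) -> str:
    """Model of the check: when the masked IPv4 next-protocol field
    selects GRE, the rule must match both tunnel and non-tunnel packets."""
    if (next_proto_spec & next_proto_mask) == ICE_IPV4_PROTO_NVGRE:
        return "ICE_SW_TUN_AND_NON_TUN"
    return tun_type
```

Any other masked protocol value leaves the previously chosen tunnel
type untouched, matching the behavior of the diff below.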

* [dpdk-stable] [PATCH v6 3/5] net/ice: support switch flow for specific L4 type
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 1/5] net/ice: add support more PPPoE packet " Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
@ 2020-07-03  6:19           ` Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 4/5] net/ice: add input set byte number check Wei Zhao
                             ` (2 subsequent siblings)
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds more specific tunnel types for IPv4/IPv6 packets:
it enables the TCP/UDP layer of IPv4/IPv6 as L4 payload, but without
the L4 dst/src port numbers in the input set of the switch filter rule.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index dae0d470b..afdc116ee 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -475,8 +475,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
-	bool profile_rule = 0;
 	bool tunnel_valid = 0;
+	bool profile_rule = 0;
+	bool nvgre_valid = 0;
+	bool vxlan_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -924,7 +926,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid VXLAN item");
 				return 0;
 			}
-
+			vxlan_valid = 1;
 			tunnel_valid = 1;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
@@ -961,6 +963,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
+			nvgre_valid = 1;
 			tunnel_valid = 1;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
@@ -1326,6 +1329,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (vxlan_valid)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (nvgre_valid)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1542,10 +1560,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1


^ permalink raw reply	[flat|nested] 44+ messages in thread
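
The fallback selection this patch appends to ice_switch_inset_get()
can be sketched as a plain decision function (an illustrative model,
not driver code; the returned strings name the driver's enum values):

```python
def default_tunnel_type(vxlan_valid: bool, nvgre_valid: bool,
                        ipv4_valid: bool, ipv6_valid: bool,
                        tcp_valid: bool, udp_valid: bool) -> str:
    """Pick a specific tunnel type when none was chosen while walking
    the pattern items (i.e. tun_type is still ICE_NON_TUN)."""
    if vxlan_valid:
        return "ICE_SW_TUN_VXLAN"
    if nvgre_valid:
        return "ICE_SW_TUN_NVGRE"
    if ipv4_valid and tcp_valid:
        return "ICE_SW_IPV4_TCP"
    if ipv4_valid and udp_valid:
        return "ICE_SW_IPV4_UDP"
    if ipv6_valid and tcp_valid:
        return "ICE_SW_IPV6_TCP"
    if ipv6_valid and udp_valid:
        return "ICE_SW_IPV6_UDP"
    return "ICE_NON_TUN"
```

Moving this decision after pattern parsing is what allows the early
per-item VXLAN/NVGRE assignments to be removed from
ice_switch_parse_pattern_action() in the diff below.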

* [dpdk-stable] [PATCH v6 4/5] net/ice: add input set byte number check
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
                             ` (2 preceding siblings ...)
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
@ 2020-07-03  6:19           ` Wei Zhao
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 5/5] net/ice: fix typo Wei Zhao
  2020-07-03 13:46           ` [dpdk-stable] [PATCH v6 0/5] enable more PPPoE packet type for switch Zhang, Qi Z
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

This patch adds a check on the total input set byte number,
as the hardware limits the total input set to 32 bytes.

Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 43 +++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index afdc116ee..9db89a307 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -26,7 +26,8 @@
 #include "ice_dcf_ethdev.h"
 
 
-#define MAX_QGRP_NUM_TYPE 7
+#define MAX_QGRP_NUM_TYPE	7
+#define MAX_INPUT_SET_BYTE	32
 #define ICE_PPP_IPV4_PROTO	0x0021
 #define ICE_PPP_IPV6_PROTO	0x0057
 #define ICE_IPV4_PROTO_NVGRE	0x002F
@@ -472,6 +473,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	const struct rte_flow_item_l2tpv3oip *l2tp_spec, *l2tp_mask;
 	const struct rte_flow_item_pfcp *pfcp_spec, *pfcp_mask;
 	uint64_t input_set = ICE_INSET_NONE;
+	uint16_t input_set_byte = 0;
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
@@ -541,6 +543,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->src_addr[j] =
 						eth_mask->src.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 					if (eth_mask->dst.addr_bytes[j]) {
 						h->dst_addr[j] =
@@ -548,6 +551,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						m->dst_addr[j] =
 						eth_mask->dst.addr_bytes[j];
 						i = 1;
+						input_set_byte++;
 					}
 				}
 				if (i)
@@ -558,6 +562,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						eth_spec->type;
 					list[t].m_u.ethertype.ethtype_id =
 						eth_mask->type;
+					input_set_byte += 2;
 					t++;
 				}
 			}
@@ -617,24 +622,28 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.src_addr;
 					list[t].m_u.ipv4_hdr.src_addr =
 						ipv4_mask->hdr.src_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.dst_addr) {
 					list[t].h_u.ipv4_hdr.dst_addr =
 						ipv4_spec->hdr.dst_addr;
 					list[t].m_u.ipv4_hdr.dst_addr =
 						ipv4_mask->hdr.dst_addr;
+					input_set_byte += 2;
 				}
 				if (ipv4_mask->hdr.time_to_live) {
 					list[t].h_u.ipv4_hdr.time_to_live =
 						ipv4_spec->hdr.time_to_live;
 					list[t].m_u.ipv4_hdr.time_to_live =
 						ipv4_mask->hdr.time_to_live;
+					input_set_byte++;
 				}
 				if (ipv4_mask->hdr.next_proto_id) {
 					list[t].h_u.ipv4_hdr.protocol =
 						ipv4_spec->hdr.next_proto_id;
 					list[t].m_u.ipv4_hdr.protocol =
 						ipv4_mask->hdr.next_proto_id;
+					input_set_byte++;
 				}
 				if ((ipv4_spec->hdr.next_proto_id &
 					ipv4_mask->hdr.next_proto_id) ==
@@ -645,6 +654,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv4_spec->hdr.type_of_service;
 					list[t].m_u.ipv4_hdr.tos =
 						ipv4_mask->hdr.type_of_service;
+					input_set_byte++;
 				}
 				t++;
 			}
@@ -722,12 +732,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.src_addr[j];
 						s->src_addr[j] =
 						ipv6_mask->hdr.src_addr[j];
+						input_set_byte++;
 					}
 					if (ipv6_mask->hdr.dst_addr[j]) {
 						f->dst_addr[j] =
 						ipv6_spec->hdr.dst_addr[j];
 						s->dst_addr[j] =
 						ipv6_mask->hdr.dst_addr[j];
+						input_set_byte++;
 					}
 				}
 				if (ipv6_mask->hdr.proto) {
@@ -735,12 +747,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						ipv6_spec->hdr.proto;
 					s->next_hdr =
 						ipv6_mask->hdr.proto;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.hop_limits) {
 					f->hop_limit =
 						ipv6_spec->hdr.hop_limits;
 					s->hop_limit =
 						ipv6_mask->hdr.hop_limits;
+					input_set_byte++;
 				}
 				if (ipv6_mask->hdr.vtc_flow &
 						rte_cpu_to_be_32
@@ -758,6 +772,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 							RTE_IPV6_HDR_TC_MASK) >>
 							RTE_IPV6_HDR_TC_SHIFT;
 					s->be_ver_tc_flow = CPU_TO_BE32(vtf.u.val);
+					input_set_byte += 4;
 				}
 				t++;
 			}
@@ -803,14 +818,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						udp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						udp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (udp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						udp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						udp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
-						t++;
+				t++;
 			}
 			break;
 
@@ -855,12 +872,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						tcp_spec->hdr.src_port;
 					list[t].m_u.l4_hdr.src_port =
 						tcp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (tcp_mask->hdr.dst_port) {
 					list[t].h_u.l4_hdr.dst_port =
 						tcp_spec->hdr.dst_port;
 					list[t].m_u.l4_hdr.dst_port =
 						tcp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -900,12 +919,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						sctp_spec->hdr.src_port;
 					list[t].m_u.sctp_hdr.src_port =
 						sctp_mask->hdr.src_port;
+					input_set_byte += 2;
 				}
 				if (sctp_mask->hdr.dst_port) {
 					list[t].h_u.sctp_hdr.dst_port =
 						sctp_spec->hdr.dst_port;
 					list[t].m_u.sctp_hdr.dst_port =
 						sctp_mask->hdr.dst_port;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -943,6 +964,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						vxlan_mask->vni[0];
 					input_set |=
 						ICE_INSET_TUN_VXLAN_VNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -980,6 +1002,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 						nvgre_mask->tni[0];
 					input_set |=
 						ICE_INSET_TUN_NVGRE_TNI;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1008,6 +1031,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
 					input_set |= ICE_INSET_VLAN_OUTER;
+					input_set_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
 					list[t].h_u.vlan_hdr.type =
@@ -1015,6 +1039,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.vlan_hdr.type =
 						vlan_mask->inner_type;
 					input_set |= ICE_INSET_ETHERTYPE;
+					input_set_byte += 2;
 				}
 				t++;
 			}
@@ -1055,6 +1080,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.session_id =
 						pppoe_mask->session_id;
 					input_set |= ICE_INSET_PPPOE_SESSION;
+					input_set_byte += 2;
 				}
 				t++;
 				pppoe_elem_valid = 1;
@@ -1087,7 +1113,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					list[t].m_u.pppoe_hdr.ppp_prot_id =
 						pppoe_proto_mask->proto_id;
 					input_set |= ICE_INSET_PPPOE_PROTO;
-
+					input_set_byte += 2;
 					pppoe_prot_valid = 1;
 				}
 				if ((pppoe_proto_mask->proto_id &
@@ -1144,6 +1170,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.esp_hdr.spi =
 					esp_mask->hdr.spi;
 				input_set |= ICE_INSET_ESP_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1200,6 +1227,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.ah_hdr.spi =
 					ah_mask->spi;
 				input_set |= ICE_INSET_AH_SPI;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1239,6 +1267,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				list[t].m_u.l2tpv3_sess_hdr.session_id =
 					l2tp_mask->session_id;
 				input_set |= ICE_INSET_L2TPV3OIP_SESSION_ID;
+				input_set_byte += 4;
 				t++;
 			}
 
@@ -1344,6 +1373,14 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
+	if (input_set_byte > MAX_INPUT_SET_BYTE) {
+		rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"too much input set");
+		return -ENOTSUP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
-- 
2.19.1



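[Editorial note: the hunk above caps the number of match bytes a switch rule may consume. A minimal standalone sketch of that accounting pattern follows — `MAX_INPUT_SET_BYTE`'s value of 32 and the `input_set_fits` helper are assumptions for illustration; the real constant and counting live in the ice driver sources.]

```c
#include <stdint.h>

/* Assumed cap mirroring MAX_INPUT_SET_BYTE in the patch; the real value
 * is defined in the ice driver headers, not here. */
#define MAX_INPUT_SET_BYTE 32

/* Hypothetical helper: tally the bytes a rule's matched fields consume,
 * as the patch does with input_set_byte, and reject oversized rules
 * before building the lookup list (the patch returns -ENOTSUP via
 * rte_flow_error_set at that point). */
static int
input_set_fits(uint16_t port_fields, uint16_t spi_fields)
{
	uint16_t input_set_byte = 0;

	/* L4 ports, VLAN TCI, VNI/TNI: 2 bytes each in the patch */
	input_set_byte += port_fields * 2;
	/* ESP/AH SPI, L2TPv3 session ID: 4 bytes each in the patch */
	input_set_byte += spi_fields * 4;

	return input_set_byte <= MAX_INPUT_SET_BYTE;
}
```

Under this sketch, a rule matching two ports and one SPI (8 bytes) passes, while one matching twenty 2-byte fields (40 bytes) is rejected.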
* [dpdk-stable] [PATCH v6 5/5] net/ice: fix typo
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
                             ` (3 preceding siblings ...)
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 4/5] net/ice: add input set byte number check Wei Zhao
@ 2020-07-03  6:19           ` Wei Zhao
  2020-07-03 13:46           ` [dpdk-stable] [PATCH v6 0/5] enable more PPPoE packet type for switch Zhang, Qi Z
  5 siblings, 0 replies; 44+ messages in thread
From: Wei Zhao @ 2020-07-03  6:19 UTC (permalink / raw)
  To: dev; +Cc: stable, qi.z.zhang, nannan.lu, Wei Zhao

fix typo of "valid".

Fixes: 8f5d8e74fb38 ("net/ice: support flow for AH ESP and L2TP")
Fixes: 66ff8851792f ("net/ice: support ESP/AH/L2TP")
Fixes: 45b53ed3701d ("net/ice: support IPv6 NAT-T")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
---
 drivers/net/ice/ice_switch_filter.c | 76 ++++++++++++++---------------
 1 file changed, 38 insertions(+), 38 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index 9db89a307..c4b00b6a2 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -481,10 +481,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
 	bool vxlan_valid = 0;
-	bool ipv6_valiad = 0;
-	bool ipv4_valiad = 0;
-	bool udp_valiad = 0;
-	bool tcp_valiad = 0;
+	bool ipv6_valid = 0;
+	bool ipv4_valid = 0;
+	bool udp_valid = 0;
+	bool tcp_valid = 0;
 	uint16_t j, t = 0;
 
 	for (item = pattern; item->type !=
@@ -571,7 +571,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			ipv4_spec = item->spec;
 			ipv4_mask = item->mask;
-			ipv4_valiad = 1;
+			ipv4_valid = 1;
 			if (ipv4_spec && ipv4_mask) {
 				/* Check IPv4 mask and update input set */
 				if (ipv4_mask->hdr.version_ihl ||
@@ -663,7 +663,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ipv6_spec = item->spec;
 			ipv6_mask = item->mask;
-			ipv6_valiad = 1;
+			ipv6_valid = 1;
 			if (ipv6_spec && ipv6_mask) {
 				if (ipv6_mask->hdr.payload_len) {
 					rte_flow_error_set(error, EINVAL,
@@ -781,7 +781,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			udp_spec = item->spec;
 			udp_mask = item->mask;
-			udp_valiad = 1;
+			udp_valid = 1;
 			if (udp_spec && udp_mask) {
 				/* Check UDP mask and update input set*/
 				if (udp_mask->hdr.dgram_len ||
@@ -834,7 +834,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			tcp_spec = item->spec;
 			tcp_mask = item->mask;
-			tcp_valiad = 1;
+			tcp_valid = 1;
 			if (tcp_spec && tcp_mask) {
 				/* Check TCP mask and update input set */
 				if (tcp_mask->hdr.sent_seq ||
@@ -1152,16 +1152,16 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 
 			if (!esp_spec && !esp_mask && !input_set) {
 				profile_rule = 1;
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_PROFID_IPV6_ESP;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (esp_spec && esp_mask &&
 						esp_mask->hdr.spi){
-				if (udp_valiad)
+				if (udp_valid)
 					list[t].type = ICE_NAT_T;
 				else
 					list[t].type = ICE_ESP;
@@ -1175,13 +1175,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type = ICE_SW_TUN_IPV6_NAT_T;
-				else if (ipv4_valiad && udp_valiad)
+				else if (ipv4_valid && udp_valid)
 					*tun_type = ICE_SW_TUN_IPV4_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_IPV6_ESP;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type = ICE_SW_TUN_IPV4_ESP;
 			}
 			break;
@@ -1212,12 +1212,12 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 
 			if (!ah_spec && !ah_mask && !input_set) {
 				profile_rule = 1;
-				if (ipv6_valiad && udp_valiad)
+				if (ipv6_valid && udp_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_NAT_T;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_PROFID_IPV6_AH;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (ah_spec && ah_mask &&
 						ah_mask->spi){
@@ -1232,11 +1232,11 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (udp_valiad)
+				if (udp_valid)
 					return 0;
-				else if (ipv6_valiad)
+				else if (ipv6_valid)
 					*tun_type = ICE_SW_TUN_IPV6_AH;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type = ICE_SW_TUN_IPV4_AH;
 			}
 			break;
@@ -1254,10 +1254,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!l2tp_spec && !l2tp_mask && !input_set) {
-				if (ipv6_valiad)
+				if (ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_MAC_IPV6_L2TPV3;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					return 0;
 			} else if (l2tp_spec && l2tp_mask &&
 						l2tp_mask->session_id){
@@ -1272,10 +1272,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			}
 
 			if (!profile_rule) {
-				if (ipv6_valiad)
+				if (ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_IPV6_L2TPV3;
-				else if (ipv4_valiad)
+				else if (ipv4_valid)
 					*tun_type =
 					ICE_SW_TUN_IPV4_L2TPV3;
 			}
@@ -1309,7 +1309,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 				}
 				if (pfcp_mask->s_field &&
 					pfcp_spec->s_field == 0x01 &&
-					ipv6_valiad)
+					ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_PFCP_SESSION;
 				else if (pfcp_mask->s_field &&
@@ -1318,7 +1318,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					ICE_SW_TUN_PROFID_IPV4_PFCP_SESSION;
 				else if (pfcp_mask->s_field &&
 					!pfcp_spec->s_field &&
-					ipv6_valiad)
+					ipv6_valid)
 					*tun_type =
 					ICE_SW_TUN_PROFID_IPV6_PFCP_NODE;
 				else if (pfcp_mask->s_field &&
@@ -1342,17 +1342,17 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	}
 
 	if (pppoe_patt_valid && !pppoe_prot_valid) {
-		if (ipv6_valiad && udp_valiad)
+		if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
-		else if (ipv6_valiad && tcp_valiad)
+		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
-		else if (ipv4_valiad && udp_valiad)
+		else if (ipv4_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4_UDP;
-		else if (ipv4_valiad && tcp_valiad)
+		else if (ipv4_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4_TCP;
-		else if (ipv6_valiad)
+		else if (ipv6_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6;
-		else if (ipv4_valiad)
+		else if (ipv4_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV4;
 		else
 			*tun_type = ICE_SW_TUN_PPPOE;
@@ -1363,13 +1363,13 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_VXLAN;
 		else if (nvgre_valid)
 			*tun_type = ICE_SW_TUN_NVGRE;
-		else if (ipv4_valiad && tcp_valiad)
+		else if (ipv4_valid && tcp_valid)
 			*tun_type = ICE_SW_IPV4_TCP;
-		else if (ipv4_valiad && udp_valiad)
+		else if (ipv4_valid && udp_valid)
 			*tun_type = ICE_SW_IPV4_UDP;
-		else if (ipv6_valiad && tcp_valiad)
+		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_IPV6_TCP;
-		else if (ipv6_valiad && udp_valiad)
+		else if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_IPV6_UDP;
 	}
 
-- 
2.19.1



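[Editorial note: the tunnel-type selection the patch touches is an if/else ladder over validity flags collected while walking the pattern. A simplified stand-alone sketch of that ladder for the PPPoE case follows — the enum values here are placeholders, not the driver's real `ICE_SW_TUN_*` identifiers.]

```c
#include <stdbool.h>

/* Placeholder stand-ins for the driver's ICE_SW_TUN_PPPOE_* enum values. */
enum tun_type {
	TUN_PPPOE,
	TUN_PPPOE_IPV4,
	TUN_PPPOE_IPV4_TCP,
	TUN_PPPOE_IPV4_UDP,
	TUN_PPPOE_IPV6,
	TUN_PPPOE_IPV6_TCP,
	TUN_PPPOE_IPV6_UDP,
};

/* Pick the most specific PPPoE tunnel type from the per-header validity
 * flags the parser set (ipv4_valid, udp_valid, ...), most specific
 * combination first, mirroring the ladder in the patch. */
static enum tun_type
pppoe_tun_type(bool ipv4_valid, bool ipv6_valid,
	       bool udp_valid, bool tcp_valid)
{
	if (ipv6_valid && udp_valid)
		return TUN_PPPOE_IPV6_UDP;
	if (ipv6_valid && tcp_valid)
		return TUN_PPPOE_IPV6_TCP;
	if (ipv4_valid && udp_valid)
		return TUN_PPPOE_IPV4_UDP;
	if (ipv4_valid && tcp_valid)
		return TUN_PPPOE_IPV4_TCP;
	if (ipv6_valid)
		return TUN_PPPOE_IPV6;
	if (ipv4_valid)
		return TUN_PPPOE_IPV4;
	return TUN_PPPOE;
}
```

Ordering matters: the two-flag combinations must be tested before the single-flag ones, or an IPv6+UDP rule would be classified as plain IPv6.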
* Re: [dpdk-stable] [PATCH v6 0/5] enable more PPPoE packet type for switch
  2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
                             ` (4 preceding siblings ...)
  2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 5/5] net/ice: fix typo Wei Zhao
@ 2020-07-03 13:46           ` Zhang, Qi Z
  5 siblings, 0 replies; 44+ messages in thread
From: Zhang, Qi Z @ 2020-07-03 13:46 UTC (permalink / raw)
  To: Zhao1, Wei, dev; +Cc: stable, Lu, Nannan



> -----Original Message-----
> From: Zhao1, Wei <wei.zhao1@intel.com>
> Sent: Friday, July 3, 2020 2:20 PM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Zhang, Qi Z <qi.z.zhang@intel.com>; Lu, Nannan
> <nannan.lu@intel.com>
> Subject: [PATCH v6 0/5] enable more PPPoE packet type for switch
> 
> 1. add more support for switch parser of pppoe packet.
> 2. add check for NVGRE protocol
> 3. support flow for specific L4 type
> 4. add input set byte number check
> 5. fix typo
> 
> This patchset is based on:
> [1] https://patches.dpdk.org/cover/70762/ : net/ice: base code update
> 
> Depends-on: series-10300
> 
> v2:
> fix bug in patch add redirect support for VSI list rule.
> add information in release note.
> 
> v3:
> add input set byte number check
> code update as comment of code style
> 
> v4:
> fix typo in patch
> 
> v5:
> add more valid flag
> 
> v6:
> rebase for code merge
> 
> Wei Zhao (5):
>   net/ice: add support more PPPoE packeat type for switch
>   net/ice: fix tunnel type for switch rule
>   net/ice: support switch flow for specific L4 type
>   net/ice: add input set byte number check
>   net/ice: fix typo
> 
>  doc/guides/rel_notes/release_20_08.rst |   1 +
>  drivers/net/ice/ice_switch_filter.c    | 241 ++++++++++++++++++++-----
>  2 files changed, 192 insertions(+), 50 deletions(-)
> 
> Tested-by: Nannan Lu <nannan.lu@intel.com>

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel

Thanks
Qi


Thread overview: 44+ messages
     [not found] <20200605074031.16231-1-wei.zhao1@intel.com>
2020-06-05  7:40 ` [dpdk-stable] [PATCH 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
2020-06-05  7:40 ` [dpdk-stable] [PATCH 3/4] net/ice: add check for NVGRE protocol Wei Zhao
2020-06-05  7:40 ` [dpdk-stable] [PATCH 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-17  6:14 ` [dpdk-stable] [PATCH v2 0/4] enable more PPPoE packet type for switch Wei Zhao
2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 1/4] net/ice: add support " Wei Zhao
2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 2/4] net/ice: add redirect support for VSI list rule Wei Zhao
2020-06-22 15:25     ` Zhang, Qi Z
2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 3/4] net/ice: add check for NVGRE protocol Wei Zhao
2020-06-22 15:49     ` Zhang, Qi Z
2020-06-23  1:11       ` Zhao1, Wei
2020-06-17  6:14   ` [dpdk-stable] [PATCH v2 4/4] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-22 15:36     ` Zhang, Qi Z
2020-06-23  1:12       ` Zhao1, Wei
2020-06-28  3:21   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-28  3:21     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
2020-06-28  5:01   ` [dpdk-stable] [PATCH v3 0/4] enable more PPPoE packet type for switch Wei Zhao
2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 1/4] net/ice: add support " Wei Zhao
2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-29  1:55       ` Zhang, Qi Z
2020-06-29  2:01         ` Zhao1, Wei
2020-06-28  5:01     ` [dpdk-stable] [PATCH v3 4/4] net/ice: add input set byte number check Wei Zhao
2020-06-28  5:28     ` [dpdk-stable] [PATCH v4 0/4] enable more PPPoE packet type for switch Wei Zhao
2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 1/4] net/ice: add support " Wei Zhao
2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 2/4] net/ice: fix tunnel type for switch rule Wei Zhao
2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 3/4] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-28  5:28       ` [dpdk-stable] [PATCH v4 4/4] net/ice: add input set byte number check Wei Zhao
2020-06-29  5:10       ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Wei Zhao
2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 1/5] net/ice: add support " Wei Zhao
2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 4/5] net/ice: add input set byte number check Wei Zhao
2020-06-29  5:10         ` [dpdk-stable] [PATCH v5 5/5] net/ice: fix typo Wei Zhao
2020-07-03  2:47         ` [dpdk-stable] [PATCH v5 0/5] enable more PPPoE packet type for switch Lu, Nannan
2020-07-03  6:19         ` [dpdk-stable] [PATCH v6 " Wei Zhao
2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 1/5] net/ice: add support more PPPoE packeat " Wei Zhao
2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 2/5] net/ice: fix tunnel type for switch rule Wei Zhao
2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 3/5] net/ice: support switch flow for specific L4 type Wei Zhao
2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 4/5] net/ice: add input set byte number check Wei Zhao
2020-07-03  6:19           ` [dpdk-stable] [PATCH v6 5/5] net/ice: fix typo Wei Zhao
2020-07-03 13:46           ` [dpdk-stable] [PATCH v6 0/5] enable more PPPoE packet type for switch Zhang, Qi Z