From: Junfeng Guo
To: qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: dev@dpdk.org, haiyue.wang@intel.com, yuying.zhang@intel.com, junfeng.guo@intel.com, Wei Zhao
Date: Fri, 15 Jan 2021 14:36:42 +0000
Message-Id: <20210115143642.814290-1-junfeng.guo@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210115141420.731708-1-junfeng.guo@intel.com>
References: <20210115141420.731708-1-junfeng.guo@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH v2] net/ice: enable QinQ filter for switch

Enable double VLAN support for switch QinQ filtering.

Signed-off-by: Wei Zhao
Signed-off-by: Haiyue Wang
Signed-off-by: Junfeng Guo
---
 doc/guides/rel_notes/release_21_02.rst |   4 +
 drivers/net/ice/ice_generic_flow.c     |  15 ++++
 drivers/net/ice/ice_generic_flow.h     |   1 +
 drivers/net/ice/ice_switch_filter.c    | 104 ++++++++++++++++++++++---
 4 files changed, 113 insertions(+), 11 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index db40c6df3..e6049a9ef 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -69,6 +69,10 @@ New Features

   * Added GTP PDU session container matching and raw encap/decap.

+* **Updated Intel ice driver.**
+
+  * Added Double VLAN support for DCF switch QinQ filtering.
+

 Removed Items
 -------------
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 4313aae18..454650f6a 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1455,6 +1455,14 @@ enum rte_flow_item_type pattern_eth_qinq_pppoes[] = {
 	RTE_FLOW_ITEM_TYPE_PPPOES,
 	RTE_FLOW_ITEM_TYPE_END,
 };
+enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[] = {
+	RTE_FLOW_ITEM_TYPE_ETH,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_VLAN,
+	RTE_FLOW_ITEM_TYPE_PPPOES,
+	RTE_FLOW_ITEM_TYPE_PPPOE_PROTO_ID,
+	RTE_FLOW_ITEM_TYPE_END,
+};
 enum rte_flow_item_type pattern_eth_pppoes_ipv4[] = {
 	RTE_FLOW_ITEM_TYPE_ETH,
 	RTE_FLOW_ITEM_TYPE_PPPOES,
@@ -2100,29 +2108,36 @@ static struct ice_ptype_match ice_ptype_map[] = {
 	{pattern_eth_ipv6_pfcp,			ICE_MAC_IPV6_PFCP_SESSION},
 	{pattern_ethertype,			ICE_PTYPE_MAC_PAY},
 	{pattern_ethertype_vlan,		ICE_PTYPE_MAC_PAY},
+	{pattern_ethertype_qinq,		ICE_PTYPE_MAC_PAY},
 	{pattern_eth_arp,			ICE_PTYPE_MAC_PAY},
 	{pattern_eth_vlan_ipv4,			ICE_PTYPE_IPV4_PAY},
+	{pattern_eth_qinq_ipv4,			ICE_PTYPE_IPV4_PAY},
 	{pattern_eth_vlan_ipv4_udp,		ICE_PTYPE_IPV4_UDP_PAY},
 	{pattern_eth_vlan_ipv4_tcp,		ICE_PTYPE_IPV4_TCP_PAY},
 	{pattern_eth_vlan_ipv4_sctp,		ICE_PTYPE_IPV4_SCTP_PAY},
 	{pattern_eth_vlan_ipv6,			ICE_PTYPE_IPV6_PAY},
+	{pattern_eth_qinq_ipv6,			ICE_PTYPE_IPV6_PAY},
 	{pattern_eth_vlan_ipv6_udp,		ICE_PTYPE_IPV6_UDP_PAY},
 	{pattern_eth_vlan_ipv6_tcp,		ICE_PTYPE_IPV6_TCP_PAY},
 	{pattern_eth_vlan_ipv6_sctp,		ICE_PTYPE_IPV6_SCTP_PAY},
 	{pattern_eth_pppoes,			ICE_MAC_PPPOE_PAY},
 	{pattern_eth_vlan_pppoes,		ICE_MAC_PPPOE_PAY},
+	{pattern_eth_qinq_pppoes,		ICE_MAC_PPPOE_PAY},
 	{pattern_eth_pppoes_proto,		ICE_MAC_PPPOE_PAY},
 	{pattern_eth_vlan_pppoes_proto,		ICE_MAC_PPPOE_PAY},
+	{pattern_eth_qinq_pppoes_proto,		ICE_MAC_PPPOE_PAY},
 	{pattern_eth_pppoes_ipv4,		ICE_MAC_PPPOE_IPV4_PAY},
 	{pattern_eth_pppoes_ipv4_udp,		ICE_MAC_PPPOE_IPV4_UDP_PAY},
 	{pattern_eth_pppoes_ipv4_tcp,		ICE_MAC_PPPOE_IPV4_TCP},
 	{pattern_eth_vlan_pppoes_ipv4,		ICE_MAC_PPPOE_IPV4_PAY},
+	{pattern_eth_qinq_pppoes_ipv4,		ICE_MAC_PPPOE_IPV4_PAY},
 	{pattern_eth_vlan_pppoes_ipv4_tcp,	ICE_MAC_PPPOE_IPV4_TCP},
 	{pattern_eth_vlan_pppoes_ipv4_udp,	ICE_MAC_PPPOE_IPV4_UDP_PAY},
 	{pattern_eth_pppoes_ipv6,		ICE_MAC_PPPOE_IPV6_PAY},
 	{pattern_eth_pppoes_ipv6_udp,		ICE_MAC_PPPOE_IPV6_UDP_PAY},
 	{pattern_eth_pppoes_ipv6_tcp,		ICE_MAC_PPPOE_IPV6_TCP},
 	{pattern_eth_vlan_pppoes_ipv6,		ICE_MAC_PPPOE_IPV6_PAY},
+	{pattern_eth_qinq_pppoes_ipv6,		ICE_MAC_PPPOE_IPV6_PAY},
 	{pattern_eth_vlan_pppoes_ipv6_tcp,	ICE_MAC_PPPOE_IPV6_TCP},
 	{pattern_eth_vlan_pppoes_ipv6_udp,	ICE_MAC_PPPOE_IPV6_UDP_PAY},
 	{pattern_eth_ipv4_udp_vxlan_ipv4,	ICE_MAC_IPV4_TUN_IPV4_PAY},
diff --git a/drivers/net/ice/ice_generic_flow.h b/drivers/net/ice/ice_generic_flow.h
index 0dcb62080..18918769d 100644
--- a/drivers/net/ice/ice_generic_flow.h
+++ b/drivers/net/ice/ice_generic_flow.h
@@ -426,6 +426,7 @@ extern enum rte_flow_item_type pattern_eth_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_qinq_pppoes[];
+extern enum rte_flow_item_type pattern_eth_qinq_pppoes_proto[];
 extern enum rte_flow_item_type pattern_eth_pppoes_ipv4[];
 extern enum rte_flow_item_type pattern_eth_vlan_pppoes_ipv4[];
 extern enum rte_flow_item_type pattern_eth_qinq_pppoes_ipv4[];
diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index e5b7d5606..7bac77ecd 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -35,11 +35,15 @@
 #define ICE_SW_INSET_ETHER ( \
 	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
 #define ICE_SW_INSET_MAC_VLAN ( \
-		ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \
-		ICE_INSET_VLAN_OUTER)
+	ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE | \
+	ICE_INSET_VLAN_INNER)
+#define ICE_SW_INSET_MAC_QINQ ( \
+	ICE_SW_INSET_MAC_VLAN | ICE_INSET_VLAN_OUTER)
 #define ICE_SW_INSET_MAC_IPV4 ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
 	ICE_INSET_IPV4_PROTO | ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS)
+#define ICE_SW_INSET_MAC_QINQ_IPV4 ( \
+	ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV4)
 #define ICE_SW_INSET_MAC_IPV4_TCP ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
 	ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
@@ -52,6 +56,8 @@
 	ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
 	ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \
 	ICE_INSET_IPV6_NEXT_HDR)
+#define ICE_SW_INSET_MAC_QINQ_IPV6 ( \
+	ICE_SW_INSET_MAC_QINQ | ICE_SW_INSET_MAC_IPV6)
 #define ICE_SW_INSET_MAC_IPV6_TCP ( \
 	ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
 	ICE_INSET_IPV6_HOP_LIMIT | ICE_INSET_IPV6_TC | \
@@ -146,6 +152,8 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
 			ICE_SW_INSET_ETHER, ICE_INSET_NONE},
 	{pattern_ethertype_vlan,
 			ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE},
+	{pattern_ethertype_qinq,
+			ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE},
 	{pattern_eth_arp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv4,
@@ -226,6 +234,18 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = {
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv6_pfcp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv4,
+			ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv6,
+			ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes,
+			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_proto,
+			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
 };

 static struct
@@ -234,6 +254,8 @@ ice_pattern_match_item ice_switch_pattern_perm_list[] = {
 			ICE_SW_INSET_ETHER, ICE_INSET_NONE},
 	{pattern_ethertype_vlan,
 			ICE_SW_INSET_MAC_VLAN, ICE_INSET_NONE},
+	{pattern_ethertype_qinq,
+			ICE_SW_INSET_MAC_QINQ, ICE_INSET_NONE},
 	{pattern_eth_arp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv4,
@@ -314,6 +336,18 @@ ice_pattern_match_item ice_switch_pattern_perm_list[] = {
 			ICE_INSET_NONE, ICE_INSET_NONE},
 	{pattern_eth_ipv6_pfcp,
 			ICE_INSET_NONE, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv4,
+			ICE_SW_INSET_MAC_QINQ_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_ipv6,
+			ICE_SW_INSET_MAC_QINQ_IPV6, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes,
+			ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_proto,
+			ICE_SW_INSET_MAC_PPPOE_PROTO, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv4,
+			ICE_SW_INSET_MAC_PPPOE_IPV4, ICE_INSET_NONE},
+	{pattern_eth_qinq_pppoes_ipv6,
+			ICE_SW_INSET_MAC_PPPOE_IPV6, ICE_INSET_NONE},
 };

 static int
@@ -446,6 +480,8 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
+	bool inner_vlan_valid = 0;
+	bool outer_vlan_valid = 0;
 	bool tunnel_valid = 0;
 	bool profile_rule = 0;
 	bool nvgre_valid = 0;
@@ -992,23 +1028,40 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid VLAN item");
 				return 0;
 			}
+
+			if (!outer_vlan_valid &&
+			    (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+			     *tun_type == ICE_NON_TUN_QINQ))
+				outer_vlan_valid = 1;
+			else if (!inner_vlan_valid &&
+				 (*tun_type == ICE_SW_TUN_AND_NON_TUN_QINQ ||
+				  *tun_type == ICE_NON_TUN_QINQ))
+				inner_vlan_valid = 1;
+			else if (!inner_vlan_valid)
+				inner_vlan_valid = 1;
+
 			if (vlan_spec && vlan_mask) {
-				list[t].type = ICE_VLAN_OFOS;
+				if (outer_vlan_valid && !inner_vlan_valid) {
+					list[t].type = ICE_VLAN_EX;
+					input_set |= ICE_INSET_VLAN_OUTER;
+				} else if (inner_vlan_valid) {
+					list[t].type = ICE_VLAN_OFOS;
+					input_set |= ICE_INSET_VLAN_INNER;
+				}
+
 				if (vlan_mask->tci) {
 					list[t].h_u.vlan_hdr.vlan =
 						vlan_spec->tci;
 					list[t].m_u.vlan_hdr.vlan =
 						vlan_mask->tci;
-					input_set |= ICE_INSET_VLAN_OUTER;
 					input_set_byte += 2;
 				}
 				if (vlan_mask->inner_type) {
-					list[t].h_u.vlan_hdr.type =
-						vlan_spec->inner_type;
-					list[t].m_u.vlan_hdr.type =
-						vlan_mask->inner_type;
-					input_set |= ICE_INSET_ETHERTYPE;
-					input_set_byte += 2;
+					rte_flow_error_set(error, EINVAL,
+						RTE_FLOW_ERROR_TYPE_ITEM,
+						item,
+						"Invalid VLAN input set.");
+					return 0;
 				}
 				t++;
 			}
@@ -1310,8 +1363,27 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 		}
 	}

+	if (*tun_type == ICE_SW_TUN_PPPOE_PAY &&
+	    inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_PPPOE_PAY_QINQ;
+	else if (*tun_type == ICE_SW_TUN_PPPOE &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_PPPOE_QINQ;
+	else if (*tun_type == ICE_NON_TUN &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_NON_TUN_QINQ;
+	else if (*tun_type == ICE_SW_TUN_AND_NON_TUN &&
+		 inner_vlan_valid && outer_vlan_valid)
+		*tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ;
+
 	if (pppoe_patt_valid && !pppoe_prot_valid) {
-		if (ipv6_valid && udp_valid)
+		if (inner_vlan_valid && outer_vlan_valid && ipv4_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV4_QINQ;
+		else if (inner_vlan_valid && outer_vlan_valid && ipv6_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_IPV6_QINQ;
+		else if (inner_vlan_valid && outer_vlan_valid)
+			*tun_type = ICE_SW_TUN_PPPOE_QINQ;
+		else if (ipv6_valid && udp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_UDP;
 		else if (ipv6_valid && tcp_valid)
 			*tun_type = ICE_SW_TUN_PPPOE_IPV6_TCP;
@@ -1589,6 +1661,7 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 	uint16_t lkups_num = 0;
 	const struct rte_flow_item *item = pattern;
 	uint16_t item_num = 0;
+	uint16_t vlan_num = 0;
 	enum ice_sw_tunnel_type tun_type =
 			ICE_NON_TUN;
 	struct ice_pattern_match_item *pattern_match_item = NULL;
@@ -1604,6 +1677,10 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			if (eth_mask->type == UINT16_MAX)
 				tun_type = ICE_SW_TUN_AND_NON_TUN;
 		}
+
+		if (item->type == RTE_FLOW_ITEM_TYPE_VLAN)
+			vlan_num++;
+
 		/* reserve one more memory slot for ETH which may
 		 * consume 2 lookup items.
 		 */
@@ -1611,6 +1688,11 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 			item_num++;
 	}

+	if (vlan_num == 2 && tun_type == ICE_SW_TUN_AND_NON_TUN)
+		tun_type = ICE_SW_TUN_AND_NON_TUN_QINQ;
+	else if (vlan_num == 2)
+		tun_type = ICE_NON_TUN_QINQ;
+
 	list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
 	if (!list) {
 		rte_flow_error_set(error, EINVAL,
-- 
2.25.1
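
Usage note (illustrative only, not part of the patch): once this series is applied, an application can request a double VLAN (QinQ) switch rule through the generic rte_flow API by placing two VLAN items in the pattern; the switch parser treats the first item as the outer tag and the second as the inner tag. The helper name create_qinq_rule, the port id, the VLAN IDs 100/200 and the queue action below are placeholders chosen for the sketch (on a DCF port the action would more typically be a "vf id" action), so treat this as an assumption-laden example rather than the driver's reference usage.

/* Sketch: install an eth / vlan / vlan rule matching outer VLAN 100 and
 * inner VLAN 200, steering matched packets to queue 3. All values are
 * placeholders.
 */
#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_flow.h>

static struct rte_flow *
create_qinq_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	/* Outer (S-VLAN) and inner (C-VLAN) TCI values and 12-bit VID masks. */
	struct rte_flow_item_vlan outer_spec = { .tci = rte_cpu_to_be_16(100) };
	struct rte_flow_item_vlan outer_mask = { .tci = rte_cpu_to_be_16(0x0fff) };
	struct rte_flow_item_vlan inner_spec = { .tci = rte_cpu_to_be_16(200) };
	struct rte_flow_item_vlan inner_mask = { .tci = rte_cpu_to_be_16(0x0fff) };
	struct rte_flow_action_queue queue = { .index = 3 };

	/* Two VLAN items: with this patch the first maps to the outer tag
	 * (ICE_VLAN_EX) and the second to the inner tag (ICE_VLAN_OFOS).
	 * Note that inner_type is left unset; the patch rejects it for QinQ.
	 */
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,
		  .spec = &outer_spec, .mask = &outer_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN,
		  .spec = &inner_spec, .mask = &inner_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

The rough testpmd equivalent would be along the lines of:
flow create 0 ingress pattern eth / vlan tci is 100 / vlan tci is 200 / end actions queue index 3 / end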