From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wang, Ying A"
To: "Ye, Xiaolong"
CC: "Zhang, Qi Z", "dev@dpdk.org", "Yang, Qiming", "Zhao1, Wei"
Subject: Re: [dpdk-dev] [PATCH v4 5/5] net/ice: rework switch filter
Date: Wed, 16 Oct 2019 03:03:23 +0000
Message-ID: <44DE8E8A53B4014CA1985CEE86C07F2A0B9A7811@SHSMSX101.ccr.corp.intel.com>
References: <20190926185524.317595-2-ying.a.wang@intel.com>
 <20191014034211.293048-1-ying.a.wang@intel.com>
 <20191014034211.293048-6-ying.a.wang@intel.com>
 <20191016025501.GH3725@intel.com>
In-Reply-To: <20191016025501.GH3725@intel.com>
List-Id: DPDK patches and discussions
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

Hi Xiaolong,

I will fix the coding style warning and send v5.

Thanks
-Ying

> -----Original Message-----
> From: Ye, Xiaolong
> Sent: Wednesday, October 16, 2019 10:55 AM
> To: Wang, Ying A
> Cc: Zhang, Qi Z; dev@dpdk.org; Yang, Qiming; Zhao1, Wei
> Subject: Re: [PATCH v4 5/5] net/ice: rework switch filter
>
> Hi,
>
> Could you check the warning below in patchwork?
>
> http://mails.dpdk.org/archives/test-report/2019-October/102523.html
>
> Thanks,
> Xiaolong
>
> On 10/14, Ying Wang wrote:
> >From: Wei Zhao
> >
> >The patch reworks the packet processing engine's binary classifier
> >(switch) for the new framework. It also adds support for new packet
> >types, such as PPPoE, to the switch filter.
> >
> >Signed-off-by: Wei Zhao
> >Acked-by: Qi Zhang
> >---
> > doc/guides/rel_notes/release_19_11.rst |    1 +
> > drivers/net/ice/ice_switch_filter.c    | 1191 ++++++++++++++++++++++++++++++++
> > drivers/net/ice/ice_switch_filter.h    |    4 -
> > 3 files changed, 1192 insertions(+), 4 deletions(-)
> > delete mode 100644 drivers/net/ice/ice_switch_filter.h
> >
> >diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> >index 4d1698079..5014c7bf5 100644
> >--- a/doc/guides/rel_notes/release_19_11.rst
> >+++ b/doc/guides/rel_notes/release_19_11.rst
> >@@ -96,6 +96,7 @@ New Features
> >   * Added support for the ``RTE_ETH_DEV_CLOSE_REMOVE`` flag.
> >   * Generic filter enhancement
> >     - Supported pipeline mode.
> >+    - Supported new packet type like PPPoE for switch filter.
> >
> > * **Updated the enic driver.**
> >
> >diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
> >index 6fb6348b5..6c96b6f57 100644
> >--- a/drivers/net/ice/ice_switch_filter.c
> >+++ b/drivers/net/ice/ice_switch_filter.c
> >@@ -2,3 +2,1194 @@
> >  * Copyright(c) 2019 Intel Corporation
> >  */
> >
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include
> >+#include "base/ice_type.h"
> >+#include "base/ice_switch.h"
> >+#include "base/ice_type.h"
> >+#include "ice_logs.h"
> >+#include "ice_ethdev.h"
> >+#include "ice_generic_flow.h"
> >+
> >+
> >+#define MAX_QGRP_NUM_TYPE 7
> >+
> >+#define ICE_SW_INSET_ETHER ( \
> >+ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE)
> >+#define ICE_SW_INSET_MAC_IPV4 ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
> >+ ICE_INSET_IPV4_PROTO | ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS)
> >+#define ICE_SW_INSET_MAC_IPV4_TCP ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
> >+ ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
> >+ ICE_INSET_TCP_DST_PORT | ICE_INSET_TCP_SRC_PORT)
> >+#define ICE_SW_INSET_MAC_IPV4_UDP ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV4_DST | ICE_INSET_IPV4_SRC | \
> >+ ICE_INSET_IPV4_TTL | ICE_INSET_IPV4_TOS | \
> >+ ICE_INSET_UDP_DST_PORT | ICE_INSET_UDP_SRC_PORT)
> >+#define ICE_SW_INSET_MAC_IPV6 ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
> >+ ICE_INSET_IPV6_TC | ICE_INSET_IPV6_HOP_LIMIT | \
> >+ ICE_INSET_IPV6_NEXT_HDR)
> >+#define ICE_SW_INSET_MAC_IPV6_TCP ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
> >+ ICE_INSET_IPV6_HOP_LIMIT | ICE_INSET_IPV6_TC | \
> >+ ICE_INSET_TCP_DST_PORT | ICE_INSET_TCP_SRC_PORT)
> >+#define ICE_SW_INSET_MAC_IPV6_UDP ( \
> >+ ICE_INSET_DMAC | ICE_INSET_IPV6_DST | ICE_INSET_IPV6_SRC | \
> >+ ICE_INSET_IPV6_HOP_LIMIT | ICE_INSET_IPV6_TC | \
> >+ ICE_INSET_UDP_DST_PORT | ICE_INSET_UDP_SRC_PORT)
> >+#define ICE_SW_INSET_DIST_NVGRE_IPV4 ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_NVGRE_TNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_DIST_VXLAN_IPV4 ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_VXLAN_VNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_DIST_NVGRE_IPV4_TCP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_TCP_SRC_PORT | ICE_INSET_TUN_TCP_DST_PORT | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_NVGRE_TNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_DIST_NVGRE_IPV4_UDP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_UDP_SRC_PORT | ICE_INSET_TUN_UDP_DST_PORT | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_NVGRE_TNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_DIST_VXLAN_IPV4_TCP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_TCP_SRC_PORT | ICE_INSET_TUN_TCP_DST_PORT | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_VXLAN_VNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_DIST_VXLAN_IPV4_UDP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_UDP_SRC_PORT | ICE_INSET_TUN_UDP_DST_PORT | \
> >+ ICE_INSET_TUN_DMAC | ICE_INSET_TUN_VXLAN_VNI | ICE_INSET_IPV4_DST)
> >+#define ICE_SW_INSET_PERM_TUNNEL_IPV4 ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_IPV4_PROTO | ICE_INSET_TUN_IPV4_TOS)
> >+#define ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_TCP_SRC_PORT | ICE_INSET_TUN_TCP_DST_PORT | \
> >+ ICE_INSET_TUN_IPV4_TOS)
> >+#define ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP ( \
> >+ ICE_INSET_TUN_IPV4_SRC | ICE_INSET_TUN_IPV4_DST | \
> >+ ICE_INSET_TUN_UDP_SRC_PORT | ICE_INSET_TUN_UDP_DST_PORT | \
> >+ ICE_INSET_TUN_IPV4_TOS)
> >+#define ICE_SW_INSET_MAC_PPPOE ( \
> >+ ICE_INSET_VLAN_OUTER | ICE_INSET_VLAN_INNER | \
> >+ ICE_INSET_DMAC | ICE_INSET_ETHERTYPE)
> >+
> >+struct sw_meta {
> >+ struct ice_adv_lkup_elem *list;
> >+ uint16_t lkups_num;
> >+ struct ice_adv_rule_info rule_info;
> >+};
> >+
> >+static struct ice_flow_parser ice_switch_dist_parser_os;
> >+static struct ice_flow_parser ice_switch_dist_parser_comms;
> >+static struct ice_flow_parser ice_switch_perm_parser;
> >+
> >+static struct
> >+ice_pattern_match_item ice_switch_pattern_dist_comms[] = {
> >+ {pattern_ethertype,
> >+  ICE_SW_INSET_ETHER, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4,
> >+  ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp,
> >+  ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_tcp,
> >+  ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6,
> >+  ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_udp,
> >+  ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_tcp,
> >+  ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_udp,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_pppoed,
> >+  ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
> >+ {pattern_eth_vlan_pppoed,
> >+  ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
> >+ {pattern_eth_pppoes,
> >+  ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
> >+ {pattern_eth_vlan_pppoes,
> >+  ICE_SW_INSET_MAC_PPPOE, ICE_INSET_NONE},
> >+};
> >+
> >+static struct
> >+ice_pattern_match_item ice_switch_pattern_dist_os[] = {
> >+ {pattern_ethertype,
> >+  ICE_SW_INSET_ETHER, ICE_INSET_NONE},
> >+ {pattern_eth_arp,
> >+  ICE_INSET_NONE, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4,
> >+  ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp,
> >+  ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_tcp,
> >+  ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6,
> >+  ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_udp,
> >+  ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_tcp,
> >+  ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp,
> >+  ICE_SW_INSET_DIST_VXLAN_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_udp,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
> >+  ICE_SW_INSET_DIST_NVGRE_IPV4_TCP, ICE_INSET_NONE},
> >+};
> >+
> >+static struct
> >+ice_pattern_match_item ice_switch_pattern_perm[] = {
> >+ {pattern_eth_ipv4,
> >+  ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp,
> >+  ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_tcp,
> >+  ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6,
> >+  ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_udp,
> >+  ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv6_tcp,
> >+  ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_udp,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4_UDP, ICE_INSET_NONE},
> >+ {pattern_eth_ipv4_nvgre_eth_ipv4_tcp,
> >+  ICE_SW_INSET_PERM_TUNNEL_IPV4_TCP, ICE_INSET_NONE},
> >+};
> >+
> >+static int
> >+ice_switch_create(struct ice_adapter *ad,
> >+ struct rte_flow *flow,
> >+ void *meta,
> >+ struct rte_flow_error *error)
> >+{
> >+ int ret = 0;
> >+ struct ice_pf *pf = &ad->pf;
> >+ struct ice_hw *hw = ICE_PF_TO_HW(pf);
> >+ struct ice_rule_query_data rule_added = {0};
> >+ struct ice_rule_query_data *filter_ptr;
> >+ struct ice_adv_lkup_elem *list =
> >+ ((struct sw_meta *)meta)->list;
> >+ uint16_t lkups_cnt =
> >+ ((struct sw_meta *)meta)->lkups_num;
> >+ struct ice_adv_rule_info *rule_info =
> >+ &((struct sw_meta *)meta)->rule_info;
> >+
> >+ if (lkups_cnt > ICE_MAX_CHAIN_WORDS) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
> >+ "item number too large for rule");
> >+ goto error;
> >+ }
> >+ if (!list) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM, NULL,
> >+ "lookup list should not be NULL");
> >+ goto error;
> >+ }
> >+ ret = ice_add_adv_rule(hw, list, lkups_cnt, rule_info, &rule_added);
> >+ if (!ret) {
> >+ filter_ptr = rte_zmalloc("ice_switch_filter",
> >+ sizeof(struct ice_rule_query_data), 0);
> >+ if (!filter_ptr) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "No memory for ice_switch_filter");
> >+ goto error;
> >+ }
> >+ flow->rule = filter_ptr;
> >+ rte_memcpy(filter_ptr,
> >+ &rule_added,
> >+ sizeof(struct ice_rule_query_data));
> >+ } else {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "switch filter create flow fail");
> >+ goto error;
> >+ }
> >+
> >+ rte_free(list);
> >+ rte_free(meta);
> >+ return 0;
> >+
> >+error:
> >+ rte_free(list);
> >+ rte_free(meta);
> >+
> >+ return -rte_errno;
> >+}
> >+
> >+static int
> >+ice_switch_destroy(struct ice_adapter *ad,
> >+ struct rte_flow *flow,
> >+ struct rte_flow_error *error)
> >+{
> >+ struct ice_hw *hw = &ad->hw;
> >+ int ret;
> >+ struct ice_rule_query_data *filter_ptr;
> >+
> >+ filter_ptr = (struct ice_rule_query_data *)
> >+ flow->rule;
> >+
> >+ if (!filter_ptr) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "no such flow"
> >+ " created by switch filter");
> >+ return -rte_errno;
> >+ }
> >+
> >+ ret = ice_rem_adv_rule_by_id(hw, filter_ptr);
> >+ if (ret) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "fail to destroy switch filter rule");
> >+ return -rte_errno;
> >+ }
> >+
> >+ rte_free(filter_ptr);
> >+ return ret;
> >+}
> >+
> >+static void
> >+ice_switch_filter_rule_free(struct rte_flow *flow)
> >+{
> >+ rte_free(flow->rule);
> >+}
> >+
> >+static uint64_t
> >+ice_switch_inset_get(const struct rte_flow_item pattern[],
> >+ struct rte_flow_error *error,
> >+ struct ice_adv_lkup_elem *list,
> >+ uint16_t *lkups_num,
> >+ enum ice_sw_tunnel_type tun_type)
> >+{
> >+ const struct rte_flow_item *item = pattern;
> >+ enum rte_flow_item_type item_type;
> >+ const struct rte_flow_item_eth *eth_spec, *eth_mask;
> >+ const struct rte_flow_item_ipv4 *ipv4_spec, *ipv4_mask;
> >+ const struct rte_flow_item_ipv6 *ipv6_spec, *ipv6_mask;
> >+ const struct rte_flow_item_tcp *tcp_spec, *tcp_mask;
> >+ const struct rte_flow_item_udp *udp_spec, *udp_mask;
> >+ const struct rte_flow_item_sctp *sctp_spec, *sctp_mask;
> >+ const struct rte_flow_item_nvgre *nvgre_spec, *nvgre_mask;
> >+ const struct rte_flow_item_vxlan *vxlan_spec, *vxlan_mask;
> >+ const struct rte_flow_item_vlan *vlan_spec, *vlan_mask;
> >+ const struct rte_flow_item_pppoe *pppoe_spec, *pppoe_mask;
> >+ uint8_t ipv6_addr_mask[16] = {
> >+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
> >+ 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF};
> >+ uint64_t input_set = ICE_INSET_NONE;
> >+ uint16_t j, t = 0;
> >+ uint16_t tunnel_valid = 0;
> >+
> >+
> >+ for (item = pattern; item->type !=
> >+ RTE_FLOW_ITEM_TYPE_END; item++) {
> >+ if (item->last) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Not support range");
> >+ return 0;
> >+ }
> >+ item_type = item->type;
> >+
> >+ switch (item_type) {
> >+ case RTE_FLOW_ITEM_TYPE_ETH:
> >+ eth_spec = item->spec;
> >+ eth_mask = item->mask;
> >+ if (eth_spec && eth_mask) {
> >+ if (tunnel_valid &&
> >+     rte_is_broadcast_ether_addr(&eth_mask->src))
> >+ input_set |= ICE_INSET_TUN_SMAC;
> >+ else if (
> >+     rte_is_broadcast_ether_addr(&eth_mask->src))
> >+ input_set |= ICE_INSET_SMAC;
> >+ if (tunnel_valid &&
> >+     rte_is_broadcast_ether_addr(&eth_mask->dst))
> >+ input_set |= ICE_INSET_TUN_DMAC;
> >+ else if (
> >+     rte_is_broadcast_ether_addr(&eth_mask->dst))
> >+ input_set |= ICE_INSET_DMAC;
> >+ if (eth_mask->type == RTE_BE16(0xffff))
> >+ input_set |= ICE_INSET_ETHERTYPE;
> >+ list[t].type = (tunnel_valid == 0) ?
> >+ ICE_MAC_OFOS : ICE_MAC_IL;
> >+ struct ice_ether_hdr *h;
> >+ struct ice_ether_hdr *m;
> >+ uint16_t i = 0;
> >+ h = &list[t].h_u.eth_hdr;
> >+ m = &list[t].m_u.eth_hdr;
> >+ for (j = 0; j < RTE_ETHER_ADDR_LEN; j++) {
> >+ if (eth_mask->src.addr_bytes[j] == UINT8_MAX) {
> >+ h->src_addr[j] = eth_spec->src.addr_bytes[j];
> >+ m->src_addr[j] = eth_mask->src.addr_bytes[j];
> >+ i = 1;
> >+ }
> >+ if (eth_mask->dst.addr_bytes[j] == UINT8_MAX) {
> >+ h->dst_addr[j] = eth_spec->dst.addr_bytes[j];
> >+ m->dst_addr[j] = eth_mask->dst.addr_bytes[j];
> >+ i = 1;
> >+ }
> >+ }
> >+ if (i)
> >+ t++;
> >+ if (eth_mask->type == UINT16_MAX) {
> >+ list[t].type = ICE_ETYPE_OL;
> >+ list[t].h_u.ethertype.ethtype_id = eth_spec->type;
> >+ list[t].m_u.ethertype.ethtype_id = UINT16_MAX;
> >+ t++;
> >+ }
> >+ } else if (!eth_spec && !eth_mask) {
> >+ list[t].type = (tun_type == ICE_NON_TUN) ?
> >+ ICE_MAC_OFOS : ICE_MAC_IL;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_IPV4:
> >+ ipv4_spec = item->spec;
> >+ ipv4_mask = item->mask;
> >+ if (ipv4_spec && ipv4_mask) {
> >+ /* Check IPv4 mask and update input set */
> >+ if (ipv4_mask->hdr.version_ihl ||
> >+ ipv4_mask->hdr.total_length ||
> >+ ipv4_mask->hdr.packet_id ||
> >+ ipv4_mask->hdr.hdr_checksum) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid IPv4 mask.");
> >+ return 0;
> >+ }
> >+
> >+ if (tunnel_valid) {
> >+ if (ipv4_mask->hdr.type_of_service == UINT8_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV4_TOS;
> >+ if (ipv4_mask->hdr.src_addr == UINT32_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV4_SRC;
> >+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV4_DST;
> >+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV4_TTL;
> >+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV4_PROTO;
> >+ } else {
> >+ if (ipv4_mask->hdr.src_addr == UINT32_MAX)
> >+ input_set |= ICE_INSET_IPV4_SRC;
> >+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX)
> >+ input_set |= ICE_INSET_IPV4_DST;
> >+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX)
> >+ input_set |= ICE_INSET_IPV4_TTL;
> >+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX)
> >+ input_set |= ICE_INSET_IPV4_PROTO;
> >+ if (ipv4_mask->hdr.type_of_service == UINT8_MAX)
> >+ input_set |= ICE_INSET_IPV4_TOS;
> >+ }
> >+ list[t].type = (tunnel_valid == 0) ?
> >+ ICE_IPV4_OFOS : ICE_IPV4_IL;
> >+ if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
> >+ list[t].h_u.ipv4_hdr.src_addr = ipv4_spec->hdr.src_addr;
> >+ list[t].m_u.ipv4_hdr.src_addr = UINT32_MAX;
> >+ }
> >+ if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
> >+ list[t].h_u.ipv4_hdr.dst_addr = ipv4_spec->hdr.dst_addr;
> >+ list[t].m_u.ipv4_hdr.dst_addr = UINT32_MAX;
> >+ }
> >+ if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
> >+ list[t].h_u.ipv4_hdr.time_to_live = ipv4_spec->hdr.time_to_live;
> >+ list[t].m_u.ipv4_hdr.time_to_live = UINT8_MAX;
> >+ }
> >+ if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
> >+ list[t].h_u.ipv4_hdr.protocol = ipv4_spec->hdr.next_proto_id;
> >+ list[t].m_u.ipv4_hdr.protocol = UINT8_MAX;
> >+ }
> >+ if (ipv4_mask->hdr.type_of_service == UINT8_MAX) {
> >+ list[t].h_u.ipv4_hdr.tos = ipv4_spec->hdr.type_of_service;
> >+ list[t].m_u.ipv4_hdr.tos = UINT8_MAX;
> >+ }
> >+ t++;
> >+ } else if (!ipv4_spec && !ipv4_mask) {
> >+ list[t].type = (tunnel_valid == 0) ?
> >+ ICE_IPV4_OFOS : ICE_IPV4_IL;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_IPV6:
> >+ ipv6_spec = item->spec;
> >+ ipv6_mask = item->mask;
> >+ if (ipv6_spec && ipv6_mask) {
> >+ if (ipv6_mask->hdr.payload_len) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid IPv6 mask");
> >+ return 0;
> >+ }
> >+
> >+ if (tunnel_valid) {
> >+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
> >+ RTE_DIM(ipv6_mask->hdr.src_addr)))
> >+ input_set |= ICE_INSET_TUN_IPV6_SRC;
> >+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
> >+ RTE_DIM(ipv6_mask->hdr.dst_addr)))
> >+ input_set |= ICE_INSET_TUN_IPV6_DST;
> >+ if (ipv6_mask->hdr.proto == UINT8_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV6_NEXT_HDR;
> >+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
> >+ input_set |= ICE_INSET_TUN_IPV6_HOP_LIMIT;
> >+ if ((ipv6_mask->hdr.vtc_flow &
> >+ rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK))
> >+ == rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK))
> >+ input_set |= ICE_INSET_TUN_IPV6_TC;
> >+ } else {
> >+ if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
> >+ RTE_DIM(ipv6_mask->hdr.src_addr)))
> >+ input_set |= ICE_INSET_IPV6_SRC;
> >+ if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
> >+ RTE_DIM(ipv6_mask->hdr.dst_addr)))
> >+ input_set |= ICE_INSET_IPV6_DST;
> >+ if (ipv6_mask->hdr.proto == UINT8_MAX)
> >+ input_set |= ICE_INSET_IPV6_NEXT_HDR;
> >+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX)
> >+ input_set |= ICE_INSET_IPV6_HOP_LIMIT;
> >+ if ((ipv6_mask->hdr.vtc_flow &
> >+ rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK))
> >+ == rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK))
> >+ input_set |= ICE_INSET_IPV6_TC;
> >+ }
> >+ list[t].type = (tunnel_valid == 0) ?
> >+ ICE_IPV6_OFOS : ICE_IPV6_IL;
> >+ struct ice_ipv6_hdr *f;
> >+ struct ice_ipv6_hdr *s;
> >+ f = &list[t].h_u.ipv6_hdr;
> >+ s = &list[t].m_u.ipv6_hdr;
> >+ for (j = 0; j < ICE_IPV6_ADDR_LENGTH; j++) {
> >+ if (ipv6_mask->hdr.src_addr[j] == UINT8_MAX) {
> >+ f->src_addr[j] = ipv6_spec->hdr.src_addr[j];
> >+ s->src_addr[j] = ipv6_mask->hdr.src_addr[j];
> >+ }
> >+ if (ipv6_mask->hdr.dst_addr[j] == UINT8_MAX) {
> >+ f->dst_addr[j] = ipv6_spec->hdr.dst_addr[j];
> >+ s->dst_addr[j] = ipv6_mask->hdr.dst_addr[j];
> >+ }
> >+ }
> >+ if (ipv6_mask->hdr.proto == UINT8_MAX) {
> >+ f->next_hdr = ipv6_spec->hdr.proto;
> >+ s->next_hdr = UINT8_MAX;
> >+ }
> >+ if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> >+ f->hop_limit = ipv6_spec->hdr.hop_limits;
> >+ s->hop_limit = UINT8_MAX;
> >+ }
> >+ if ((ipv6_mask->hdr.vtc_flow &
> >+ rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK))
> >+ == rte_cpu_to_be_32(RTE_IPV6_HDR_TC_MASK)) {
> >+ f->tc = (rte_be_to_cpu_32(ipv6_spec->hdr.vtc_flow) &
> >+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
> >+ s->tc = UINT8_MAX;
> >+ }
> >+ t++;
> >+ } else if (!ipv6_spec && !ipv6_mask) {
> >+ list[t].type = (tun_type == ICE_NON_TUN) ?
> >+ ICE_IPV6_OFOS : ICE_IPV6_IL;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_UDP:
> >+ udp_spec = item->spec;
> >+ udp_mask = item->mask;
> >+ if (udp_spec && udp_mask) {
> >+ /* Check UDP mask and update input set */
> >+ if (udp_mask->hdr.dgram_len ||
> >+ udp_mask->hdr.dgram_cksum) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid UDP mask");
> >+ return 0;
> >+ }
> >+
> >+ if (tunnel_valid) {
> >+ if (udp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_UDP_SRC_PORT;
> >+ if (udp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_UDP_DST_PORT;
> >+ } else {
> >+ if (udp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_UDP_SRC_PORT;
> >+ if (udp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_UDP_DST_PORT;
> >+ }
> >+ if (tun_type == ICE_SW_TUN_VXLAN &&
> >+ tunnel_valid == 0)
> >+ list[t].type = ICE_UDP_OF;
> >+ else
> >+ list[t].type = ICE_UDP_ILOS;
> >+ if (udp_mask->hdr.src_port == UINT16_MAX) {
> >+ list[t].h_u.l4_hdr.src_port = udp_spec->hdr.src_port;
> >+ list[t].m_u.l4_hdr.src_port = udp_mask->hdr.src_port;
> >+ }
> >+ if (udp_mask->hdr.dst_port == UINT16_MAX) {
> >+ list[t].h_u.l4_hdr.dst_port = udp_spec->hdr.dst_port;
> >+ list[t].m_u.l4_hdr.dst_port = udp_mask->hdr.dst_port;
> >+ }
> >+ t++;
> >+ } else if (!udp_spec && !udp_mask) {
> >+ list[t].type = ICE_UDP_ILOS;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_TCP:
> >+ tcp_spec = item->spec;
> >+ tcp_mask = item->mask;
> >+ if (tcp_spec && tcp_mask) {
> >+ /* Check TCP mask and update input set */
> >+ if (tcp_mask->hdr.sent_seq ||
> >+ tcp_mask->hdr.recv_ack ||
> >+ tcp_mask->hdr.data_off ||
> >+ tcp_mask->hdr.tcp_flags ||
> >+ tcp_mask->hdr.rx_win ||
> >+ tcp_mask->hdr.cksum ||
> >+ tcp_mask->hdr.tcp_urp) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid TCP mask");
> >+ return 0;
> >+ }
> >+
> >+ if (tunnel_valid) {
> >+ if (tcp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_TCP_SRC_PORT;
> >+ if (tcp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_TCP_DST_PORT;
> >+ } else {
> >+ if (tcp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TCP_SRC_PORT;
> >+ if (tcp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TCP_DST_PORT;
> >+ }
> >+ list[t].type = ICE_TCP_IL;
> >+ if (tcp_mask->hdr.src_port == UINT16_MAX) {
> >+ list[t].h_u.l4_hdr.src_port = tcp_spec->hdr.src_port;
> >+ list[t].m_u.l4_hdr.src_port = tcp_mask->hdr.src_port;
> >+ }
> >+ if (tcp_mask->hdr.dst_port == UINT16_MAX) {
> >+ list[t].h_u.l4_hdr.dst_port = tcp_spec->hdr.dst_port;
> >+ list[t].m_u.l4_hdr.dst_port = tcp_mask->hdr.dst_port;
> >+ }
> >+ t++;
> >+ } else if (!tcp_spec && !tcp_mask) {
> >+ list[t].type = ICE_TCP_IL;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_SCTP:
> >+ sctp_spec = item->spec;
> >+ sctp_mask = item->mask;
> >+ if (sctp_spec && sctp_mask) {
> >+ /* Check SCTP mask and update input set */
> >+ if (sctp_mask->hdr.cksum) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid SCTP mask");
> >+ return 0;
> >+ }
> >+
> >+ if (tunnel_valid) {
> >+ if (sctp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_SCTP_SRC_PORT;
> >+ if (sctp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_TUN_SCTP_DST_PORT;
> >+ } else {
> >+ if (sctp_mask->hdr.src_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_SCTP_SRC_PORT;
> >+ if (sctp_mask->hdr.dst_port == UINT16_MAX)
> >+ input_set |= ICE_INSET_SCTP_DST_PORT;
> >+ }
> >+ list[t].type = ICE_SCTP_IL;
> >+ if (sctp_mask->hdr.src_port == UINT16_MAX) {
> >+ list[t].h_u.sctp_hdr.src_port = sctp_spec->hdr.src_port;
> >+ list[t].m_u.sctp_hdr.src_port = sctp_mask->hdr.src_port;
> >+ }
> >+ if (sctp_mask->hdr.dst_port == UINT16_MAX) {
> >+ list[t].h_u.sctp_hdr.dst_port = sctp_spec->hdr.dst_port;
> >+ list[t].m_u.sctp_hdr.dst_port = sctp_mask->hdr.dst_port;
> >+ }
> >+ t++;
> >+ } else if (!sctp_spec && !sctp_mask) {
> >+ list[t].type = ICE_SCTP_IL;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_VXLAN:
> >+ vxlan_spec = item->spec;
> >+ vxlan_mask = item->mask;
> >+ /* Check if VXLAN item is used to describe protocol.
> >+  * If yes, both spec and mask should be NULL.
> >+  * If no, both spec and mask shouldn't be NULL.
> >+  */
> >+ if ((!vxlan_spec && vxlan_mask) ||
> >+ (vxlan_spec && !vxlan_mask)) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid VXLAN item");
> >+ return 0;
> >+ }
> >+
> >+ tunnel_valid = 1;
> >+ if (vxlan_spec && vxlan_mask) {
> >+ list[t].type = ICE_VXLAN;
> >+ if (vxlan_mask->vni[0] == UINT8_MAX &&
> >+ vxlan_mask->vni[1] == UINT8_MAX &&
> >+ vxlan_mask->vni[2] == UINT8_MAX) {
> >+ list[t].h_u.tnl_hdr.vni =
> >+ (vxlan_spec->vni[2] << 16) |
> >+ (vxlan_spec->vni[1] << 8) |
> >+ vxlan_spec->vni[0];
> >+ list[t].m_u.tnl_hdr.vni = UINT32_MAX;
> >+ input_set |= ICE_INSET_TUN_VXLAN_VNI;
> >+ }
> >+ t++;
> >+ } else if (!vxlan_spec && !vxlan_mask) {
> >+ list[t].type = ICE_VXLAN;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_NVGRE:
> >+ nvgre_spec = item->spec;
> >+ nvgre_mask = item->mask;
> >+ /* Check if NVGRE item is used to describe protocol.
> >+  * If yes, both spec and mask should be NULL.
> >+  * If no, both spec and mask shouldn't be NULL.
> >+  */
> >+ if ((!nvgre_spec && nvgre_mask) ||
> >+ (nvgre_spec && !nvgre_mask)) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid NVGRE item");
> >+ return 0;
> >+ }
> >+ tunnel_valid = 1;
> >+ if (nvgre_spec && nvgre_mask) {
> >+ list[t].type = ICE_NVGRE;
> >+ if (nvgre_mask->tni[0] == UINT8_MAX &&
> >+ nvgre_mask->tni[1] == UINT8_MAX &&
> >+ nvgre_mask->tni[2] == UINT8_MAX) {
> >+ list[t].h_u.nvgre_hdr.tni_flow =
> >+ (nvgre_spec->tni[2] << 16) |
> >+ (nvgre_spec->tni[1] << 8) |
> >+ nvgre_spec->tni[0];
> >+ list[t].m_u.nvgre_hdr.tni_flow = UINT32_MAX;
> >+ input_set |= ICE_INSET_TUN_NVGRE_TNI;
> >+ }
> >+ t++;
> >+ } else if (!nvgre_spec && !nvgre_mask) {
> >+ list[t].type = ICE_NVGRE;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_VLAN:
> >+ vlan_spec = item->spec;
> >+ vlan_mask = item->mask;
> >+ /* Check if VLAN item is used to describe protocol.
> >+  * If yes, both spec and mask should be NULL.
> >+  * If no, both spec and mask shouldn't be NULL.
> >+  */
> >+ if ((!vlan_spec && vlan_mask) ||
> >+ (vlan_spec && !vlan_mask)) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid VLAN item");
> >+ return 0;
> >+ }
> >+ if (vlan_spec && vlan_mask) {
> >+ list[t].type = ICE_VLAN_OFOS;
> >+ if (vlan_mask->tci == UINT16_MAX) {
> >+ list[t].h_u.vlan_hdr.vlan = vlan_spec->tci;
> >+ list[t].m_u.vlan_hdr.vlan = UINT16_MAX;
> >+ input_set |= ICE_INSET_VLAN_OUTER;
> >+ }
> >+ if (vlan_mask->inner_type == UINT16_MAX) {
> >+ list[t].h_u.vlan_hdr.type = vlan_spec->inner_type;
> >+ list[t].m_u.vlan_hdr.type = UINT16_MAX;
> >+ input_set |= ICE_INSET_VLAN_OUTER;
> >+ }
> >+ t++;
> >+ } else if (!vlan_spec && !vlan_mask) {
> >+ list[t].type = ICE_VLAN_OFOS;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_PPPOED:
> >+ case RTE_FLOW_ITEM_TYPE_PPPOES:
> >+ pppoe_spec = item->spec;
> >+ pppoe_mask = item->mask;
> >+ /* Check if PPPoE item is used to describe protocol.
> >+  * If yes, both spec and mask should be NULL.
> >+  */
> >+ if (pppoe_spec || pppoe_mask) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM,
> >+ item,
> >+ "Invalid pppoe item");
> >+ return 0;
> >+ }
> >+ break;
> >+
> >+ case RTE_FLOW_ITEM_TYPE_VOID:
> >+ break;
> >+
> >+ default:
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM, pattern,
> >+ "Invalid pattern item.");
> >+ goto out;
> >+ }
> >+ }
> >+
> >+ *lkups_num = t;
> >+
> >+ return input_set;
> >+out:
> >+ return 0;
> >+}
> >+
> >+
> >+static int
> >+ice_switch_parse_action(struct ice_pf *pf,
> >+ const struct rte_flow_action *actions,
> >+ struct rte_flow_error *error,
> >+ struct ice_adv_rule_info *rule_info)
> >+{
> >+ struct ice_vsi *vsi = pf->main_vsi;
> >+ struct rte_eth_dev *dev = pf->adapter->eth_dev;
> >+ const struct rte_flow_action_queue *act_q;
> >+ const struct rte_flow_action_rss *act_qgrop;
> >+ uint16_t base_queue, i;
> >+ const struct rte_flow_action *action;
> >+ enum rte_flow_action_type action_type;
> >+ uint16_t valid_qgrop_number[MAX_QGRP_NUM_TYPE] = {
> >+ 2, 4, 8, 16, 32, 64, 128};
> >+
> >+ base_queue = pf->base_queue;
> >+ for (action = actions; action->type !=
> >+ RTE_FLOW_ACTION_TYPE_END; action++) {
> >+ action_type = action->type;
> >+ switch (action_type) {
> >+ case RTE_FLOW_ACTION_TYPE_RSS:
> >+ act_qgrop = action->conf;
> >+ rule_info->sw_act.fltr_act = ICE_FWD_TO_QGRP;
> >+ rule_info->sw_act.fwd_id.q_id =
> >+ base_queue + act_qgrop->queue[0];
> >+ for (i = 0; i < MAX_QGRP_NUM_TYPE; i++) {
> >+ if (act_qgrop->queue_num == valid_qgrop_number[i])
> >+ break;
> >+ }
> >+ if (i == MAX_QGRP_NUM_TYPE)
> >+ goto error;
> >+ if ((act_qgrop->queue[0] + act_qgrop->queue_num) >
> >+ dev->data->nb_rx_queues)
> >+ goto error;
> >+ for (i = 0; i < act_qgrop->queue_num - 1; i++)
> >+ if (act_qgrop->queue[i + 1] != act_qgrop->queue[i] + 1)
> >+ goto error;
> >+ rule_info->sw_act.qgrp_size = act_qgrop->queue_num;
> >+ break;
> >+ case RTE_FLOW_ACTION_TYPE_QUEUE:
> >+ act_q = action->conf;
> >+ if (act_q->index >= dev->data->nb_rx_queues)
> >+ goto error;
> >+ rule_info->sw_act.fltr_act = ICE_FWD_TO_Q;
> >+ rule_info->sw_act.fwd_id.q_id = base_queue + act_q->index;
> >+ break;
> >+
> >+ case RTE_FLOW_ACTION_TYPE_DROP:
> >+ rule_info->sw_act.fltr_act = ICE_DROP_PACKET;
> >+ break;
> >+
> >+ case RTE_FLOW_ACTION_TYPE_VOID:
> >+ break;
> >+
> >+ default:
> >+ goto error;
> >+ }
> >+ }
> >+
> >+ rule_info->sw_act.vsi_handle = vsi->idx;
> >+ rule_info->rx = 1;
> >+ rule_info->sw_act.src = vsi->idx;
> >+ rule_info->priority = 5;
> >+
> >+ return 0;
> >+
> >+error:
> >+ rte_flow_error_set(error,
> >+ EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> >+ actions,
> >+ "Invalid action type or queue number");
> >+ return -rte_errno;
> >+}
> >+
> >+static int
> >+ice_switch_parse_pattern_action(struct ice_adapter *ad,
> >+ struct ice_pattern_match_item *array,
> >+ uint32_t array_len,
> >+ const struct rte_flow_item pattern[],
> >+ const struct rte_flow_action actions[],
> >+ void **meta,
> >+ struct rte_flow_error *error)
> >+{
> >+ struct ice_pf *pf = &ad->pf;
> >+ uint64_t inputset = 0;
> >+ int ret = 0;
> >+ struct sw_meta *sw_meta_ptr = NULL;
> >+ struct ice_adv_rule_info rule_info;
> >+ struct ice_adv_lkup_elem *list = NULL;
> >+ uint16_t lkups_num = 0;
> >+ const struct rte_flow_item *item = pattern;
> >+ uint16_t item_num = 0;
> >+ enum ice_sw_tunnel_type tun_type = ICE_NON_TUN;
> >+ struct ice_pattern_match_item *pattern_match_item = NULL;
> >+
> >+ for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
> >+ item_num++;
> >+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
> >+ tun_type = ICE_SW_TUN_VXLAN;
> >+ if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
> >+ tun_type = ICE_SW_TUN_NVGRE;
> >+ if (item->type == RTE_FLOW_ITEM_TYPE_PPPOED ||
> >+ item->type == RTE_FLOW_ITEM_TYPE_PPPOES)
> >+ tun_type = ICE_SW_TUN_PPPOE;
> >+ if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> >+ const struct rte_flow_item_eth *eth_mask;
> >+ if (item->mask)
> >+ eth_mask = item->mask;
> >+ else
> >+ continue;
> >+ if (eth_mask->type == UINT16_MAX)
> >+ tun_type = ICE_SW_TUN_AND_NON_TUN;
> >+ }
> >+ /* reserve one more memory slot for ETH which may
> >+  * consume 2 lookup items.
> >+  */
> >+ if (item->type == RTE_FLOW_ITEM_TYPE_ETH)
> >+ item_num++;
> >+ }
> >+
> >+ list = rte_zmalloc(NULL, item_num * sizeof(*list), 0);
> >+ if (!list) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "No memory for PMD internal items");
> >+ return -rte_errno;
> >+ }
> >+
> >+ rule_info.tun_type = tun_type;
> >+
> >+ sw_meta_ptr = rte_zmalloc(NULL, sizeof(*sw_meta_ptr), 0);
> >+ if (!sw_meta_ptr) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "No memory for sw_pattern_meta_ptr");
> >+ goto error;
> >+ }
> >+
> >+ pattern_match_item =
> >+ ice_search_pattern_match_item(pattern, array, array_len, error);
> >+ if (!pattern_match_item) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "Invalid input pattern");
> >+ goto error;
> >+ }
> >+
> >+ inputset = ice_switch_inset_get
> >+ (pattern, error, list, &lkups_num, tun_type);
> >+ if (!inputset || (inputset & ~pattern_match_item->input_set_mask)) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_ITEM_SPEC,
> >+ pattern,
> >+ "Invalid input set");
> >+ goto error;
> >+ }
> >+
> >+ ret = ice_switch_parse_action(pf, actions, error, &rule_info);
> >+ if (ret) {
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE, NULL,
> >+ "Invalid input action");
> >+ goto error;
> >+ }
> >+ *meta = sw_meta_ptr;
> >+ ((struct sw_meta *)*meta)->list = list;
> >+ ((struct sw_meta *)*meta)->lkups_num = lkups_num;
> >+ ((struct sw_meta *)*meta)->rule_info = rule_info;
> >+ rte_free(pattern_match_item);
> >+
> >+ return 0;
> >+
> >+error:
> >+ rte_free(list);
> >+ rte_free(sw_meta_ptr);
> >+ rte_free(pattern_match_item);
> >+
> >+ return -rte_errno;
> >+}
> >+
> >+static int
> >+ice_switch_query(struct ice_adapter *ad __rte_unused,
> >+ struct rte_flow *flow __rte_unused,
> >+ struct rte_flow_query_count *count __rte_unused,
> >+ struct rte_flow_error *error)
> >+{
> >+ rte_flow_error_set(error, EINVAL,
> >+ RTE_FLOW_ERROR_TYPE_HANDLE,
> >+ NULL,
> >+ "count action not supported by switch filter");
> >+
> >+ return -rte_errno;
> >+}
> >+
> >+static int
> >+ice_switch_init(struct ice_adapter *ad)
> >+{
> >+ int ret = 0;
> >+ struct ice_flow_parser *dist_parser;
> >+ struct ice_flow_parser *perm_parser = &ice_switch_perm_parser;
> >+
> >+ if (ad->active_pkg_type == ICE_PKG_TYPE_COMMS)
> >+ dist_parser = &ice_switch_dist_parser_comms;
> >+ else
> >+ dist_parser = &ice_switch_dist_parser_os;
> >+
> >+ if (ad->devargs.pipe_mode_support)
> >+ ret = ice_register_parser(perm_parser, ad);
> >+ else
> >+ ret = ice_register_parser(dist_parser, ad);
> >+ return ret;
> >+}
> >+
> >+static void
> >+ice_switch_uninit(struct ice_adapter *ad)
> >+{
> >+ struct ice_flow_parser *dist_parser;
> >+ struct ice_flow_parser *perm_parser = &ice_switch_perm_parser;
> >+
> >+ if (ad->active_pkg_type == ICE_PKG_TYPE_COMMS)
> >+ dist_parser = &ice_switch_dist_parser_comms;
> >+ else
> >+ dist_parser = &ice_switch_dist_parser_os;
> >+
> >+ if (ad->devargs.pipe_mode_support)
> >+ ice_unregister_parser(perm_parser, ad);
> >+ else
> >+ ice_unregister_parser(dist_parser, ad);
> >+}
> >+
> >+static struct
> >+ice_flow_engine ice_switch_engine = {
> >+ .init = ice_switch_init,
> >+ .uninit = ice_switch_uninit,
> >+ .create = ice_switch_create,
> >+ .destroy = ice_switch_destroy,
> >+ .query_count = ice_switch_query,
> >+ .free = ice_switch_filter_rule_free,
> >+ .type = ICE_FLOW_ENGINE_SWITCH,
> >+};
> >+
> >+static struct
> >+ice_flow_parser ice_switch_dist_parser_os = {
> >+ .engine = &ice_switch_engine,
> >+ .array = ice_switch_pattern_dist_os,
> >+ .array_len = RTE_DIM(ice_switch_pattern_dist_os),
> >+ .parse_pattern_action = ice_switch_parse_pattern_action,
> >+ .stage = ICE_FLOW_STAGE_DISTRIBUTOR,
> >+};
> >+
> >+static struct
> >+ice_flow_parser ice_switch_dist_parser_comms = {
> >+ .engine = &ice_switch_engine,
> >+ .array = ice_switch_pattern_dist_comms,
> >+ .array_len = RTE_DIM(ice_switch_pattern_dist_comms),
> >+ .parse_pattern_action = ice_switch_parse_pattern_action,
> >+ .stage = ICE_FLOW_STAGE_DISTRIBUTOR,
> >+};
> >+
> >+static struct
> >+ice_flow_parser ice_switch_perm_parser = {
> >+ .engine = &ice_switch_engine,
> >+ .array = ice_switch_pattern_perm,
> >+ .array_len = RTE_DIM(ice_switch_pattern_perm),
> >+ .parse_pattern_action = ice_switch_parse_pattern_action,
> >+ .stage = ICE_FLOW_STAGE_PERMISSION,
> >+};
> >+
> >+RTE_INIT(ice_sw_engine_init)
> >+{
> >+ struct ice_flow_engine *engine = &ice_switch_engine;
> >+ ice_register_flow_engine(engine);
> >+}
> >diff --git a/drivers/net/ice/ice_switch_filter.h b/drivers/net/ice/ice_switch_filter.h
> >deleted file mode 100644
> >index 6fb6348b5..000000000
> >--- a/drivers/net/ice/ice_switch_filter.h
> >+++ /dev/null
> >@@ -1,4 +0,0 @@
> >-/* SPDX-License-Identifier: BSD-3-Clause
> >- * Copyright(c) 2019 Intel Corporation
> >- */
> >-
> >--
> >2.15.1
> >