From: Wei Zhao
To: dev@dpdk.org
Cc: stable@dpdk.org, qi.z.zhang@intel.com, nannan.lu@intel.com, Wei Zhao
Date: Mon, 29 Jun 2020 13:10:28 +0800
Message-Id: <20200629051030.3541-4-wei.zhao1@intel.com>
X-Mailer: git-send-email 2.19.1
In-Reply-To: <20200629051030.3541-1-wei.zhao1@intel.com>
References: <20200628052857.67428-1-wei.zhao1@intel.com>
 <20200629051030.3541-1-wei.zhao1@intel.com>
Subject: [dpdk-dev] [PATCH v5 3/5] net/ice: support switch flow for specific L4 type

This patch adds more specific tunnel types for IPv4/IPv6 packets. It enables
the TCP/UDP layer of IPv4/IPv6 to be matched as L4 payload, without the L4
dst/src port numbers in the input set, for switch filter rules.
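For illustration, the sketch below (not part of the patch) shows an rte_flow
rule that exercises the new path: an IPv4/UDP pattern whose UDP item carries
no spec/mask, so only the L4 type is matched and the switch filter can pick
the ICE_SW_IPV4_UDP tunnel type added here. Port id 0, queue index 1, and the
helper name create_ipv4_udp_any_port_rule() are placeholder choices for this
sketch, not values taken from the patch.

/*
 * Illustrative sketch only: match any IPv4/UDP packet without giving
 * dst/src port numbers, directing it to a queue.  Placeholder values:
 * queue index 1, caller-supplied port_id.
 */
#include <stdint.h>
#include <rte_flow.h>

static struct rte_flow *
create_ipv4_udp_any_port_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		/* UDP item with no spec/mask: L4 type only, no port match. */
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	if (rte_flow_validate(port_id, &attr, pattern, actions, err))
		return NULL;
	return rte_flow_create(port_id, &attr, pattern, actions, err);
}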
Fixes: 47d460d63233 ("net/ice: rework switch filter")
Cc: stable@dpdk.org

Signed-off-by: Wei Zhao
---
 drivers/net/ice/ice_switch_filter.c | 26 ++++++++++++++++++++------
 1 file changed, 20 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c
index c607e8d17..7d1cd98f5 100644
--- a/drivers/net/ice/ice_switch_filter.c
+++ b/drivers/net/ice/ice_switch_filter.c
@@ -474,8 +474,10 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 	bool pppoe_elem_valid = 0;
 	bool pppoe_patt_valid = 0;
 	bool pppoe_prot_valid = 0;
-	bool profile_rule = 0;
 	bool tunnel_valid = 0;
+	bool profile_rule = 0;
+	bool nvgre_valid = 0;
+	bool vxlan_valid = 0;
 	bool ipv6_valiad = 0;
 	bool ipv4_valiad = 0;
 	bool udp_valiad = 0;
@@ -923,7 +925,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid VXLAN item");
 				return 0;
 			}
-
+			vxlan_valid = 1;
 			tunnel_valid = 1;
 			if (vxlan_spec && vxlan_mask) {
 				list[t].type = ICE_VXLAN;
@@ -960,6 +962,7 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 					   "Invalid NVGRE item");
 				return 0;
 			}
+			nvgre_valid = 1;
 			tunnel_valid = 1;
 			if (nvgre_spec && nvgre_mask) {
 				list[t].type = ICE_NVGRE;
@@ -1325,6 +1328,21 @@ ice_switch_inset_get(const struct rte_flow_item pattern[],
 			*tun_type = ICE_SW_TUN_PPPOE;
 	}
 
+	if (*tun_type == ICE_NON_TUN) {
+		if (vxlan_valid)
+			*tun_type = ICE_SW_TUN_VXLAN;
+		else if (nvgre_valid)
+			*tun_type = ICE_SW_TUN_NVGRE;
+		else if (ipv4_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV4_TCP;
+		else if (ipv4_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV4_UDP;
+		else if (ipv6_valiad && tcp_valiad)
+			*tun_type = ICE_SW_IPV6_TCP;
+		else if (ipv6_valiad && udp_valiad)
+			*tun_type = ICE_SW_IPV6_UDP;
+	}
+
 	*lkups_num = t;
 
 	return input_set;
@@ -1536,10 +1554,6 @@ ice_switch_parse_pattern_action(struct ice_adapter *ad,
 
 	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
 		item_num++;
-		if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
-			tun_type = ICE_SW_TUN_VXLAN;
-		if (item->type == RTE_FLOW_ITEM_TYPE_NVGRE)
-			tun_type = ICE_SW_TUN_NVGRE;
 		if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
 			const struct rte_flow_item_eth *eth_mask;
 			if (item->mask)
-- 
2.19.1