From: "Zhao1, Wei"
To: "Zhang, Qi Z", dev@dpdk.org
Cc: adrien.mazarguil@6wind.com, stable@dpdk.org, "Lu, Wenzhuo"
Date: Tue, 25 Dec 2018 02:04:49 +0000
Subject: Re: [dpdk-dev] [PATCH] net/ixgbe: enable x550 flexible byte filter

Hi Qi,

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Monday, December 24, 2018 8:13 PM
> To: Zhao1, Wei; dev@dpdk.org
> Cc: adrien.mazarguil@6wind.com; stable@dpdk.org; Lu, Wenzhuo
> Subject: RE: [PATCH] net/ixgbe: enable x550 flexible byte filter
>
> Hi Wei:
>
> > -----Original Message-----
> > From: Zhao1, Wei
> > Sent: Monday, December 17, 2018 1:53 PM
> > To: dev@dpdk.org
> > Cc: adrien.mazarguil@6wind.com; stable@dpdk.org; Lu, Wenzhuo;
> >     Zhang, Qi Z; Zhao1, Wei
> > Subject: [PATCH] net/ixgbe: enable x550 flexible byte filter
> >
> > There is a need for users to use the flexible byte filter on x550.
> > This patch enables it.
>
> It's difficult for me to review a large patch without a clear
> understanding of its purpose. From the description here, my
> understanding is that you are just enabling an existing feature for a
> specific device ID. But why does the patch contain so many changes?
> Could you explain the gap in more detail?

It is because the ixgbe flow parser code does not support the tunnel-mode
flexible byte filter for x550, so I have to enable it in
ixgbe_parse_fdir_filter_tunnel().

> BTW, it's better to separate the patch into two patches, one for the
> fdir layer and one for the rte_flow layer.

Yes, I will split it in v2. I will also add some code to the flow CLI to
parse hex numbers, so that the flex byte pattern can be given as hex
bytes instead of an ASCII string.
>
> Thanks
> Qi

> >
> > Fixes: 82fb702077f6 ("ixgbe: support new flow director modes for X550")
> > Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")
> >
> > Signed-off-by: Wei Zhao
> > ---
> >  drivers/net/ixgbe/ixgbe_fdir.c |   9 +-
> >  drivers/net/ixgbe/ixgbe_flow.c | 274 ++++++++++++++++++++++++++++-------------
> >  2 files changed, 195 insertions(+), 88 deletions(-)
> >
> > diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
> > index e559f0f..deb9a21 100644
> > --- a/drivers/net/ixgbe/ixgbe_fdir.c
> > +++ b/drivers/net/ixgbe/ixgbe_fdir.c
> > @@ -307,6 +307,8 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
> >      /* flex byte mask */
> >      if (info->mask.flex_bytes_mask == 0)
> >          fdirm |= IXGBE_FDIRM_FLEX;
> > +    if (info->mask.src_ipv4_mask == 0 && info->mask.dst_ipv4_mask == 0)
> > +        fdirm |= IXGBE_FDIRM_L3P;
> >
> >      IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
> >
> > @@ -356,8 +358,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
> >      /* mask VM pool and DIPv6 since there are currently not supported
> >       * mask FLEX byte, it will be set in flex_conf
> >       */
> > -    uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
> > -             IXGBE_FDIRM_FLEX;
> > +    uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6;
> >      uint32_t fdiripv6m;
> >      enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
> >      uint16_t mac_mask;
> > @@ -385,6 +386,10 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
> >          return -EINVAL;
> >      }
> >
> > +    /* flex byte mask */
> > +    if (info->mask.flex_bytes_mask == 0)
> > +        fdirm |= IXGBE_FDIRM_FLEX;
> > +
> >      IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
> >
> >      fdiripv6m = ((u32)0xFFFFU << IXGBE_FDIRIP6M_DIPM_SHIFT);
> > diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> > index f0fafeb..dc210c5 100644
> > --- a/drivers/net/ixgbe/ixgbe_flow.c
> > +++ b/drivers/net/ixgbe/ixgbe_flow.c
> > @@ -1622,9 +1622,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
> >      const struct rte_flow_item_raw *raw_mask;
> >      const struct rte_flow_item_raw *raw_spec;
> >      uint8_t j;
> > -
> >      struct ixgbe_hw *hw =
> >          IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
> >
> > +
> >      if (!pattern) {
> >          rte_flow_error_set(error, EINVAL,
> >              RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> > @@ -1651,9 +1651,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
> >       * value. So, we need not do anything for the not provided fields later.
> >       */
> >      memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -    memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
> > -    rule->mask.vlan_tci_mask = 0;
> > -    rule->mask.flex_bytes_mask = 0;
> > +    memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
> >
> >      /**
> >       * The first not void item should be
> > @@ -1665,7 +1663,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
> >          item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> >          item->type != RTE_FLOW_ITEM_TYPE_TCP &&
> >          item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> > -        item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
> > +        item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
> > +        item->type != RTE_FLOW_ITEM_TYPE_RAW) {
> >          memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> >          rte_flow_error_set(error, EINVAL,
> >              RTE_FLOW_ERROR_TYPE_ITEM,
> > @@ -2201,6 +2200,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
> >      }
> >
> >      raw_mask = item->mask;
> > +    rule->b_mask = TRUE;
> >
> >      /* check mask */
> >      if (raw_mask->relative != 0x1 ||
> > @@ -2217,6 +2217,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
> >      }
> >
> >      raw_spec = item->spec;
> > +    rule->b_spec = TRUE;
> >
> >      /* check spec */
> >      if (raw_spec->relative != 0 ||
> > @@ -2323,6 +2324,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >      const struct rte_flow_item_eth *eth_mask;
> >      const struct rte_flow_item_vlan *vlan_spec;
> >      const struct rte_flow_item_vlan *vlan_mask;
> > +    const struct rte_flow_item_raw *raw_mask;
> > +    const struct rte_flow_item_raw *raw_spec;
> >      uint32_t j;
> >
> >      if (!pattern) {
> > @@ -2351,8 +2354,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >       * value. So, we need not do anything for the not provided fields later.
> >       */
> >      memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -    memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
> > -    rule->mask.vlan_tci_mask = 0;
> > +    memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
> >
> >      /**
> >       * The first not void item should be
> > @@ -2364,7 +2366,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >          item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
> >          item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> >          item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> > -        item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> > +        item->type != RTE_FLOW_ITEM_TYPE_NVGRE &&
> > +        item->type != RTE_FLOW_ITEM_TYPE_RAW) {
> >          memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> >          rte_flow_error_set(error, EINVAL,
> >              RTE_FLOW_ERROR_TYPE_ITEM,
> > @@ -2520,6 +2523,18 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >                  &rule->ixgbe_fdir.formatted.tni_vni),
> >                  vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
> >          }
> > +        /* check if the next not void item is MAC, VLAN, RAW or END */
> > +        item = next_no_void_pattern(pattern, item);
> > +        if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_END) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> >      }
> >
> >      /* Get the NVGRE info */
> > @@ -2616,16 +2631,19 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >              rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
> >                  nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
> >          }
> > -    }
> >
> > -    /* check if the next not void item is MAC */
> > -    item = next_no_void_pattern(pattern, item);
> > -    if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
> > -        memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_ITEM,
> > -            item, "Not supported by fdir filter");
> > -        return -rte_errno;
> > +        /* check if the next not void item is MAC, VLAN, RAW or END */
> > +        item = next_no_void_pattern(pattern, item);
> > +        if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_END) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> >      }
> >
> >      /**
> > @@ -2633,92 +2651,91 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >       * others should be masked.
> >       */
> >
> > -    if (!item->mask) {
> > -        memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_ITEM,
> > -            item, "Not supported by fdir filter");
> > -        return -rte_errno;
> > -    }
> > -    /* Not supported last point for range */
> > -    if (item->last) {
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > -            item, "Not supported last point for range");
> > -        return -rte_errno;
> > -    }
> > -    rule->b_mask = TRUE;
> > -    eth_mask = item->mask;
> > -
> > -    /* Ether type should be masked. */
> > -    if (eth_mask->type) {
> > -        memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_ITEM,
> > -            item, "Not supported by fdir filter");
> > -        return -rte_errno;
> > -    }
> > -
> > -    /* src MAC address should be masked. */
> > -    for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > -        if (eth_mask->src.addr_bytes[j]) {
> > -            memset(rule, 0,
> > -                   sizeof(struct ixgbe_fdir_rule));
> > -            rte_flow_error_set(error, EINVAL,
> > -                RTE_FLOW_ERROR_TYPE_ITEM,
> > -                item, "Not supported by fdir filter");
> > -            return -rte_errno;
> > -        }
> > -    }
> > -    rule->mask.mac_addr_byte_mask = 0;
> > -    for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > -        /* It's a per byte mask. */
> > -        if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> > -            rule->mask.mac_addr_byte_mask |= 0x1 << j;
> > -        } else if (eth_mask->dst.addr_bytes[j]) {
> > -            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -            rte_flow_error_set(error, EINVAL,
> > -                RTE_FLOW_ERROR_TYPE_ITEM,
> > -                item, "Not supported by fdir filter");
> > -            return -rte_errno;
> > -        }
> > -    }
> > -
> > -    /* When no vlan, considered as full mask. */
> > -    rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
> > -
> > -    if (item->spec) {
> > -        rule->b_spec = TRUE;
> > -        eth_spec = item->spec;
> > -
> > -        /* Get the dst MAC. */
> > -        for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > -            rule->ixgbe_fdir.formatted.inner_mac[j] =
> > -                eth_spec->dst.addr_bytes[j];
> > -        }
> > -    }
> > -
> > -    /**
> > -     * Check if the next not void item is vlan or ipv4.
> > -     * IPv6 is not supported.
> > -     */
> > -    item = next_no_void_pattern(pattern, item);
> > -    if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
> > -        (item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
> > -        memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_ITEM,
> > -            item, "Not supported by fdir filter");
> > -        return -rte_errno;
> > -    }
> > -    /* Not supported last point for range */
> > -    if (item->last) {
> > -        rte_flow_error_set(error, EINVAL,
> > -            RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > -            item, "Not supported last point for range");
> > -        return -rte_errno;
> > -    }
> > +    if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> > +        if (!item->mask) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +        /* Not supported last point for range */
> > +        if (item->last) {
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +                item, "Not supported last point for range");
> > +            return -rte_errno;
> > +        }
> > +        rule->b_mask = TRUE;
> > +        eth_mask = item->mask;
> > +
> > +        /* Ether type should be masked. */
> > +        if (eth_mask->type) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +
> > +        /* src MAC address should be masked. */
> > +        for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > +            if (eth_mask->src.addr_bytes[j]) {
> > +                memset(rule, 0,
> > +                       sizeof(struct ixgbe_fdir_rule));
> > +                rte_flow_error_set(error, EINVAL,
> > +                    RTE_FLOW_ERROR_TYPE_ITEM,
> > +                    item, "Not supported by fdir filter");
> > +                return -rte_errno;
> > +            }
> > +        }
> > +        for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > +            /* It's a per byte mask. */
> > +            if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> > +                rule->mask.mac_addr_byte_mask |= 0x1 << j;
> > +            } else if (eth_mask->dst.addr_bytes[j]) {
> > +                memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +                rte_flow_error_set(error, EINVAL,
> > +                    RTE_FLOW_ERROR_TYPE_ITEM,
> > +                    item, "Not supported by fdir filter");
> > +                return -rte_errno;
> > +            }
> > +        }
> > +
> > +        if (item->spec) {
> > +            rule->b_spec = TRUE;
> > +            eth_spec = item->spec;
> > +
> > +            /* Get the dst MAC. */
> > +            for (j = 0; j < ETHER_ADDR_LEN; j++) {
> > +                rule->ixgbe_fdir.formatted.inner_mac[j] =
> > +                    eth_spec->dst.addr_bytes[j];
> > +            }
> > +        }
> > +        /**
> > +         * Check if the next not void item is vlan or ipv4.
> > +         * IPv6 is not supported.
> > +         */
> > +        item = next_no_void_pattern(pattern, item);
> > +        if (item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_END) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +        /* Not supported last point for range */
> > +        if (item->last) {
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +                item, "Not supported last point for range");
> > +            return -rte_errno;
> > +        }
> >      }
> >
> > +
> >      if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
> >          if (!(item->spec && item->mask)) {
> >              memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > @@ -2736,10 +2753,90 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
> >          rule->mask.vlan_tci_mask = vlan_mask->tci;
> >          rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
> >          /* More than one tags are not supported. */
> > +        item = next_no_void_pattern(pattern, item);
> > +        if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> > +            item->type != RTE_FLOW_ITEM_TYPE_END) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +    }
> > +
> > +    /* Get the flex byte info */
> > +    if (item->type == RTE_FLOW_ITEM_TYPE_RAW) {
> > +        /* Not supported last point for range */
> > +        if (item->last) {
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > +                item, "Not supported last point for range");
> > +            return -rte_errno;
> > +        }
> > +        /* mask should not be null */
> > +        if (!item->mask || !item->spec) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +
> > +        raw_mask = item->mask;
> > +        rule->b_mask = TRUE;
> >
> > +        /* check mask */
> > +        if (raw_mask->relative != 0x1 ||
> > +            raw_mask->search != 0x1 ||
> > +            raw_mask->reserved != 0x0 ||
> > +            (uint32_t)raw_mask->offset != 0xffffffff ||
> > +            raw_mask->limit != 0xffff ||
> > +            raw_mask->length != 0xffff) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +
> > +        raw_spec = item->spec;
> > +        rule->b_spec = TRUE;
> > +
> > +        /* check spec */
> > +        if (raw_spec->relative != 0 ||
> > +            raw_spec->search != 0 ||
> > +            raw_spec->reserved != 0 ||
> > +            raw_spec->offset > IXGBE_MAX_FLX_SOURCE_OFF ||
> > +            raw_spec->offset % 2 ||
> > +            raw_spec->limit != 0 ||
> > +            raw_spec->length != 2 ||
> > +            /* pattern can't be 0xffff */
> > +            (raw_spec->pattern[0] == 0xff &&
> > +             raw_spec->pattern[1] == 0xff)) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +
> > +        /* check pattern mask */
> > +        if (raw_mask->pattern[0] != 0xff ||
> > +            raw_mask->pattern[1] != 0xff) {
> > +            memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> > +            rte_flow_error_set(error, EINVAL,
> > +                RTE_FLOW_ERROR_TYPE_ITEM,
> > +                item, "Not supported by fdir filter");
> > +            return -rte_errno;
> > +        }
> > +
> > +        rule->mask.flex_bytes_mask = 0xffff;
> > +        rule->ixgbe_fdir.formatted.flex_bytes =
> > +            (((uint16_t)raw_spec->pattern[1]) << 8) |
> > +            raw_spec->pattern[0];
> > +        rule->flex_bytes_offset = raw_spec->offset;
> >          /* check if the next not void item is END */
> >          item = next_no_void_pattern(pattern, item);
> > -
> >          if (item->type != RTE_FLOW_ITEM_TYPE_END) {
> >              memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> >              rte_flow_error_set(error, EINVAL,
> > @@ -2776,12 +2873,17 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
> >          hw->mac.type != ixgbe_mac_X550EM_a)
> >          return -ENOTSUP;
> >
> > +    if (fdir_mode == RTE_FDIR_MODE_PERFECT_TUNNEL)
> > +        goto tunnel_filter;
> > +
> >      ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern,
> >                      actions, rule, error);
> >
> >      if (!ret)
> >          goto step_next;
> >
> > +tunnel_filter:
> > +
> >      ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
> >                      actions, rule, error);
> >
> > --
> > 2.7.5