From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Zhao1, Wei"
To: "dev@dpdk.org", "Peng, Yuan"
CC: "adrien.mazarguil@6wind.com", "stable@dpdk.org", "Lu, Wenzhuo", "Zhang, Qi Z"
Thread-Topic: [PATCH] net/ixgbe: enable x550 flexible byte filter
Date: Mon, 17 Dec 2018 06:19:57 +0000
References: <1545025982-2065-1-git-send-email-wei.zhao1@intel.com>
In-Reply-To: <1545025982-2065-1-git-send-email-wei.zhao1@intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH] net/ixgbe: enable x550 flexible byte filter
List-Id: DPDK patches and discussions

Adding yuan.peng@intel.com to the mail loop.

> -----Original Message-----
> From: Zhao1, Wei
> Sent: Monday, December 17, 2018 1:53 PM
> To: dev@dpdk.org
> Cc: adrien.mazarguil@6wind.com; stable@dpdk.org; Lu, Wenzhuo; Zhang, Qi Z; Zhao1, Wei
> Subject: [PATCH] net/ixgbe: enable x550 flexible byte filter
>
> There is a need for users to use the flexible byte filter on x550.
> This patch enables it.
>
> Fixes: 82fb702077f6 ("ixgbe: support new flow director modes for X550")
> Fixes: 11777435c727 ("net/ixgbe: parse flow director filter")
>
> Signed-off-by: Wei Zhao
> ---
>  drivers/net/ixgbe/ixgbe_fdir.c |   9 +-
>  drivers/net/ixgbe/ixgbe_flow.c | 274 ++++++++++++++++++++++++++++-------------
>  2 files changed, 195 insertions(+), 88 deletions(-)
>
> diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
> index e559f0f..deb9a21 100644
> --- a/drivers/net/ixgbe/ixgbe_fdir.c
> +++ b/drivers/net/ixgbe/ixgbe_fdir.c
> @@ -307,6 +307,8 @@ fdir_set_input_mask_82599(struct rte_eth_dev *dev)
>  	/* flex byte mask */
>  	if (info->mask.flex_bytes_mask == 0)
>  		fdirm |= IXGBE_FDIRM_FLEX;
> +	if (info->mask.src_ipv4_mask == 0 && info->mask.dst_ipv4_mask == 0)
> +		fdirm |= IXGBE_FDIRM_L3P;
>
>  	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
>
> @@ -356,8 +358,7 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
>  	/* mask VM pool and DIPv6 since there are currently not supported
>  	 * mask FLEX byte, it will be set in flex_conf
>  	 */
> -	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6 |
> -			IXGBE_FDIRM_FLEX;
> +	uint32_t fdirm = IXGBE_FDIRM_POOL | IXGBE_FDIRM_DIPv6;
>  	uint32_t fdiripv6m;
>  	enum rte_fdir_mode mode = dev->data->dev_conf.fdir_conf.mode;
>  	uint16_t mac_mask;
> @@ -385,6 +386,10 @@ fdir_set_input_mask_x550(struct rte_eth_dev *dev)
>  		return -EINVAL;
>  	}
>
> +	/* flex byte mask */
> +	if (info->mask.flex_bytes_mask == 0)
> +		fdirm |= IXGBE_FDIRM_FLEX;
> +
>  	IXGBE_WRITE_REG(hw, IXGBE_FDIRM, fdirm);
>
>  	fdiripv6m = ((u32)0xFFFFU << IXGBE_FDIRIP6M_DIPM_SHIFT);
> diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
> index f0fafeb..dc210c5 100644
> --- a/drivers/net/ixgbe/ixgbe_flow.c
> +++ b/drivers/net/ixgbe/ixgbe_flow.c
> @@ -1622,9 +1622,9 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  	const struct rte_flow_item_raw *raw_mask;
>  	const struct rte_flow_item_raw *raw_spec;
>  	uint8_t j;
> -
>  	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
>
> +
>  	if (!pattern) {
>  		rte_flow_error_set(error, EINVAL,
>  			RTE_FLOW_ERROR_TYPE_ITEM_NUM,
> @@ -1651,9 +1651,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  	 * value. So, we need not do anything for the not provided fields later.
>  	 */
>  	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
> -	rule->mask.vlan_tci_mask = 0;
> -	rule->mask.flex_bytes_mask = 0;
> +	memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
>
>  	/**
>  	 * The first not void item should be
> @@ -1665,7 +1663,8 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
>  	    item->type != RTE_FLOW_ITEM_TYPE_TCP &&
>  	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
> -	    item->type != RTE_FLOW_ITEM_TYPE_SCTP) {
> +	    item->type != RTE_FLOW_ITEM_TYPE_SCTP &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_RAW) {
>  		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  		rte_flow_error_set(error, EINVAL,
>  			RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2201,6 +2200,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  	}
>
>  	raw_mask = item->mask;
> +	rule->b_mask = TRUE;
>
>  	/* check mask */
>  	if (raw_mask->relative != 0x1 ||
> @@ -2217,6 +2217,7 @@ ixgbe_parse_fdir_filter_normal(struct rte_eth_dev *dev,
>  	}
>
>  	raw_spec = item->spec;
> +	rule->b_spec = TRUE;
>
>  	/* check spec */
>  	if (raw_spec->relative != 0 ||
> @@ -2323,6 +2324,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	const struct rte_flow_item_eth *eth_mask;
>  	const struct rte_flow_item_vlan *vlan_spec;
>  	const struct rte_flow_item_vlan *vlan_mask;
> +	const struct rte_flow_item_raw *raw_mask;
> +	const struct rte_flow_item_raw *raw_spec;
>  	uint32_t j;
>
>  	if (!pattern) {
> @@ -2351,8 +2354,7 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	 * value. So, we need not do anything for the not provided fields later.
>  	 */
>  	memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -	memset(&rule->mask, 0xFF, sizeof(struct ixgbe_hw_fdir_mask));
> -	rule->mask.vlan_tci_mask = 0;
> +	memset(&rule->mask, 0, sizeof(struct ixgbe_hw_fdir_mask));
>
>  	/**
>  	 * The first not void item should be
> @@ -2364,7 +2366,8 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	    item->type != RTE_FLOW_ITEM_TYPE_IPV6 &&
>  	    item->type != RTE_FLOW_ITEM_TYPE_UDP &&
>  	    item->type != RTE_FLOW_ITEM_TYPE_VXLAN &&
> -	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE) {
> +	    item->type != RTE_FLOW_ITEM_TYPE_NVGRE &&
> +	    item->type != RTE_FLOW_ITEM_TYPE_RAW) {
>  		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  		rte_flow_error_set(error, EINVAL,
>  			RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2520,6 +2523,18 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  				&rule->ixgbe_fdir.formatted.tni_vni),
>  				vxlan_spec->vni, RTE_DIM(vxlan_spec->vni));
>  		}
> +		/* check if the next not void item is MAC VLAN RAW or END */
> +		item = next_no_void_pattern(pattern, item);
> +		if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> +			item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> +			item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> +			item->type != RTE_FLOW_ITEM_TYPE_END){
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
>  	}
>
>  	/* Get the NVGRE info */
> @@ -2616,16 +2631,19 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  			rte_memcpy(&rule->ixgbe_fdir.formatted.tni_vni,
>  				nvgre_spec->tni, RTE_DIM(nvgre_spec->tni));
>  		}
> -	}
>
> -	/* check if the next not void item is MAC */
> -	item = next_no_void_pattern(pattern, item);
> -	if (item->type != RTE_FLOW_ITEM_TYPE_ETH) {
> -		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -		rte_flow_error_set(error, EINVAL,
> -			RTE_FLOW_ERROR_TYPE_ITEM,
> -			item, "Not supported by fdir filter");
> -		return -rte_errno;
> +		/* check if the next not void item is MAC VLAN RAW or END */
> +		item = next_no_void_pattern(pattern, item);
> +		if (item->type != RTE_FLOW_ITEM_TYPE_ETH &&
> +			item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> +			item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> +			item->type != RTE_FLOW_ITEM_TYPE_END){
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
>  	}
>
>  	/**
> @@ -2633,92 +2651,91 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  	 * others should be masked.
>  	 */
>
> -	if (!item->mask) {
> -		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -		rte_flow_error_set(error, EINVAL,
> -			RTE_FLOW_ERROR_TYPE_ITEM,
> -			item, "Not supported by fdir filter");
> -		return -rte_errno;
> -	}
> -	/*Not supported last point for range*/
> -	if (item->last) {
> -		rte_flow_error_set(error, EINVAL,
> -			RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> -			item, "Not supported last point for range");
> -		return -rte_errno;
> -	}
> -	rule->b_mask = TRUE;
> -	eth_mask = item->mask;
> -
> -	/* Ether type should be masked. */
> -	if (eth_mask->type) {
> -		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -		rte_flow_error_set(error, EINVAL,
> -			RTE_FLOW_ERROR_TYPE_ITEM,
> -			item, "Not supported by fdir filter");
> -		return -rte_errno;
> -	}
> -
> -	/* src MAC address should be masked. */
> -	for (j = 0; j < ETHER_ADDR_LEN; j++) {
> -		if (eth_mask->src.addr_bytes[j]) {
> -			memset(rule, 0,
> -				sizeof(struct ixgbe_fdir_rule));
> +	if (item->type == RTE_FLOW_ITEM_TYPE_ETH) {
> +		if (!item->mask) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  			rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Not supported by fdir filter");
>  			return -rte_errno;
>  		}
> -	}
> -	rule->mask.mac_addr_byte_mask = 0;
> -	for (j = 0; j < ETHER_ADDR_LEN; j++) {
> -		/* It's a per byte mask. */
> -		if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> -			rule->mask.mac_addr_byte_mask |= 0x1 << j;
> -		} else if (eth_mask->dst.addr_bytes[j]) {
> -			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +		/*Not supported last point for range*/
> +		if (item->last) {
>  			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				item, "Not supported last point for range");
> +			return -rte_errno;
> +		}
> +		rule->b_mask = TRUE;
> +		eth_mask = item->mask;
> +
> +		/* Ether type should be masked. */
> +		if (eth_mask->type) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_ITEM,
>  				item, "Not supported by fdir filter");
>  			return -rte_errno;
>  		}
> -	}
>
> -	/* When no vlan, considered as full mask. */
> -	rule->mask.vlan_tci_mask = rte_cpu_to_be_16(0xEFFF);
> -
> -	if (item->spec) {
> -		rule->b_spec = TRUE;
> -		eth_spec = item->spec;
> -
> -		/* Get the dst MAC. */
> +		/* src MAC address should be masked. */
>  		for (j = 0; j < ETHER_ADDR_LEN; j++) {
> -			rule->ixgbe_fdir.formatted.inner_mac[j] =
> -				eth_spec->dst.addr_bytes[j];
> +			if (eth_mask->src.addr_bytes[j]) {
> +				memset(rule, 0,
> +					sizeof(struct ixgbe_fdir_rule));
> +				rte_flow_error_set(error, EINVAL,
> +					RTE_FLOW_ERROR_TYPE_ITEM,
> +					item, "Not supported by fdir filter");
> +				return -rte_errno;
> +			}
> +		}
> +		for (j = 0; j < ETHER_ADDR_LEN; j++) {
> +			/* It's a per byte mask. */
> +			if (eth_mask->dst.addr_bytes[j] == 0xFF) {
> +				rule->mask.mac_addr_byte_mask |= 0x1 << j;
> +			} else if (eth_mask->dst.addr_bytes[j]) {
> +				memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +				rte_flow_error_set(error, EINVAL,
> +					RTE_FLOW_ERROR_TYPE_ITEM,
> +					item, "Not supported by fdir filter");
> +				return -rte_errno;
> +			}
>  		}
> -	}
>
> -	/**
> -	 * Check if the next not void item is vlan or ipv4.
> -	 * IPv6 is not supported.
> -	 */
> -	item = next_no_void_pattern(pattern, item);
> -	if ((item->type != RTE_FLOW_ITEM_TYPE_VLAN) &&
> -		(item->type != RTE_FLOW_ITEM_TYPE_IPV4)) {
> -		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> -		rte_flow_error_set(error, EINVAL,
> -			RTE_FLOW_ERROR_TYPE_ITEM,
> -			item, "Not supported by fdir filter");
> -		return -rte_errno;
> -	}
> -	/*Not supported last point for range*/
> -	if (item->last) {
> -		rte_flow_error_set(error, EINVAL,
> +		if (item->spec) {
> +			rule->b_spec = TRUE;
> +			eth_spec = item->spec;
> +
> +			/* Get the dst MAC. */
> +			for (j = 0; j < ETHER_ADDR_LEN; j++) {
> +				rule->ixgbe_fdir.formatted.inner_mac[j] =
> +					eth_spec->dst.addr_bytes[j];
> +			}
> +		}
> +		/**
> +		 * Check if the next not void item is vlan or ipv4.
> +		 * IPv6 is not supported.
> +		 */
> +		item = next_no_void_pattern(pattern, item);
> +		if (item->type != RTE_FLOW_ITEM_TYPE_VLAN &&
> +			item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> +			item->type != RTE_FLOW_ITEM_TYPE_END) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +		/*Not supported last point for range*/
> +		if (item->last) {
> +			rte_flow_error_set(error, EINVAL,
>  				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
>  				item, "Not supported last point for range");
> -		return -rte_errno;
> +			return -rte_errno;
> +		}
>  	}
>
> +
>  	if (item->type == RTE_FLOW_ITEM_TYPE_VLAN) {
>  		if (!(item->spec && item->mask)) {
>  			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> @@ -2736,10 +2753,90 @@ ixgbe_parse_fdir_filter_tunnel(const struct rte_flow_attr *attr,
>  		rule->mask.vlan_tci_mask = vlan_mask->tci;
>  		rule->mask.vlan_tci_mask &= rte_cpu_to_be_16(0xEFFF);
>  		/* More than one tags are not supported. */
> +		item = next_no_void_pattern(pattern, item);
> +		if (item->type != RTE_FLOW_ITEM_TYPE_RAW &&
> +			item->type != RTE_FLOW_ITEM_TYPE_END) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +	}
> +
> +	/* Get the flex byte info */
> +	if (item->type == RTE_FLOW_ITEM_TYPE_RAW) {
> +		/* Not supported last point for range*/
> +		if (item->last) {
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> +				item, "Not supported last point for range");
> +			return -rte_errno;
> +		}
> +		/* mask should not be null */
> +		if (!item->mask || !item->spec) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +
> +		raw_mask = item->mask;
> +		rule->b_mask = TRUE;
>
> +		/* check mask */
> +		if (raw_mask->relative != 0x1 ||
> +			raw_mask->search != 0x1 ||
> +			raw_mask->reserved != 0x0 ||
> +			(uint32_t)raw_mask->offset != 0xffffffff ||
> +			raw_mask->limit != 0xffff ||
> +			raw_mask->length != 0xffff) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +
> +		raw_spec = item->spec;
> +		rule->b_spec = TRUE;
> +
> +		/* check spec */
> +		if (raw_spec->relative != 0 ||
> +			raw_spec->search != 0 ||
> +			raw_spec->reserved != 0 ||
> +			raw_spec->offset > IXGBE_MAX_FLX_SOURCE_OFF ||
> +			raw_spec->offset % 2 ||
> +			raw_spec->limit != 0 ||
> +			raw_spec->length != 2 ||
> +			/* pattern can't be 0xffff */
> +			(raw_spec->pattern[0] == 0xff &&
> +			 raw_spec->pattern[1] == 0xff)) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +
> +		/* check pattern mask */
> +		if (raw_mask->pattern[0] != 0xff ||
> +			raw_mask->pattern[1] != 0xff) {
> +			memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
> +			rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM,
> +				item, "Not supported by fdir filter");
> +			return -rte_errno;
> +		}
> +
> +		rule->mask.flex_bytes_mask = 0xffff;
> +		rule->ixgbe_fdir.formatted.flex_bytes =
> +			(((uint16_t)raw_spec->pattern[1]) << 8) |
> +			raw_spec->pattern[0];
> +		rule->flex_bytes_offset = raw_spec->offset;
>  	/* check if the next not void item is END */
>  	item = next_no_void_pattern(pattern, item);
> -
>  	if (item->type != RTE_FLOW_ITEM_TYPE_END) {
>  		memset(rule, 0, sizeof(struct ixgbe_fdir_rule));
>  		rte_flow_error_set(error, EINVAL,
> @@ -2776,12 +2873,17 @@ ixgbe_parse_fdir_filter(struct rte_eth_dev *dev,
>  		hw->mac.type != ixgbe_mac_X550EM_a)
>  		return -ENOTSUP;
>
> +	if (fdir_mode == RTE_FDIR_MODE_PERFECT_TUNNEL)
> +		goto tunnel_filter;
> +
>  	ret = ixgbe_parse_fdir_filter_normal(dev, attr, pattern,
>  					actions, rule, error);
>
>  	if (!ret)
>  		goto step_next;
>
> +tunnel_filter:
> +
>  	ret = ixgbe_parse_fdir_filter_tunnel(attr, pattern,
>  					actions, rule, error);
>
> --
> 2.7.5