From: "Guo, Jia" <jia.guo@intel.com>
To: "Xu, Ting", "orika@nvidia.com", "Zhang, Qi Z", "Xing, Beilei", "Li, Xiaoyun", "Wu, Jingjing", "Guo, Junfeng"
Cc: "dev@dpdk.org"
Date: Tue, 13 Apr 2021 01:57:01 +0000
Subject: Re: [dpdk-dev] [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
References: <20210324134844.60410-1-jia.guo@intel.com> <20210411060144.72326-1-jia.guo@intel.com> <20210411060144.72326-4-jia.guo@intel.com>

Hi, Ting

> -----Original Message-----
> From: Xu, Ting
> Sent: Monday, April 12, 2021 4:45 PM
> To: Guo, Jia; orika@nvidia.com; Zhang, Qi Z; Xing, Beilei; Li, Xiaoyun; Wu, Jingjing; Guo, Junfeng
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
>
> Hi, Jeff
>
> Best Regards,
> Xu Ting
>
> > -----Original Message-----
> > From: Guo, Jia
> > Sent: Sunday, April 11, 2021 2:02 PM
> > To: orika@nvidia.com; Zhang, Qi Z; Xing, Beilei; Li, Xiaoyun; Wu, Jingjing
> > Cc: dev@dpdk.org; Xu, Ting; Guo, Jia
> > Subject: [PATCH v3 4/4] net/iavf: support FDIR for IP fragment packet
> >
> > New FDIR parsing is added to handle fragmented IPv4/IPv6 packets.
> >
> > Signed-off-by: Ting Xu
> > Signed-off-by: Jeff Guo
> > ---
> >  drivers/net/iavf/iavf_fdir.c         | 376 ++++++++++++++++++---------
> >  drivers/net/iavf/iavf_generic_flow.h |   5 +
> >  2 files changed, 257 insertions(+), 124 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf_fdir.c b/drivers/net/iavf/iavf_fdir.c
> > index 62f032985a..64c169f8c4 100644
> > --- a/drivers/net/iavf/iavf_fdir.c
> > +++ b/drivers/net/iavf/iavf_fdir.c
> > @@ -34,7 +34,7 @@
> >  #define IAVF_FDIR_INSET_ETH_IPV4 (\
> >  	IAVF_INSET_IPV4_SRC | IAVF_INSET_IPV4_DST | \
> >  	IAVF_INSET_IPV4_PROTO | IAVF_INSET_IPV4_TOS | \
> > -	IAVF_INSET_IPV4_TTL)
> > +	IAVF_INSET_IPV4_TTL | IAVF_INSET_IPV4_ID)
> >
>
> Skip...
>
> > +		if (ipv4_mask->hdr.version_ihl ||
> > +		    ipv4_mask->hdr.total_length ||
> > +		    ipv4_mask->hdr.hdr_checksum) {
> > +			rte_flow_error_set(error, EINVAL,
> > +					   RTE_FLOW_ERROR_TYPE_ITEM,
> > +					   item, "Invalid IPv4 mask.");
> > +			return -rte_errno;
> > +		}
> >
> > -		if (tun_inner) {
> > -			input_set &= ~IAVF_PROT_IPV4_OUTER;
> > -			input_set |= IAVF_PROT_IPV4_INNER;
> > -		}
>
> This part "tun_inner" is newly added and needed for GTPU inner, cannot be
> deleted.
>

Oh, absolutely it should not be deleted. I will correct it in the coming version. Thanks.
> > +			if (ipv4_last &&
> > +			    (ipv4_last->hdr.version_ihl ||
> > +			     ipv4_last->hdr.type_of_service ||
> > +			     ipv4_last->hdr.time_to_live ||
> > +			     ipv4_last->hdr.total_length ||
> > +			     ipv4_last->hdr.next_proto_id ||
> > +			     ipv4_last->hdr.hdr_checksum ||
> > +			     ipv4_last->hdr.src_addr ||
> > +			     ipv4_last->hdr.dst_addr)) {
> > +				rte_flow_error_set(error, EINVAL,
> > +						   RTE_FLOW_ERROR_TYPE_ITEM,
> > +						   item, "Invalid IPv4 last.");
> > +				return -rte_errno;
> > +			}
> >
> > -			rte_memcpy(hdr->buffer,
> > -				   &ipv4_spec->hdr,
> > -				   sizeof(ipv4_spec->hdr));
> > +			if (ipv4_mask->hdr.type_of_service == UINT8_MAX) {
> > +				input_set |= IAVF_INSET_IPV4_TOS;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DSCP);
> > +			}
> > +
> > +			if (ipv4_mask->hdr.next_proto_id == UINT8_MAX) {
> > +				input_set |= IAVF_INSET_IPV4_PROTO;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, PROT);
> > +			}
> > +
> > +			if (ipv4_mask->hdr.time_to_live == UINT8_MAX) {
> > +				input_set |= IAVF_INSET_IPV4_TTL;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, TTL);
> > +			}
> > +
> > +			if (ipv4_mask->hdr.src_addr == UINT32_MAX) {
> > +				input_set |= IAVF_INSET_IPV4_SRC;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, SRC);
> > +			}
> > +
> > +			if (ipv4_mask->hdr.dst_addr == UINT32_MAX) {
> > +				input_set |= IAVF_INSET_IPV4_DST;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV4, DST);
> > +			}
> > +
> > +			rte_memcpy(hdr->buffer, &ipv4_spec->hdr,
> > +				   sizeof(ipv4_spec->hdr));
> > +
> > +			hdrs->count = ++layer;
> > +
> > +			/* only support any packet id for fragment IPv4
> > +			 * any packet_id:
> > +			 * spec is 0, last is 0xffff, mask is 0xffff
> > +			 */
> > +			if (ipv4_last &&
> > +			    ipv4_spec->hdr.packet_id == 0 &&
> > +			    ipv4_last->hdr.packet_id == UINT16_MAX &&
> > +			    ipv4_mask->hdr.packet_id == UINT16_MAX &&
> > +			    ipv4_mask->hdr.fragment_offset == UINT16_MAX) {
> > +				/* all IPv4 fragment packet has the same
> > +				 * ethertype, if the spec is for all valid
> > +				 * packet id, set
ethertype into input set.
> > +				 */
> > +				input_set |= IAVF_INSET_ETHERTYPE;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr1, ETH, ETHERTYPE);
> > +
> > +				/* add dummy header for IPv4 Fragment */
> > +				iavf_fdir_add_fragment_hdr(hdrs, layer);
> > +			} else if (ipv4_mask->hdr.packet_id == UINT16_MAX) {
> > +				rte_flow_error_set(error, EINVAL,
> > +						   RTE_FLOW_ERROR_TYPE_ITEM,
> > +						   item, "Invalid IPv4 mask.");
> > +				return -rte_errno;
> >  			}
> >
> > -			filter->add_fltr.rule_cfg.proto_hdrs.count = ++layer;
> >  			break;
> >
> >  		case RTE_FLOW_ITEM_TYPE_IPV6:
> > @@ -707,63 +787,109 @@ iavf_fdir_parse_pattern(__rte_unused struct iavf_adapter *ad,
> >  			ipv6_spec = item->spec;
> >  			ipv6_mask = item->mask;
> >
> > -			hdr = &filter->add_fltr.rule_cfg.proto_hdrs.proto_hdr[layer];
> > +			hdr = &hdrs->proto_hdr[layer];
> >
> >  			VIRTCHNL_SET_PROTO_HDR_TYPE(hdr, IPV6);
> >
> > -			if (ipv6_spec && ipv6_mask) {
> > -				if (ipv6_mask->hdr.payload_len) {
> > -					rte_flow_error_set(error, EINVAL,
> > -							   RTE_FLOW_ERROR_TYPE_ITEM,
> > -							   item, "Invalid IPv6 mask");
> > -					return -rte_errno;
> > -				}
> > +			if (!(ipv6_spec && ipv6_mask)) {
> > +				hdrs->count = ++layer;
> > +				break;
> > +			}
> >
> > -				if ((ipv6_mask->hdr.vtc_flow &
> > -				     rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> > -				    == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> > -					input_set |= IAVF_INSET_IPV6_TC;
> > -					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
> > -				}
> > -				if (ipv6_mask->hdr.proto == UINT8_MAX) {
> > -					input_set |= IAVF_INSET_IPV6_NEXT_HDR;
> > -					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
> > -				}
> > -				if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> > -					input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
> > -					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
> > -				}
> > -				if (!memcmp(ipv6_mask->hdr.src_addr,
> > -					    ipv6_addr_mask,
> > -					    RTE_DIM(ipv6_mask->hdr.src_addr))) {
> > -					input_set |= IAVF_INSET_IPV6_SRC;
> > -					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
> > -				}
> > -				if (!memcmp(ipv6_mask->hdr.dst_addr,
> > -					    ipv6_addr_mask,
> > -					    RTE_DIM(ipv6_mask->hdr.dst_addr))) {
> > -					input_set |= IAVF_INSET_IPV6_DST;
> > -					VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
> > -				}
> > +			if (ipv6_mask->hdr.payload_len) {
> > +				rte_flow_error_set(error, EINVAL,
> > +						   RTE_FLOW_ERROR_TYPE_ITEM,
> > +						   item, "Invalid IPv6 mask");
> > +				return -rte_errno;
> > +			}
> >
> > -				if (tun_inner) {
> > -					input_set &= ~IAVF_PROT_IPV6_OUTER;
> > -					input_set |= IAVF_PROT_IPV6_INNER;
> > -				}
>
> The same as ipv4.
>
> > +			if ((ipv6_mask->hdr.vtc_flow &
> > +			     rte_cpu_to_be_32(IAVF_IPV6_TC_MASK))
> > +			    == rte_cpu_to_be_32(IAVF_IPV6_TC_MASK)) {
> > +				input_set |= IAVF_INSET_IPV6_TC;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, TC);
> > +			}
> >
> > -			rte_memcpy(hdr->buffer,
> > -				   &ipv6_spec->hdr,
> > -				   sizeof(ipv6_spec->hdr));
> > +			if (ipv6_mask->hdr.proto == UINT8_MAX) {
> > +				input_set |= IAVF_INSET_IPV6_NEXT_HDR;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, PROT);
> > +			}
> > +
> > +			if (ipv6_mask->hdr.hop_limits == UINT8_MAX) {
> > +				input_set |= IAVF_INSET_IPV6_HOP_LIMIT;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, HOP_LIMIT);
> > +			}
> > +
> > +			if (!memcmp(ipv6_mask->hdr.src_addr, ipv6_addr_mask,
> > +				    RTE_DIM(ipv6_mask->hdr.src_addr))) {
> > +				input_set |= IAVF_INSET_IPV6_SRC;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, SRC);
> > +			}
> > +			if (!memcmp(ipv6_mask->hdr.dst_addr, ipv6_addr_mask,
> > +				    RTE_DIM(ipv6_mask->hdr.dst_addr))) {
> > +				input_set |= IAVF_INSET_IPV6_DST;
> > +				VIRTCHNL_ADD_PROTO_HDR_FIELD_BIT(hdr, IPV6, DST);
> > +			}
> > +
> > +			rte_memcpy(hdr->buffer, &ipv6_spec->hdr,
> > +				   sizeof(ipv6_spec->hdr));
> > +
> > +			hdrs->count = ++layer;
> > +			break;
> > +
>
> Skip...
>
> > @@ -84,6 +85,8 @@
> >  	(IAVF_PROT_IPV4_OUTER | IAVF_IP_PROTO)
> >  #define IAVF_INSET_IPV4_TTL \
> >  	(IAVF_PROT_IPV4_OUTER | IAVF_IP_TTL)
> > +#define IAVF_INSET_IPV4_ID \
> > +	(IAVF_PROT_IPV4_OUTER | IAVF_IP_PKID)
> >  #define IAVF_INSET_IPV6_SRC \
> >  	(IAVF_PROT_IPV6_OUTER | IAVF_IP_SRC)
> >  #define IAVF_INSET_IPV6_DST \
> > @@ -94,6 +97,8 @@
> >  	(IAVF_PROT_IPV6_OUTER | IAVF_IP_TTL)
> >  #define IAVF_INSET_IPV6_TC \
> >  	(IAVF_PROT_IPV6_OUTER | IAVF_IP_TOS)
> > +#define IAVF_INSET_IPV6_ID \
> > +	(IAVF_PROT_IPV6_OUTER | IAVF_IP_PKID)
> >
> >  #define IAVF_INSET_TUN_IPV4_SRC \
> >  	(IAVF_PROT_IPV4_INNER | IAVF_IP_SRC)
> > --
> > 2.20.1