From: Yongseok Koh <yskoh@mellanox.com>
To: Slava Ovsiienko <viacheslavo@mellanox.com>
CC: Shahaf Shuler, "dev@dpdk.org"
Date: Thu, 1 Nov 2018 20:49:12 +0000
Message-ID: <20181101204905.GG6118@mtidpdk.mti.labs.mlnx>
In-Reply-To: <1541074741-41368-7-git-send-email-viacheslavo@mellanox.com>
References: <1539612815-47199-1-git-send-email-viacheslavo@mellanox.com> <1541074741-41368-1-git-send-email-viacheslavo@mellanox.com> <1541074741-41368-7-git-send-email-viacheslavo@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v3 06/13] net/mlx5: add e-switch VXLAN support to validation routine
On Thu, Nov 01, 2018 at 05:19:27AM -0700, Slava Ovsiienko wrote:
> This patch adds VXLAN support for flow item/action lists validation.
> The following entities are now supported:
>
> - RTE_FLOW_ITEM_TYPE_VXLAN, contains the tunnel VNI
>
> - RTE_FLOW_ACTION_TYPE_VXLAN_DECAP, if this action is specified
>   the items in the flow items list are treated as outer network
>   parameters for tunnel outer header match. The ethernet layer
>   addresses are always treated as inner ones.
>
> - RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, contains the item list to
>   build the encapsulation header. In the current implementation the
>   values are subject to some constraints:
>     - the outer source MAC address is always unconditionally
>       set to one of the MAC addresses of the outer egress interface
>     - there is no way to specify the source UDP port
>     - all abovementioned parameters are ignored if specified
>       in the rule; warning messages are sent to the log
>
> Minimal tunneling support is also added. If the VXLAN decapsulation
> action is specified, an ETH item can follow the VXLAN VNI item;
> the content of this ETH item is treated as the inner MAC addresses
> and type. The outer ETH item for the VXLAN decapsulation action
> is always ignored.
>
> Suggested-by: Adrien Mazarguil
> Signed-off-by: Viacheslav Ovsiienko
> ---

Overall, it looks good, but please make some cosmetic changes; refer to my comments below. When you send out v4 with the changes, please add my acked-by tag.
> drivers/net/mlx5/mlx5_flow_tcf.c | 741 ++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 739 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index 50f3bd1..7e00232 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -1116,6 +1116,633 @@ struct pedit_parser {
> }
>
> /**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_ETH item for E-Switch.
> + * The routine checks the L2 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + *   Pointer to the item structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_eth(const struct rte_flow_item *item,
> +				  struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_eth *spec = item->spec;
> +	const struct rte_flow_item_eth *mask = item->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for L2 addresses can be empty
> +		 * because these ones are optional and not
> +		 * required directly by tc rule. Kernel tries
> +		 * to resolve these ones on its own
> +		 */
> +		return 0;

Even if it is one line of code, let's use brackets {} because it is multiple lines with a comment. Without brackets, it could cause a bug if more lines are added later, because people would have the wrong impression that there are already brackets. Please also fix a few more occurrences below.

> +	if (!mask)
> +		/* If mask is not specified use the default one.
 */
> +		mask = &rte_flow_item_eth_mask;
> +	if (memcmp(&mask->dst,
> +		   &flow_tcf_mask_empty.eth.dst,
> +		   sizeof(flow_tcf_mask_empty.eth.dst))) {
> +		if (memcmp(&mask->dst,
> +			   &rte_flow_item_eth_mask.dst,
> +			   sizeof(rte_flow_item_eth_mask.dst)))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"eth.dst\" field");

The following would be better:

			return rte_flow_error_set(error, ENOTSUP,
						  RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
						  "no support for partial mask"
						  " on \"eth.dst\" field");

But this one is also acceptable (to minimize your effort of correction :-)

			return rte_flow_error_set
				(error, ENOTSUP,
				 RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
				 "no support for partial mask on"
				 " \"eth.dst\" field");

Please make the same changes for the entire patch set.

Thanks,
Yongseok

> +	}
> +	if (memcmp(&mask->src,
> +		   &flow_tcf_mask_empty.eth.src,
> +		   sizeof(flow_tcf_mask_empty.eth.src))) {
> +		if (memcmp(&mask->src,
> +			   &rte_flow_item_eth_mask.src,
> +			   sizeof(rte_flow_item_eth_mask.src)))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"eth.src\" field");
> +	}
> +	if (mask->type != RTE_BE16(0x0000)) {
> +		if (mask->type != RTE_BE16(0xffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"eth.type\" field");
> +		DRV_LOG(WARNING,
> +			"outer ethernet type field"
> +			" cannot be forced for vxlan"
> +			" encapsulation, parameter ignored");
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV4 item for E-Switch.
> + * The routine checks the IPv4 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + *   Pointer to the item structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_ipv4(const struct rte_flow_item *item,
> +				   struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_ipv4 *spec = item->spec;
> +	const struct rte_flow_item_ipv4 *mask = item->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for IP addresses cannot be empty
> +		 * because it is required by tunnel_key parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"NULL outer ipv4 address specification"
> +			" for vxlan encapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_ipv4_mask;
> +	if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
> +		if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv4.hdr.dst_addr\" field"
> +				" for vxlan encapsulation");
> +		/* More IPv4 address validations can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the destination IP address to determine
> +		 * the routing path and obtain the MAC destination
> +		 * address, so IP destination address must be
> +		 * specified in the tc rule.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer ipv4 destination address must be"
> +			" specified for vxlan encapsulation");
> +	}
> +	if (mask->hdr.src_addr != RTE_BE32(0x00000000)) {
> +		if (mask->hdr.src_addr != RTE_BE32(0xffffffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv4.hdr.src_addr\" field"
> +				" for vxlan encapsulation");
> +		/* More IPv4 address validations can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the source IP address to select the
> +		 * interface for egress encapsulated traffic, so
> +		 * it must be specified in the tc rule.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer ipv4 source address must be"
> +			" specified for vxlan encapsulation");
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV6 item for E-Switch.
> + * The routine checks the IPv6 fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + *   Pointer to the item structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_ipv6(const struct rte_flow_item *item,
> +				   struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_ipv6 *spec = item->spec;
> +	const struct rte_flow_item_ipv6 *mask = item->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for IP addresses cannot be empty
> +		 * because it is required by tunnel_key parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"NULL outer ipv6 address specification"
> +			" for vxlan encapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_ipv6_mask;
> +	if (memcmp(&mask->hdr.dst_addr,
> +		   &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
> +		   IPV6_ADDR_LEN)) {
> +		if (memcmp(&mask->hdr.dst_addr,
> +			   &rte_flow_item_ipv6_mask.hdr.dst_addr,
> +			   IPV6_ADDR_LEN))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv6.hdr.dst_addr\" field"
> +				" for vxlan encapsulation");
> +		/* More IPv6 address validations can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the destination IP address to determine
> +		 * the routing path and obtain the MAC destination
> +		 * address (neigh or gateway), so IP destination address
> +		 * must be specified within the tc rule.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer ipv6 destination address must be"
> +			" specified for vxlan encapsulation");
> +	}
> +	if (memcmp(&mask->hdr.src_addr,
> +		   &flow_tcf_mask_empty.ipv6.hdr.src_addr,
> +		   IPV6_ADDR_LEN)) {
> +		if (memcmp(&mask->hdr.src_addr,
> +			   &rte_flow_item_ipv6_mask.hdr.src_addr,
> +			   IPV6_ADDR_LEN))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv6.hdr.src_addr\" field"
> +				" for vxlan encapsulation");
> +		/* More L3 address validation can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the source IP address to select the
> +		 * interface for egress encapsulated traffic, so
> +		 * it must be specified in the tc rule.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer L3 source address must be"
> +			" specified for vxlan encapsulation");
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_UDP item for E-Switch.
> + * The routine checks the UDP fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + *   Pointer to the item structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_udp(const struct rte_flow_item *item,
> +				  struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_udp *spec = item->spec;
> +	const struct rte_flow_item_udp *mask = item->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for UDP ports cannot be empty
> +		 * because it is required by tunnel_key parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"NULL UDP port specification"
> +			" for vxlan encapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_udp_mask;
> +	if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
> +		if (mask->hdr.dst_port != RTE_BE16(0xffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"udp.hdr.dst_port\" field"
> +				" for vxlan encapsulation");
> +		if (!spec->hdr.dst_port)
> +			return rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM, item,
> +				"outer UDP remote port cannot be"
> +				" 0 for vxlan encapsulation");
> +	} else {
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer UDP remote port must be"
> +			" specified for vxlan encapsulation");
> +	}
> +	if (mask->hdr.src_port != RTE_BE16(0x0000)) {
> +		if (mask->hdr.src_port != RTE_BE16(0xffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"udp.hdr.src_port\" field"
> +				" for vxlan encapsulation");
> +		DRV_LOG(WARNING,
> +			"outer UDP source port cannot be"
> +			" forced for vxlan encapsulation,"
> +			" parameter ignored");
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_VXLAN item for E-Switch.
> + * The routine checks the VNI fields to be used in encapsulation header.
> + *
> + * @param[in] item
> + *   Pointer to the item structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap_vni(const struct rte_flow_item *item,
> +				  struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_vxlan *spec = item->spec;
> +	const struct rte_flow_item_vxlan *mask = item->mask;
> +
> +	if (!spec)
> +		/* Outer VNI is required by tunnel_key parameter. */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"NULL VNI specification"
> +			" for vxlan encapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_vxlan_mask;
> +	if (!mask->vni[0] && !mask->vni[1] && !mask->vni[2])
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"outer VNI must be specified "
> +			"for vxlan encapsulation");
> +	if (mask->vni[0] != 0xff ||
> +	    mask->vni[1] != 0xff ||
> +	    mask->vni[2] != 0xff)
> +		return rte_flow_error_set(error, ENOTSUP,
> +			RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +			"no support for partial mask on"
> +			" \"vxlan.vni\" field");
> +
> +	if (!spec->vni[0] && !spec->vni[1] && !spec->vni[2])
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, item,
> +			"vxlan vni cannot be 0");
> +	return 0;
> +}
> +
> +/**
> + * Validate VXLAN_ENCAP action item list for E-Switch.
> + * The routine checks items to be used in encapsulation header.
> + *
> + * @param[in] action
> + *   Pointer to the VXLAN_ENCAP action structure.
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_encap(const struct rte_flow_action *action,
> +			      struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item *items;
> +	int ret;
> +	uint32_t item_flags = 0;
> +
> +	if (!action->conf)
> +		return rte_flow_error_set
> +			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> +			 action, "Missing vxlan tunnel"
> +			 " action configuration");
> +	items = ((const struct rte_flow_action_vxlan_encap *)
> +		 action->conf)->definition;
> +	if (!items)
> +		return rte_flow_error_set
> +			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
> +			 action, "Missing vxlan tunnel"
> +			 " encapsulation parameters");
> +	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> +		switch (items->type) {
> +		case RTE_FLOW_ITEM_TYPE_VOID:
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_ETH:
> +			ret = mlx5_flow_validate_item_eth(items, item_flags,
> +							  error);
> +			if (ret < 0)
> +				return ret;
> +			ret = flow_tcf_validate_vxlan_encap_eth(items, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_IPV4:
> +			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
> +							   error);
> +			if (ret < 0)
> +				return ret;
> +			ret = flow_tcf_validate_vxlan_encap_ipv4(items, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_IPV6:
> +			ret = mlx5_flow_validate_item_ipv6(items, item_flags,
> +							   error);
> +			if (ret < 0)
> +				return ret;
> +			ret = flow_tcf_validate_vxlan_encap_ipv6(items, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_UDP:
> +			ret = mlx5_flow_validate_item_udp(items, item_flags,
> +							  0xFF, error);
> +			if (ret < 0)
> +				return ret;
> +			ret = flow_tcf_validate_vxlan_encap_udp(items, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_VXLAN:
>
+			ret = mlx5_flow_validate_item_vxlan(items,
> +							    item_flags, error);
> +			if (ret < 0)
> +				return ret;
> +			ret = flow_tcf_validate_vxlan_encap_vni(items, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_VXLAN;
> +			break;
> +		default:
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM, items,
> +				"vxlan encap item not supported");
> +		}
> +	}
> +	if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L3))
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ACTION, action,
> +			"no outer IP layer found"
> +			" for vxlan encapsulation");
> +	if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP))
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ACTION, action,
> +			"no outer UDP layer found"
> +			" for vxlan encapsulation");
> +	if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ACTION, action,
> +			"no VXLAN VNI found"
> +			" for vxlan encapsulation");
> +	return 0;
> +}
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_IPV4 item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] ipv4
> + *   Outer IPv4 address item (if any, NULL otherwise).
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_ipv4(const struct rte_flow_item *ipv4,
> +				   struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_ipv4 *spec = ipv4->spec;
> +	const struct rte_flow_item_ipv4 *mask = ipv4->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for IP addresses cannot be empty
> +		 * because it is required as decap parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
> +			"NULL outer ipv4 address"
> +			" specification for vxlan"
> +			" decapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_ipv4_mask;
> +	if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
> +		if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv4.hdr.dst_addr\" field");
> +		/* More IP address validations can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the destination IP address
> +		 * to determine the ingress network interface
> +		 * for traffic being decapsulated.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
> +			"outer ipv4 destination address"
> +			" must be specified for"
> +			" vxlan decapsulation");
> +	}
> +	/* Source IP address is optional for decap. */
> +	if (mask->hdr.src_addr != RTE_BE32(0x00000000) &&
> +	    mask->hdr.src_addr != RTE_BE32(0xffffffff))
> +		return rte_flow_error_set(error, ENOTSUP,
> +			RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +			"no support for partial mask on"
> +			" \"ipv4.hdr.src_addr\" field");
> +	return 0;
> +}
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_IPV6 item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] ipv6
> + *   Outer IPv6 address item (if any, NULL otherwise).
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_ipv6(const struct rte_flow_item *ipv6,
> +				   struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_ipv6 *spec = ipv6->spec;
> +	const struct rte_flow_item_ipv6 *mask = ipv6->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for IP addresses cannot be empty
> +		 * because it is required as decap parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
> +			"NULL outer ipv6 address"
> +			" specification for vxlan"
> +			" decapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_ipv6_mask;
> +	if (memcmp(&mask->hdr.dst_addr,
> +		   &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
> +		   IPV6_ADDR_LEN)) {
> +		if (memcmp(&mask->hdr.dst_addr,
> +			   &rte_flow_item_ipv6_mask.hdr.dst_addr,
> +			   IPV6_ADDR_LEN))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv6.hdr.dst_addr\" field");
> +		/* More IP address validations can be put here. */
> +	} else {
> +		/*
> +		 * Kernel uses the destination IP address
> +		 * to determine the ingress network interface
> +		 * for traffic being decapsulated.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
> +			"outer ipv6 destination address must be "
> +			"specified for vxlan decapsulation");
> +	}
> +	/* Source IP address is optional for decap. */
> +	if (memcmp(&mask->hdr.src_addr,
> +		   &flow_tcf_mask_empty.ipv6.hdr.src_addr,
> +		   IPV6_ADDR_LEN)) {
> +		if (memcmp(&mask->hdr.src_addr,
> +			   &rte_flow_item_ipv6_mask.hdr.src_addr,
> +			   IPV6_ADDR_LEN))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"ipv6.hdr.src_addr\" field");
> +	}
> +	return 0;
> +}
> +
> +/**
> + * Validate RTE_FLOW_ITEM_TYPE_UDP item if VXLAN_DECAP action
> + * is present in actions list.
> + *
> + * @param[in] udp
> + *   Outer UDP layer item (if any, NULL otherwise).
> + * @param[out] error
> + *   Pointer to the error structure.
> + *
> + * @return
> + *   0 on success, a negative errno value otherwise and rte_errno is set.
> + **/
> +static int
> +flow_tcf_validate_vxlan_decap_udp(const struct rte_flow_item *udp,
> +				  struct rte_flow_error *error)
> +{
> +	const struct rte_flow_item_udp *spec = udp->spec;
> +	const struct rte_flow_item_udp *mask = udp->mask;
> +
> +	if (!spec)
> +		/*
> +		 * Specification for UDP ports cannot be empty
> +		 * because it is required as decap parameter.
> +		 */
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, udp,
> +			"NULL UDP port specification"
> +			" for VXLAN decapsulation");
> +	if (!mask)
> +		mask = &rte_flow_item_udp_mask;
> +	if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
> +		if (mask->hdr.dst_port != RTE_BE16(0xffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"udp.hdr.dst_port\" field");
> +		if (!spec->hdr.dst_port)
> +			return rte_flow_error_set(error, EINVAL,
> +				RTE_FLOW_ERROR_TYPE_ITEM, udp,
> +				"zero decap local UDP port");
> +	} else {
> +		return rte_flow_error_set(error, EINVAL,
> +			RTE_FLOW_ERROR_TYPE_ITEM, udp,
> +			"outer UDP destination port must be "
> +			"specified for vxlan decapsulation");
> +	}
> +	if (mask->hdr.src_port != RTE_BE16(0x0000)) {
> +		if (mask->hdr.src_port != RTE_BE16(0xffff))
> +			return rte_flow_error_set(error, ENOTSUP,
> +				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
> +				"no support for partial mask on"
> +				" \"udp.hdr.src_port\" field");
> +		DRV_LOG(WARNING,
> +			"outer UDP local port cannot be "
> +			"forced for VXLAN encapsulation, "
> +			"parameter ignored");
> +	}
> +	return 0;
> +}
> +
> +/**
>  * Validate flow for E-Switch.
>  *
>  * @param[in] priv
> @@ -1147,6 +1774,7 @@ struct pedit_parser {
>  		const struct rte_flow_item_ipv6 *ipv6;
>  		const struct rte_flow_item_tcp *tcp;
>  		const struct rte_flow_item_udp *udp;
> +		const struct rte_flow_item_vxlan *vxlan;
>  	} spec, mask;
>  	union {
>  		const struct rte_flow_action_port_id *port_id;
> @@ -1156,6 +1784,7 @@ struct pedit_parser {
>  			of_set_vlan_vid;
>  		const struct rte_flow_action_of_set_vlan_pcp *
>  			of_set_vlan_pcp;
> +		const struct rte_flow_action_vxlan_encap *vxlan_encap;
>  		const struct rte_flow_action_set_ipv4 *set_ipv4;
>  		const struct rte_flow_action_set_ipv6 *set_ipv6;
>  	} conf;
> @@ -1242,6 +1871,15 @@ struct pedit_parser {
>  				" set action must follow push action");
>  			current_action_flag = MLX5_FLOW_ACTION_OF_SET_VLAN_PCP;
>  			break;
> +		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> +			current_action_flag = MLX5_FLOW_ACTION_VXLAN_DECAP;
> +			break;
> +		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> +			ret = flow_tcf_validate_vxlan_encap(actions, error);
> +			if (ret < 0)
> +				return ret;
> +			current_action_flag = MLX5_FLOW_ACTION_VXLAN_ENCAP;
> +			break;
>  		case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
>  			current_action_flag = MLX5_FLOW_ACTION_SET_IPV4_SRC;
>  			break;
> @@ -1302,11 +1940,32 @@ struct pedit_parser {
>  						  actions,
>  						  "can't have multiple fate"
>  						  " actions");
> +		if ((current_action_flag & MLX5_TCF_VXLAN_ACTIONS) &&
> +		    (action_flags & MLX5_TCF_VXLAN_ACTIONS))
> +			return rte_flow_error_set(error, EINVAL,
> +						  RTE_FLOW_ERROR_TYPE_ACTION,
> +						  actions,
> +						  "can't have multiple vxlan"
> +						  " actions");
> +		if ((current_action_flag & MLX5_TCF_VXLAN_ACTIONS) &&
> +		    (action_flags & MLX5_TCF_VLAN_ACTIONS))
> +			return rte_flow_error_set(error, ENOTSUP,
> +						  RTE_FLOW_ERROR_TYPE_ACTION,
> +						  actions,
> +						  "can't have vxlan and vlan"
> +						  " actions in the same rule");
>  		action_flags |= current_action_flag;
>  	}
>  	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
>  		unsigned int i;
>  
> +		if ((item_flags & MLX5_FLOW_LAYER_TUNNEL) &&
> +		    items->type !=
RTE_FLOW_ITEM_TYPE_ETH)
> +			return rte_flow_error_set(error, ENOTSUP,
> +						  RTE_FLOW_ERROR_TYPE_ITEM,
> +						  items,
> +						  "only L2 inner item"
> +						  " is supported");
>  		switch (items->type) {
>  		case RTE_FLOW_ITEM_TYPE_VOID:
>  			break;
> @@ -1360,7 +2019,9 @@ struct pedit_parser {
>  							  error);
>  			if (ret < 0)
>  				return ret;
> -			item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
> +			item_flags |= (item_flags & MLX5_FLOW_LAYER_TUNNEL) ?
> +					MLX5_FLOW_LAYER_INNER_L2 :
> +					MLX5_FLOW_LAYER_OUTER_L2;
>  			/* TODO:
>  			 * Redundant check due to different supported mask.
>  			 * Same for the rest of items.
> @@ -1438,6 +2099,12 @@ struct pedit_parser {
>  				next_protocol =
>  					((const struct rte_flow_item_ipv4 *)
>  					 (items->spec))->hdr.next_proto_id;
> +			if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> +				ret = flow_tcf_validate_vxlan_decap_ipv4
> +								(items, error);
> +				if (ret < 0)
> +					return ret;
> +			}
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_IPV6:
>  			ret = mlx5_flow_validate_item_ipv6(items, item_flags,
> @@ -1465,6 +2132,12 @@ struct pedit_parser {
>  				next_protocol =
>  					((const struct rte_flow_item_ipv6 *)
>  					 (items->spec))->hdr.proto;
> +			if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> +				ret = flow_tcf_validate_vxlan_decap_ipv6
> +								(items, error);
> +				if (ret < 0)
> +					return ret;
> +			}
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_UDP:
>  			ret = mlx5_flow_validate_item_udp(items, item_flags,
> @@ -1480,6 +2153,12 @@ struct pedit_parser {
>  						  error);
>  			if (!mask.udp)
>  				return -rte_errno;
> +			if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> +				ret = flow_tcf_validate_vxlan_decap_udp
> +								(items, error);
> +				if (ret < 0)
> +					return ret;
> +			}
>  			break;
>  		case RTE_FLOW_ITEM_TYPE_TCP:
>  			ret = mlx5_flow_validate_item_tcp
> @@ -1499,10 +2178,40 @@ struct pedit_parser {
>  			if (!mask.tcp)
>  				return -rte_errno;
>  			break;
> +		case RTE_FLOW_ITEM_TYPE_VXLAN:
> +			if (!(action_flags & RTE_FLOW_ACTION_TYPE_VXLAN_DECAP))
> +				return rte_flow_error_set
> +					(error, ENOTSUP,
> +					 RTE_FLOW_ERROR_TYPE_ITEM,
> +					 items,
> +					 "vni pattern
 should be followed by"
> +					 " vxlan decapsulation action");
> +			ret = mlx5_flow_validate_item_vxlan(items,
> +							    item_flags, error);
> +			if (ret < 0)
> +				return ret;
> +			item_flags |= MLX5_FLOW_LAYER_VXLAN;
> +			mask.vxlan = flow_tcf_item_mask
> +				(items, &rte_flow_item_vxlan_mask,
> +				 &flow_tcf_mask_supported.vxlan,
> +				 &flow_tcf_mask_empty.vxlan,
> +				 sizeof(flow_tcf_mask_supported.vxlan), error);
> +			if (!mask.vxlan)
> +				return -rte_errno;
> +			if (mask.vxlan->vni[0] != 0xff ||
> +			    mask.vxlan->vni[1] != 0xff ||
> +			    mask.vxlan->vni[2] != 0xff)
> +				return rte_flow_error_set
> +					(error, ENOTSUP,
> +					 RTE_FLOW_ERROR_TYPE_ITEM_MASK,
> +					 mask.vxlan,
> +					 "no support for partial or "
> +					 "empty mask on \"vxlan.vni\" field");
> +			break;
>  		default:
>  			return rte_flow_error_set(error, ENOTSUP,
>  						  RTE_FLOW_ERROR_TYPE_ITEM,
> -						  NULL, "item not supported");
> +						  items, "item not supported");
>  		}
>  	}
>  	if ((action_flags & MLX5_TCF_PEDIT_ACTIONS) &&
> @@ -1571,6 +2280,12 @@ struct pedit_parser {
>  					RTE_FLOW_ERROR_TYPE_ACTION, actions,
>  					"vlan actions are supported"
>  					" only with port_id action");
> +	if ((action_flags & MLX5_TCF_VXLAN_ACTIONS) &&
> +	    !(action_flags & MLX5_FLOW_ACTION_PORT_ID))
> +		return rte_flow_error_set(error, ENOTSUP,
> +					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
> +					  "vxlan actions are supported"
> +					  " only with port_id action");
>  	if (!(action_flags & MLX5_TCF_FATE_ACTIONS))
>  		return rte_flow_error_set(error, EINVAL,
>  					RTE_FLOW_ERROR_TYPE_ACTION, actions,
> @@ -1594,6 +2309,28 @@ struct pedit_parser {
>  						  "no ethernet found in"
>  						  " pattern");
>  	}
> +	if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> +		if (!(item_flags &
> +		     (MLX5_FLOW_LAYER_OUTER_L3_IPV4 |
> +		      MLX5_FLOW_LAYER_OUTER_L3_IPV6)))
> +			return rte_flow_error_set(error, EINVAL,
> +						  RTE_FLOW_ERROR_TYPE_ACTION,
> +						  NULL,
> +						  "no outer IP pattern found"
> +						  " for vxlan decap action");
> +		if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP))
> +			return rte_flow_error_set(error, EINVAL,
> +
						  RTE_FLOW_ERROR_TYPE_ACTION,
> +						  NULL,
> +						  "no outer UDP pattern found"
> +						  " for vxlan decap action");
> +		if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
> +			return rte_flow_error_set(error, EINVAL,
> +						  RTE_FLOW_ERROR_TYPE_ACTION,
> +						  NULL,
> +						  "no VNI pattern found"
> +						  " for vxlan decap action");
> +	}
>  	return 0;
>  }
>  
> --
> 1.8.3.1