From mboxrd@z Thu Jan  1 00:00:00 1970
From: Slava Ovsiienko <viacheslavo@mellanox.com>
To: dev@dpdk.org
Cc: Shahaf Shuler, Slava Ovsiienko
Date: Tue, 2 Oct 2018 06:30:38 +0000
Message-ID: <1538461807-37507-3-git-send-email-viacheslavo@mellanox.com>
References: <1538461807-37507-1-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1538461807-37507-1-git-send-email-viacheslavo@mellanox.com>
Subject: [dpdk-dev] [PATCH 3/5] net/mlx5: e-switch VXLAN flow validation routine
List-Id: DPDK patches and discussions

This part of the patchset adds support for validating the flow
item/action lists. The following entities are now supported (an
illustrative rule sketch follows the list):

- RTE_FLOW_ITEM_TYPE_VXLAN, contains the tunnel VNI

- RTE_FLOW_ACTION_TYPE_VXLAN_DECAP, if this action is specified the
  items in the flow item list are treated as outer network parameters
  for the tunnel outer header match. The Ethernet layer addresses are
  always treated as inner ones.

- RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP, contains the item list used to
  build the encapsulation header.
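For illustration only (this sketch is not part of the patch), an
application could request a decapsulation rule of the kind this
validation routine is meant to accept roughly as follows. The port
ids, addresses and VNI are hypothetical, and the attribute setup
assumes the usual transfer (E-Switch) rule form:

    #include <rte_byteorder.h>
    #include <rte_errno.h>
    #include <rte_flow.h>

    static int
    create_vxlan_decap_rule(uint16_t port, struct rte_flow_error *error)
    {
        /* E-Switch rules are expressed with the "transfer" attribute. */
        struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
        /* Outer destination IP must be present and fully masked. */
        struct rte_flow_item_ipv4 ipv4_spec = {
            .hdr.dst_addr = RTE_BE32(0x01010101), /* 1.1.1.1 */
        };
        struct rte_flow_item_ipv4 ipv4_mask = {
            .hdr.dst_addr = RTE_BE32(0xffffffff),
        };
        /* Outer UDP destination port must be specified and non-zero. */
        struct rte_flow_item_udp udp_spec = {
            .hdr.dst_port = RTE_BE16(4789),
        };
        struct rte_flow_item_udp udp_mask = {
            .hdr.dst_port = RTE_BE16(0xffff),
        };
        /* Non-zero VNI; the default VXLAN mask covers all VNI bits. */
        struct rte_flow_item_vxlan vxlan_spec = {
            .vni = { 0x00, 0x00, 0x04 },
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &ipv4_spec, .mask = &ipv4_mask },
            { .type = RTE_FLOW_ITEM_TYPE_UDP,
              .spec = &udp_spec, .mask = &udp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &vxlan_spec },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_port_id pid = { .id = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
            { .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &pid },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port, &attr, pattern, actions, error)
               ? 0 : -rte_errno;
    }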
In the current implementation the values are subject to some constraints:

- the outer source IP must coincide with an address assigned to the
  outer egress interface

- the outer source MAC address is always unconditionally set to one of
  the MAC addresses of the outer egress interface

- there is no way to specify the source UDP port

- all of the above parameters are ignored if specified in the rule;
  warning messages are written to the log

Suggested-by: Adrien Mazarguil
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
 drivers/net/mlx5/mlx5_flow_tcf.c | 717 ++++++++++++++++++++++++++++++++++++++-
 1 file changed, 713 insertions(+), 4 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
index 15e250c..97451bd 100644
--- a/drivers/net/mlx5/mlx5_flow_tcf.c
+++ b/drivers/net/mlx5/mlx5_flow_tcf.c
@@ -558,6 +558,630 @@ struct flow_tcf_ptoi {
 }
 
 /**
+ * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_ETH item for E-Switch.
+ *
+ * @param[in] item
+ *   Pointer to the item structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap_eth(const struct rte_flow_item *item,
+				  struct rte_flow_error *error)
+{
+	const struct rte_flow_item_eth *spec = item->spec;
+	const struct rte_flow_item_eth *mask = item->mask;
+
+	if (!spec)
+		/*
+		 * Specification for L2 addresses can be empty
+		 * because these ones are optional and not
+		 * required directly by tc rule.
+		 */
+		return 0;
+	if (!mask)
+		/* If mask is not specified use the default one. */
+		mask = &rte_flow_item_eth_mask;
+	if (memcmp(&mask->dst,
+		   &flow_tcf_mask_empty.eth.dst,
+		   sizeof(flow_tcf_mask_empty.eth.dst))) {
+		if (memcmp(&mask->dst,
+			   &rte_flow_item_eth_mask.dst,
+			   sizeof(rte_flow_item_eth_mask.dst)))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"eth.dst\" field");
+		/*
+		 * Ethernet addresses are not supported by
+		 * tc as tunnel_key parameters. Destination
+		 * L2 address is needed to form encap packet
+		 * header and retrieved by kernel from implicit
+		 * sources (ARP table, etc), address masks are
+		 * not supported at all.
+		 */
+		DRV_LOG(WARNING,
+			"outer ethernet destination address "
+			"cannot be forced for VXLAN "
+			"encapsulation, parameter ignored");
+	}
+	if (memcmp(&mask->src,
+		   &flow_tcf_mask_empty.eth.src,
+		   sizeof(flow_tcf_mask_empty.eth.src))) {
+		if (memcmp(&mask->src,
+			   &rte_flow_item_eth_mask.src,
+			   sizeof(rte_flow_item_eth_mask.src)))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"eth.src\" field");
+		DRV_LOG(WARNING,
+			"outer ethernet source address "
+			"cannot be forced for VXLAN "
+			"encapsulation, parameter ignored");
+	}
+	if (mask->type != RTE_BE16(0x0000)) {
+		if (mask->type != RTE_BE16(0xffff))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"eth.type\" field");
+		DRV_LOG(WARNING,
+			"outer ethernet type field "
+			"cannot be forced for VXLAN "
+			"encapsulation, parameter ignored");
+	}
+	return 0;
+}
+
+/**
+ * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV4 item for E-Switch.
+ *
+ * @param[in] item
+ *   Pointer to the item structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap_ipv4(const struct rte_flow_item *item,
+				   struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv4 *spec = item->spec;
+	const struct rte_flow_item_ipv4 *mask = item->mask;
+
+	if (!spec)
+		/*
+		 * Specification for L3 addresses cannot be empty
+		 * because it is required by tunnel_key parameter.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"NULL outer L3 address specification"
+			" for VXLAN encapsulation");
+	if (!mask)
+		mask = &rte_flow_item_ipv4_mask;
+	if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
+		if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"ipv4.hdr.dst_addr\" field");
+		/* More L3 address validations can be put here. */
+	} else {
+		/*
+		 * Kernel uses the destination L3 address to determine
+		 * the routing path and obtain the L2 destination
+		 * address, so L3 destination address must be
+		 * specified in the tc rule.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"outer L3 destination address must be "
+			"specified for VXLAN encapsulation");
+	}
+	if (mask->hdr.src_addr != RTE_BE32(0x00000000)) {
+		if (mask->hdr.src_addr != RTE_BE32(0xffffffff))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"ipv4.hdr.src_addr\" field");
+		/* More L3 address validations can be put here. */
+	} else {
+		/*
+		 * Kernel uses the source L3 address to select the
+		 * interface for egress encapsulated traffic, so
+		 * it must be specified in the tc rule.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"outer L3 source address must be "
+			"specified for VXLAN encapsulation");
+	}
+	return 0;
+}
+
+/**
+ * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_IPV6 item for E-Switch.
+ *
+ * @param[in] item
+ *   Pointer to the item structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap_ipv6(const struct rte_flow_item *item,
+				   struct rte_flow_error *error)
+{
+	const struct rte_flow_item_ipv6 *spec = item->spec;
+	const struct rte_flow_item_ipv6 *mask = item->mask;
+
+	if (!spec)
+		/*
+		 * Specification for L3 addresses cannot be empty
+		 * because it is required by tunnel_key parameter.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"NULL outer L3 address specification"
+			" for VXLAN encapsulation");
+	if (!mask)
+		mask = &rte_flow_item_ipv6_mask;
+	if (memcmp(&mask->hdr.dst_addr,
+		   &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
+		   sizeof(flow_tcf_mask_empty.ipv6.hdr.dst_addr))) {
+		if (memcmp(&mask->hdr.dst_addr,
+			   &rte_flow_item_ipv6_mask.hdr.dst_addr,
+			   sizeof(rte_flow_item_ipv6_mask.hdr.dst_addr)))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"ipv6.hdr.dst_addr\" field");
+		/* More L3 address validations can be put here. */
+	} else {
+		/*
+		 * Kernel uses the destination L3 address to determine
+		 * the routing path and obtain the L2 destination
+		 * address (neighbour or gateway), so L3 destination
+		 * address must be specified within the tc rule.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"outer L3 destination address must be "
+			"specified for VXLAN encapsulation");
+	}
+	if (memcmp(&mask->hdr.src_addr,
+		   &flow_tcf_mask_empty.ipv6.hdr.src_addr,
+		   sizeof(flow_tcf_mask_empty.ipv6.hdr.src_addr))) {
+		if (memcmp(&mask->hdr.src_addr,
+			   &rte_flow_item_ipv6_mask.hdr.src_addr,
+			   sizeof(rte_flow_item_ipv6_mask.hdr.src_addr)))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"ipv6.hdr.src_addr\" field");
+		/* More L3 address validation can be put here. */
+	} else {
+		/*
+		 * Kernel uses the source L3 address to select the
+		 * interface for egress encapsulated traffic, so
+		 * it must be specified in the tc rule.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"outer L3 source address must be "
+			"specified for VXLAN encapsulation");
+	}
+	return 0;
+}
+
+/**
+ * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_UDP item for E-Switch.
+ *
+ * @param[in] item
+ *   Pointer to the item structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap_udp(const struct rte_flow_item *item,
+				  struct rte_flow_error *error)
+{
+	const struct rte_flow_item_udp *spec = item->spec;
+	const struct rte_flow_item_udp *mask = item->mask;
+
+	if (!spec)
+		/*
+		 * Specification for UDP ports cannot be empty
+		 * because it is required by tunnel_key parameter.
+		 */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"NULL UDP port specification"
+			" for VXLAN encapsulation");
+	if (!mask)
+		mask = &rte_flow_item_udp_mask;
+	if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
+		if (mask->hdr.dst_port != RTE_BE16(0xffff))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"udp.hdr.dst_port\" field");
+		if (!spec->hdr.dst_port)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, item,
+				"zero encap remote UDP port");
+	} else {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"outer UDP remote port must be "
+			"specified for VXLAN encapsulation");
+	}
+	if (mask->hdr.src_port != RTE_BE16(0x0000)) {
+		if (mask->hdr.src_port != RTE_BE16(0xffff))
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"udp.hdr.src_port\" field");
+		DRV_LOG(WARNING,
+			"outer UDP source port cannot be "
+			"forced for VXLAN encapsulation, "
+			"parameter ignored");
+	}
+	return 0;
+}
+
+/**
+ * Validate VXLAN_ENCAP action RTE_FLOW_ITEM_TYPE_VXLAN item for E-Switch.
+ *
+ * @param[in] item
+ *   Pointer to the item structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap_vni(const struct rte_flow_item *item,
+				  struct rte_flow_error *error)
+{
+	const struct rte_flow_item_vxlan *spec = item->spec;
+	const struct rte_flow_item_vxlan *mask = item->mask;
+
+	if (!spec)
+		/* Outer VNI is required by tunnel_key parameter. */
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM, item,
+			"NULL VNI specification"
+			" for VXLAN encapsulation");
+	if (!mask)
+		mask = &rte_flow_item_vxlan_mask;
+	if (mask->vni[0] != 0 ||
+	    mask->vni[1] != 0 ||
+	    mask->vni[2] != 0) {
+		if (mask->vni[0] != 0xff ||
+		    mask->vni[1] != 0xff ||
+		    mask->vni[2] != 0xff)
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+				"no support for partial mask on"
+				" \"vxlan.vni\" field");
+		if (spec->vni[0] == 0 &&
+		    spec->vni[1] == 0 &&
+		    spec->vni[2] == 0)
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, item,
+				"VXLAN vni cannot be 0");
+	} else {
+		return rte_flow_error_set(error, EINVAL,
+			RTE_FLOW_ERROR_TYPE_ITEM,
+			item,
+			"outer VNI must be specified "
+			"for VXLAN encapsulation");
+	}
+	return 0;
+}
+
+/**
+ * Validate VXLAN_ENCAP action item list for E-Switch.
+ *
+ * @param[in] action
+ *   Pointer to the VXLAN_ENCAP action structure.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_encap(const struct rte_flow_action *action,
+			      struct rte_flow_error *error)
+{
+	const struct rte_flow_item *items;
+	int ret;
+	uint32_t item_flags = 0;
+
+	assert(action->type == RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP);
+	if (!action->conf)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+			 action, "Missing VXLAN tunnel "
+			 "action configuration");
+	items = ((const struct rte_flow_action_vxlan_encap *)
+		 action->conf)->definition;
+	if (!items)
+		return rte_flow_error_set
+			(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+			 action, "Missing VXLAN tunnel "
+			 "encapsulation parameters");
+	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+		switch (items->type) {
+		case RTE_FLOW_ITEM_TYPE_VOID:
+			break;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			ret = mlx5_flow_validate_item_eth(items, item_flags,
+							  error);
+			if (ret < 0)
+				return ret;
+			ret = flow_tcf_validate_vxlan_encap_eth(items, error);
+			if (ret < 0)
+				return ret;
+			item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			ret = mlx5_flow_validate_item_ipv4(items, item_flags,
+							   error);
+			if (ret < 0)
+				return ret;
+			ret = flow_tcf_validate_vxlan_encap_ipv4(items, error);
+			if (ret < 0)
+				return ret;
+			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			ret = mlx5_flow_validate_item_ipv6(items, item_flags,
+							   error);
+			if (ret < 0)
+				return ret;
+			ret = flow_tcf_validate_vxlan_encap_ipv6(items, error);
+			if (ret < 0)
+				return ret;
+			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			ret = mlx5_flow_validate_item_udp(items, item_flags,
+							  0xFF, error);
+			if (ret < 0)
+				return ret;
+			ret = flow_tcf_validate_vxlan_encap_udp(items, error);
+			if (ret < 0)
+				return ret;
+			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = mlx5_flow_validate_item_vxlan(items,
+							    item_flags, error);
+			if (ret < 0)
+				return ret;
+			ret = flow_tcf_validate_vxlan_encap_vni(items, error);
+			if (ret < 0)
+				return ret;
+			item_flags |= MLX5_FLOW_LAYER_VXLAN;
+			break;
+		default:
+			return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_ITEM, items,
+				"VXLAN encap item not supported");
+		}
+	}
+	if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L3))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no outer L3 layer found"
+					  " for VXLAN encapsulation");
+	if (!(item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no outer L4 layer found"
+					  " for VXLAN encapsulation");
+	if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no VXLAN VNI found"
+					  " for VXLAN encapsulation");
+	return 0;
+}
+
+/**
+ * Validate VXLAN_DECAP action outer tunnel items for E-Switch.
+ *
+ * @param[in] item_flags
+ *   Mask of the provided outer tunnel parameters.
+ * @param[in] action
+ *   Pointer to the VXLAN_DECAP action structure.
+ * @param[in] ipv4
+ *   Outer IPv4 address item (if any, NULL otherwise).
+ * @param[in] ipv6
+ *   Outer IPv6 address item (if any, NULL otherwise).
+ * @param[in] udp
+ *   Outer UDP layer item (if any, NULL otherwise).
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ **/
+static int
+flow_tcf_validate_vxlan_decap(uint32_t item_flags,
+			      const struct rte_flow_action *action,
+			      const struct rte_flow_item *ipv4,
+			      const struct rte_flow_item *ipv6,
+			      const struct rte_flow_item *udp,
+			      struct rte_flow_error *error)
+{
+	if (!ipv4 && !ipv6)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no outer L3 layer found"
+					  " for VXLAN decapsulation");
+	if (ipv4) {
+		const struct rte_flow_item_ipv4 *spec = ipv4->spec;
+		const struct rte_flow_item_ipv4 *mask = ipv4->mask;
+
+		if (!spec)
+			/*
+			 * Specification for L3 addresses cannot be empty
+			 * because it is required as decap parameter.
+			 */
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
+				"NULL outer L3 address specification"
+				" for VXLAN decapsulation");
+		if (!mask)
+			mask = &rte_flow_item_ipv4_mask;
+		if (mask->hdr.dst_addr != RTE_BE32(0x00000000)) {
+			if (mask->hdr.dst_addr != RTE_BE32(0xffffffff))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"ipv4.hdr.dst_addr\" field");
+			/* More L3 address validations can be put here. */
+		} else {
+			/*
+			 * Kernel uses the destination L3 address
+			 * to determine the ingress network interface
+			 * for traffic being decapsulated.
+			 */
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, ipv4,
+				"outer L3 destination address must be "
+				"specified for VXLAN decapsulation");
+		}
+		/* Source L3 address is optional for decap. */
+		if (mask->hdr.src_addr != RTE_BE32(0x00000000))
+			if (mask->hdr.src_addr != RTE_BE32(0xffffffff))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"ipv4.hdr.src_addr\" field");
+	} else {
+		const struct rte_flow_item_ipv6 *spec = ipv6->spec;
+		const struct rte_flow_item_ipv6 *mask = ipv6->mask;
+
+		if (!spec)
+			/*
+			 * Specification for L3 addresses cannot be empty
+			 * because it is required as decap parameter.
+			 */
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
+				"NULL outer L3 address specification"
+				" for VXLAN decapsulation");
+		if (!mask)
+			mask = &rte_flow_item_ipv6_mask;
+		if (memcmp(&mask->hdr.dst_addr,
+			   &flow_tcf_mask_empty.ipv6.hdr.dst_addr,
+			   sizeof(flow_tcf_mask_empty.ipv6.hdr.dst_addr))) {
+			if (memcmp(&mask->hdr.dst_addr,
+				   &rte_flow_item_ipv6_mask.hdr.dst_addr,
+				   sizeof(rte_flow_item_ipv6_mask.hdr.dst_addr)))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"ipv6.hdr.dst_addr\" field");
+			/* More L3 address validations can be put here. */
+		} else {
+			/*
+			 * Kernel uses the destination L3 address
+			 * to determine the ingress network interface
+			 * for traffic being decapsulated.
+			 */
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, ipv6,
+				"outer L3 destination address must be "
+				"specified for VXLAN decapsulation");
+		}
+		/* Source L3 address is optional for decap. */
+		if (memcmp(&mask->hdr.src_addr,
+			   &flow_tcf_mask_empty.ipv6.hdr.src_addr,
+			   sizeof(flow_tcf_mask_empty.ipv6.hdr.src_addr))) {
+			if (memcmp(&mask->hdr.src_addr,
+				   &rte_flow_item_ipv6_mask.hdr.src_addr,
+				   sizeof(mask->hdr.src_addr)))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"ipv6.hdr.src_addr\" field");
+		}
+	}
+	if (!udp) {
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no outer L4 layer found"
+					  " for VXLAN decapsulation");
+	} else {
+		const struct rte_flow_item_udp *spec = udp->spec;
+		const struct rte_flow_item_udp *mask = udp->mask;
+
+		if (!spec)
+			/*
+			 * Specification for UDP ports cannot be empty
+			 * because it is required as decap parameter.
+			 */
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, udp,
+				"NULL UDP port specification"
+				" for VXLAN decapsulation");
+		if (!mask)
+			mask = &rte_flow_item_udp_mask;
+		if (mask->hdr.dst_port != RTE_BE16(0x0000)) {
+			if (mask->hdr.dst_port != RTE_BE16(0xffff))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"udp.hdr.dst_port\" field");
+			if (!spec->hdr.dst_port)
+				return rte_flow_error_set(error, EINVAL,
+					RTE_FLOW_ERROR_TYPE_ITEM, udp,
+					"zero decap local UDP port");
+		} else {
+			return rte_flow_error_set(error, EINVAL,
+				RTE_FLOW_ERROR_TYPE_ITEM, udp,
+				"outer UDP destination port must be "
+				"specified for VXLAN decapsulation");
+		}
+		if (mask->hdr.src_port != RTE_BE16(0x0000)) {
+			if (mask->hdr.src_port != RTE_BE16(0xffff))
+				return rte_flow_error_set(error, ENOTSUP,
+					RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+					"no support for partial mask on"
+					" \"udp.hdr.src_port\" field");
+			DRV_LOG(WARNING,
+				"outer UDP local port cannot be "
+				"forced for VXLAN decapsulation, "
+				"parameter ignored");
+		}
+	}
+	if (!(item_flags & MLX5_FLOW_LAYER_VXLAN))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION, action,
+					  "no VXLAN VNI found"
+					  " for VXLAN decapsulation");
+	/* VNI is already validated, extra check can be put here. */
+	return 0;
+}
+/**
  * Validate flow for E-Switch.
 *
 * @param[in] priv
@@ -589,6 +1213,7 @@ struct flow_tcf_ptoi {
 		const struct rte_flow_item_ipv6 *ipv6;
 		const struct rte_flow_item_tcp *tcp;
 		const struct rte_flow_item_udp *udp;
+		const struct rte_flow_item_vxlan *vxlan;
 	} spec, mask;
 	union {
 		const struct rte_flow_action_port_id *port_id;
@@ -597,7 +1222,11 @@ struct flow_tcf_ptoi {
 			of_set_vlan_vid;
 		const struct rte_flow_action_of_set_vlan_pcp *
 			of_set_vlan_pcp;
+		const struct rte_flow_action_vxlan_encap *vxlan_encap;
 	} conf;
+	const struct rte_flow_item *ipv4 = NULL; /* storage to check */
+	const struct rte_flow_item *ipv6 = NULL; /* outer tunnel. */
+	const struct rte_flow_item *udp = NULL; /* parameters. */
 	uint32_t item_flags = 0;
 	uint32_t action_flags = 0;
 	uint8_t next_protocol = -1;
@@ -724,7 +1353,6 @@ struct flow_tcf_ptoi {
 							   error);
 			if (ret < 0)
 				return ret;
-			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
 			mask.ipv4 = flow_tcf_item_mask
 				(items, &rte_flow_item_ipv4_mask,
 				 &flow_tcf_mask_supported.ipv4,
@@ -745,13 +1373,22 @@ struct flow_tcf_ptoi {
 				next_protocol =
 					((const struct rte_flow_item_ipv4 *)
 					 (items->spec))->hdr.next_proto_id;
+			if (item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4) {
+				/*
+				 * Multiple outer items are not allowed as
+				 * tunnel parameters.
+				 */
+				ipv4 = NULL;
+			} else {
+				ipv4 = items;
+				item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			ret = mlx5_flow_validate_item_ipv6(items, item_flags,
 							   error);
 			if (ret < 0)
 				return ret;
-			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
 			mask.ipv6 = flow_tcf_item_mask
 				(items, &rte_flow_item_ipv6_mask,
 				 &flow_tcf_mask_supported.ipv6,
@@ -772,13 +1409,22 @@ struct flow_tcf_ptoi {
 				next_protocol =
 					((const struct rte_flow_item_ipv6 *)
 					 (items->spec))->hdr.proto;
+			if (item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV6) {
+				/*
+				 * Multiple outer items are not allowed as
+				 * tunnel parameters.
+				 */
+				ipv6 = NULL;
+			} else {
+				ipv6 = items;
+				item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
+			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = mlx5_flow_validate_item_udp(items, item_flags,
 							  next_protocol, error);
 			if (ret < 0)
 				return ret;
-			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
 			mask.udp = flow_tcf_item_mask
 				(items, &rte_flow_item_udp_mask,
 				 &flow_tcf_mask_supported.udp,
@@ -787,13 +1433,18 @@ struct flow_tcf_ptoi {
 				 error);
 			if (!mask.udp)
 				return -rte_errno;
+			if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP) {
+				udp = NULL;
+			} else {
+				udp = items;
+				item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
+			}
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			ret = mlx5_flow_validate_item_tcp(items, item_flags,
 							  next_protocol, error);
 			if (ret < 0)
 				return ret;
-			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
 			mask.tcp = flow_tcf_item_mask
 				(items, &rte_flow_item_tcp_mask,
 				 &flow_tcf_mask_supported.tcp,
@@ -802,6 +1453,31 @@ struct flow_tcf_ptoi {
 				 error);
 			if (!mask.tcp)
 				return -rte_errno;
+			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			ret = mlx5_flow_validate_item_vxlan(items,
+							    item_flags, error);
+			if (ret < 0)
+				return ret;
+			mask.vxlan = flow_tcf_item_mask
+				(items, &rte_flow_item_vxlan_mask,
+				 &flow_tcf_mask_supported.vxlan,
+				 &flow_tcf_mask_empty.vxlan,
+				 sizeof(flow_tcf_mask_supported.vxlan),
+				 error);
+			if (!mask.vxlan)
+				return -rte_errno;
+			if (mask.vxlan->vni[0] != 0xff ||
+			    mask.vxlan->vni[1] != 0xff ||
+			    mask.vxlan->vni[2] != 0xff)
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ITEM_MASK,
+					 mask.vxlan,
+					 "no support for partial or "
+					 "empty mask on \"vxlan.vni\" field");
+			item_flags |= MLX5_FLOW_LAYER_VXLAN;
 			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
@@ -857,6 +1533,33 @@ struct flow_tcf_ptoi {
 		case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
 			action_flags |= MLX5_ACTION_OF_SET_VLAN_PCP;
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+			if (action_flags & (MLX5_ACTION_VXLAN_ENCAP
+					   | MLX5_ACTION_VXLAN_DECAP))
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ACTION, actions,
+					 "can't have multiple vxlan actions");
+			ret = flow_tcf_validate_vxlan_encap(actions, error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_ACTION_VXLAN_ENCAP;
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+			if (action_flags & (MLX5_ACTION_VXLAN_ENCAP
+					   | MLX5_ACTION_VXLAN_DECAP))
+				return rte_flow_error_set
+					(error, ENOTSUP,
+					 RTE_FLOW_ERROR_TYPE_ACTION, actions,
+					 "can't have multiple vxlan actions");
+			ret = flow_tcf_validate_vxlan_decap(item_flags,
+							    actions,
+							    ipv4, ipv6, udp,
+							    error);
+			if (ret < 0)
+				return ret;
+			action_flags |= MLX5_ACTION_VXLAN_DECAP;
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -864,6 +1567,12 @@ struct flow_tcf_ptoi {
 						  "action not supported");
 		}
 	}
+	if ((item_flags & MLX5_FLOW_LAYER_VXLAN) &&
+	    !(action_flags & MLX5_ACTION_VXLAN_DECAP))
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+					  "VNI pattern should be followed"
+					  " by VXLAN_DECAP action");
 	return 0;
 }
 
-- 
1.8.3.1