From mboxrd@z Thu Jan 1 00:00:00 1970
From: Slava Ovsiienko <viacheslavo@mellanox.com>
To: Shahaf Shuler
CC: "dev@dpdk.org", Yongseok Koh, Slava Ovsiienko
Thread-Topic: [PATCH v4 08/13] net/mlx5: add VXLAN support to flow translate routine
Date: Fri, 2 Nov 2018 17:53:20 +0000
Message-ID: <1541181152-15788-9-git-send-email-viacheslavo@mellanox.com>
References: <1541074741-41368-1-git-send-email-viacheslavo@mellanox.com> <1541181152-15788-1-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1541181152-15788-1-git-send-email-viacheslavo@mellanox.com>
Content-Type: text/plain; charset="iso-8859-1"
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v4 08/13] net/mlx5: add VXLAN support to flow translate routine
List-Id: DPDK
This part of the patchset adds support for VXLAN-related items and actions
to the flow translation routine. Later, tunnel types other than VXLAN
(e.g. GRE) can be added. No VTEP devices are created at this point; the
flow rule is only translated, not yet applied.

Suggested-by: Adrien Mazarguil
Signed-off-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/mlx5_flow_tcf.c | 537 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 477 insertions(+), 60 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
index 017f2bd..b7a0c72 100644
--- a/drivers/net/mlx5/mlx5_flow_tcf.c
+++ b/drivers/net/mlx5/mlx5_flow_tcf.c
@@ -2799,6 +2799,241 @@ struct pedit_parser {
 }
 
 /**
+ * Convert VXLAN VNI to 32-bit integer.
+ *
+ * @param[in] vni
+ *   VXLAN VNI in 24-bit wire format.
+ *
+ * @return
+ *   VXLAN VNI as a 32-bit integer value in network byte order.
+ */
+static inline rte_be32_t
+vxlan_vni_as_be32(const uint8_t vni[3])
+{
+	union {
+		uint8_t vni[4];
+		rte_be32_t dword;
+	} ret = {
+		.vni = { 0, vni[0], vni[1], vni[2] },
+	};
+	return ret.dword;
+}
+
+/**
+ * Helper function to process an RTE_FLOW_ITEM_TYPE_ETH entry in the
+ * configuration of the RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action. Fills the
+ * MAC address fields in the encapsulation parameters structure. The item
+ * must be prevalidated; no validation checks are performed by this function.
+ *
+ * @param[in] spec
+ *   RTE_FLOW_ITEM_TYPE_ETH entry specification.
+ * @param[in] mask
+ *   RTE_FLOW_ITEM_TYPE_ETH entry mask.
+ * @param[out] encap
+ *   Structure to fill the gathered MAC address data.
+ */
+static void
+flow_tcf_parse_vxlan_encap_eth(const struct rte_flow_item_eth *spec,
+			       const struct rte_flow_item_eth *mask,
+			       struct flow_tcf_vxlan_encap *encap)
+{
+	/* Item must be validated before. No redundant checks.
+	 */
+	assert(spec);
+	if (!mask || !memcmp(&mask->dst,
+			     &rte_flow_item_eth_mask.dst,
+			     sizeof(rte_flow_item_eth_mask.dst))) {
+		/*
+		 * Ethernet addresses are not supported by
+		 * tc as tunnel_key parameters. The destination
+		 * address is needed to form the encap packet
+		 * header and is retrieved by the kernel from
+		 * implicit sources (ARP table, etc.);
+		 * address masks are not supported at all.
+		 */
+		encap->eth.dst = spec->dst;
+		encap->mask |= FLOW_TCF_ENCAP_ETH_DST;
+	}
+	if (!mask || !memcmp(&mask->src,
+			     &rte_flow_item_eth_mask.src,
+			     sizeof(rte_flow_item_eth_mask.src))) {
+		/*
+		 * Ethernet addresses are not supported by
+		 * tc as tunnel_key parameters. The source
+		 * ethernet address is ignored anyway.
+		 */
+		encap->eth.src = spec->src;
+		encap->mask |= FLOW_TCF_ENCAP_ETH_SRC;
+	}
+}
+
+/**
+ * Helper function to process an RTE_FLOW_ITEM_TYPE_IPV4 entry in the
+ * configuration of the RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action. Fills the
+ * IPv4 address fields in the encapsulation parameters structure. The item
+ * must be prevalidated; no validation checks are performed by this function.
+ *
+ * @param[in] spec
+ *   RTE_FLOW_ITEM_TYPE_IPV4 entry specification.
+ * @param[out] encap
+ *   Structure to fill the gathered IPv4 address data.
+ */
+static void
+flow_tcf_parse_vxlan_encap_ipv4(const struct rte_flow_item_ipv4 *spec,
+				struct flow_tcf_vxlan_encap *encap)
+{
+	/* Item must be validated before. No redundant checks. */
+	assert(spec);
+	encap->ipv4.dst = spec->hdr.dst_addr;
+	encap->ipv4.src = spec->hdr.src_addr;
+	encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC |
+		       FLOW_TCF_ENCAP_IPV4_DST;
+}
+
+/**
+ * Helper function to process an RTE_FLOW_ITEM_TYPE_IPV6 entry in the
+ * configuration of the RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action. Fills the
+ * IPv6 address fields in the encapsulation parameters structure. The item
+ * must be prevalidated; no validation checks are performed by this function.
+ *
+ * @param[in] spec
+ *   RTE_FLOW_ITEM_TYPE_IPV6 entry specification.
+ * @param[out] encap
+ *   Structure to fill the gathered IPv6 address data.
+ */
+static void
+flow_tcf_parse_vxlan_encap_ipv6(const struct rte_flow_item_ipv6 *spec,
+				struct flow_tcf_vxlan_encap *encap)
+{
+	/* Item must be validated before. No redundant checks. */
+	assert(spec);
+	memcpy(encap->ipv6.dst, spec->hdr.dst_addr, IPV6_ADDR_LEN);
+	memcpy(encap->ipv6.src, spec->hdr.src_addr, IPV6_ADDR_LEN);
+	encap->mask |= FLOW_TCF_ENCAP_IPV6_SRC |
+		       FLOW_TCF_ENCAP_IPV6_DST;
+}
+
+/**
+ * Helper function to process an RTE_FLOW_ITEM_TYPE_UDP entry in the
+ * configuration of the RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action. Fills the
+ * UDP port fields in the encapsulation parameters structure. The item
+ * must be prevalidated; no validation checks are performed by this function.
+ *
+ * @param[in] spec
+ *   RTE_FLOW_ITEM_TYPE_UDP entry specification.
+ * @param[in] mask
+ *   RTE_FLOW_ITEM_TYPE_UDP entry mask.
+ * @param[out] encap
+ *   Structure to fill the gathered UDP port data.
+ */
+static void
+flow_tcf_parse_vxlan_encap_udp(const struct rte_flow_item_udp *spec,
+			       const struct rte_flow_item_udp *mask,
+			       struct flow_tcf_vxlan_encap *encap)
+{
+	assert(spec);
+	encap->udp.dst = spec->hdr.dst_port;
+	encap->mask |= FLOW_TCF_ENCAP_UDP_DST;
+	if (!mask || mask->hdr.src_port != RTE_BE16(0x0000)) {
+		encap->udp.src = spec->hdr.src_port;
+		encap->mask |= FLOW_TCF_ENCAP_UDP_SRC;
+	}
+}
+
+/**
+ * Helper function to process an RTE_FLOW_ITEM_TYPE_VXLAN entry in the
+ * configuration of the RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action. Fills the
+ * VNI fields in the encapsulation parameters structure. The item must be
+ * prevalidated; no validation checks are performed by this function.
+ *
+ * @param[in] spec
+ *   RTE_FLOW_ITEM_TYPE_VXLAN entry specification.
+ * @param[out] encap
+ *   Structure to fill the gathered VNI data.
+ */
+static void
+flow_tcf_parse_vxlan_encap_vni(const struct rte_flow_item_vxlan *spec,
+			       struct flow_tcf_vxlan_encap *encap)
+{
+	/* Item must be validated before. No redundant checks. */
+	assert(spec);
+	memcpy(encap->vxlan.vni, spec->vni, sizeof(encap->vxlan.vni));
+	encap->mask |= FLOW_TCF_ENCAP_VXLAN_VNI;
+}
+
+/**
+ * Populate the consolidated encapsulation object from the list of
+ * pattern items.
+ *
+ * Helper function to process the configuration of actions such as
+ * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. The item list must already be
+ * validated; there is no way to return a meaningful error.
+ *
+ * @param[in] action
+ *   RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action object with the
+ *   list of pattern items to gather data from.
+ * @param[out] encap
+ *   Structure to fill gathered data.
+ */
+static void
+flow_tcf_vxlan_encap_parse(const struct rte_flow_action *action,
+			   struct flow_tcf_vxlan_encap *encap)
+{
+	union {
+		const struct rte_flow_item_eth *eth;
+		const struct rte_flow_item_ipv4 *ipv4;
+		const struct rte_flow_item_ipv6 *ipv6;
+		const struct rte_flow_item_udp *udp;
+		const struct rte_flow_item_vxlan *vxlan;
+	} spec, mask;
+	const struct rte_flow_item *items;
+
+	assert(action->type == RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP);
+	assert(action->conf);
+
+	items = ((const struct rte_flow_action_vxlan_encap *)
+		 action->conf)->definition;
+	assert(items);
+	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+		switch (items->type) {
+		case RTE_FLOW_ITEM_TYPE_VOID:
+			break;
+		case RTE_FLOW_ITEM_TYPE_ETH:
+			mask.eth = items->mask;
+			spec.eth = items->spec;
+			flow_tcf_parse_vxlan_encap_eth
+				(spec.eth, mask.eth, encap);
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV4:
+			spec.ipv4 = items->spec;
+			flow_tcf_parse_vxlan_encap_ipv4(spec.ipv4, encap);
+			break;
+		case RTE_FLOW_ITEM_TYPE_IPV6:
+			spec.ipv6 = items->spec;
+			flow_tcf_parse_vxlan_encap_ipv6(spec.ipv6, encap);
+			break;
+		case RTE_FLOW_ITEM_TYPE_UDP:
+			mask.udp = items->mask;
+			spec.udp = items->spec;
+			flow_tcf_parse_vxlan_encap_udp
+				(spec.udp, mask.udp, encap);
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			spec.vxlan = items->spec;
+			flow_tcf_parse_vxlan_encap_vni(spec.vxlan, encap);
+			break;
+		default:
+			assert(false);
+			DRV_LOG(WARNING,
+				"unsupported item %p type %d,"
+				" items must be validated"
+				" before flow creation",
+				(const void *)items, items->type);
+			encap->mask = 0;
+			return;
+		}
+	}
+}
+
+/**
  * Translate flow for Linux TC flower and construct Netlink message.
  *
  * @param[in] priv
@@ -2832,6 +3067,7 @@ struct pedit_parser {
 		const struct rte_flow_item_ipv6 *ipv6;
 		const struct rte_flow_item_tcp *tcp;
 		const struct rte_flow_item_udp *udp;
+		const struct rte_flow_item_vxlan *vxlan;
 	} spec, mask;
 	union {
 		const struct rte_flow_action_port_id *port_id;
@@ -2842,6 +3078,18 @@ struct pedit_parser {
 		const struct rte_flow_action_of_set_vlan_pcp *
 			of_set_vlan_pcp;
 	} conf;
+	union {
+		struct flow_tcf_tunnel_hdr *hdr;
+		struct flow_tcf_vxlan_decap *vxlan;
+	} decap = {
+		.hdr = NULL,
+	};
+	union {
+		struct flow_tcf_tunnel_hdr *hdr;
+		struct flow_tcf_vxlan_encap *vxlan;
+	} encap = {
+		.hdr = NULL,
+	};
 	struct flow_tcf_ptoi ptoi[PTOI_TABLE_SZ_MAX(dev)];
 	struct nlmsghdr *nlh = dev_flow->tcf.nlh;
 	struct tcmsg *tcm = dev_flow->tcf.tcm;
@@ -2859,6 +3107,20 @@ struct pedit_parser {
 
 	claim_nonzero(flow_tcf_build_ptoi_table(dev, ptoi,
 						PTOI_TABLE_SZ_MAX(dev)));
+	if (dev_flow->tcf.tunnel) {
+		switch (dev_flow->tcf.tunnel->type) {
+		case FLOW_TCF_TUNACT_VXLAN_DECAP:
+			decap.vxlan = dev_flow->tcf.vxlan_decap;
+			break;
+		case FLOW_TCF_TUNACT_VXLAN_ENCAP:
+			encap.vxlan = dev_flow->tcf.vxlan_encap;
+			break;
+		/* New tunnel actions can be added here. */
+		default:
+			assert(false);
+			break;
+		}
+	}
 	nlh = dev_flow->tcf.nlh;
 	tcm = dev_flow->tcf.tcm;
 	/* Prepare API must have been called beforehand.
 	 */
@@ -2876,7 +3138,6 @@ struct pedit_parser {
 	mnl_attr_put_u32(nlh, TCA_CHAIN, attr->group);
 	mnl_attr_put_strz(nlh, TCA_KIND, "flower");
 	na_flower = mnl_attr_nest_start(nlh, TCA_OPTIONS);
-	mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, TCA_CLS_FLAGS_SKIP_SW);
 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
 		unsigned int i;
 
@@ -2904,7 +3165,9 @@ struct pedit_parser {
 			tcm->tcm_ifindex = ptoi[i].ifindex;
 			break;
 		case RTE_FLOW_ITEM_TYPE_ETH:
-			item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
+			item_flags |= (item_flags & MLX5_FLOW_LAYER_VXLAN) ?
+					MLX5_FLOW_LAYER_INNER_L2 :
+					MLX5_FLOW_LAYER_OUTER_L2;
 			mask.eth = flow_tcf_item_mask
 				(items, &rte_flow_item_eth_mask,
 				 &flow_tcf_mask_supported.eth,
@@ -2915,6 +3178,14 @@ struct pedit_parser {
 			if (mask.eth == &flow_tcf_mask_empty.eth)
 				break;
 			spec.eth = items->spec;
+			if (decap.vxlan &&
+			    !(item_flags & MLX5_FLOW_LAYER_VXLAN)) {
+				DRV_LOG(WARNING,
+					"outer L2 addresses cannot be forced"
+					" for vxlan decapsulation, parameter"
+					" ignored");
+				break;
+			}
 			if (mask.eth->type) {
 				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_ETH_TYPE,
 						 spec.eth->type);
@@ -2936,8 +3207,11 @@ struct pedit_parser {
 					     ETHER_ADDR_LEN,
 					     mask.eth->src.addr_bytes);
 			}
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		case RTE_FLOW_ITEM_TYPE_VLAN:
+			assert(!encap.hdr);
+			assert(!decap.hdr);
 			item_flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
 			mask.vlan = flow_tcf_item_mask
 				(items, &rte_flow_item_vlan_mask,
@@ -2969,6 +3243,7 @@ struct pedit_parser {
 						 rte_be_to_cpu_16
 						 (spec.vlan->tci &
 						  RTE_BE16(0x0fff)));
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
@@ -2979,36 +3254,52 @@ struct pedit_parser {
 				 sizeof(flow_tcf_mask_supported.ipv4),
 				 error);
 			assert(mask.ipv4);
-			if (!eth_type_set || !vlan_eth_type_set)
-				mnl_attr_put_u16(nlh,
+			spec.ipv4 = items->spec;
+			if (!decap.vxlan) {
+				if (!eth_type_set && !vlan_eth_type_set)
+					mnl_attr_put_u16(nlh,
 						 vlan_present ?
 						 TCA_FLOWER_KEY_VLAN_ETH_TYPE :
 						 TCA_FLOWER_KEY_ETH_TYPE,
 						 RTE_BE16(ETH_P_IP));
-			eth_type_set = 1;
-			vlan_eth_type_set = 1;
-			if (mask.ipv4 == &flow_tcf_mask_empty.ipv4)
-				break;
-			spec.ipv4 = items->spec;
-			if (mask.ipv4->hdr.next_proto_id) {
-				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
-						spec.ipv4->hdr.next_proto_id);
-				ip_proto_set = 1;
+				eth_type_set = 1;
+				vlan_eth_type_set = 1;
+				if (mask.ipv4 == &flow_tcf_mask_empty.ipv4)
+					break;
+				if (mask.ipv4->hdr.next_proto_id) {
+					mnl_attr_put_u8
+						(nlh, TCA_FLOWER_KEY_IP_PROTO,
+						 spec.ipv4->hdr.next_proto_id);
+					ip_proto_set = 1;
+				}
+			} else {
+				assert(mask.ipv4 != &flow_tcf_mask_empty.ipv4);
 			}
 			if (mask.ipv4->hdr.src_addr) {
-				mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_SRC,
-						 spec.ipv4->hdr.src_addr);
-				mnl_attr_put_u32(nlh,
-						 TCA_FLOWER_KEY_IPV4_SRC_MASK,
-						 mask.ipv4->hdr.src_addr);
+				mnl_attr_put_u32
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_IPV4_SRC :
+					 TCA_FLOWER_KEY_IPV4_SRC,
+					 spec.ipv4->hdr.src_addr);
+				mnl_attr_put_u32
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK :
+					 TCA_FLOWER_KEY_IPV4_SRC_MASK,
+					 mask.ipv4->hdr.src_addr);
 			}
 			if (mask.ipv4->hdr.dst_addr) {
-				mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_DST,
-						 spec.ipv4->hdr.dst_addr);
-				mnl_attr_put_u32(nlh,
-						 TCA_FLOWER_KEY_IPV4_DST_MASK,
-						 mask.ipv4->hdr.dst_addr);
+				mnl_attr_put_u32
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_IPV4_DST :
+					 TCA_FLOWER_KEY_IPV4_DST,
+					 spec.ipv4->hdr.dst_addr);
+				mnl_attr_put_u32
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_IPV4_DST_MASK :
+					 TCA_FLOWER_KEY_IPV4_DST_MASK,
+					 mask.ipv4->hdr.dst_addr);
 			}
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		case RTE_FLOW_ITEM_TYPE_IPV6:
 			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
@@ -3019,38 +3310,53 @@ struct pedit_parser {
 				 sizeof(flow_tcf_mask_supported.ipv6),
 				 error);
 			assert(mask.ipv6);
-			if (!eth_type_set || !vlan_eth_type_set)
-				mnl_attr_put_u16(nlh,
-						 vlan_present ?
-						 TCA_FLOWER_KEY_VLAN_ETH_TYPE :
-						 TCA_FLOWER_KEY_ETH_TYPE,
-						 RTE_BE16(ETH_P_IPV6));
-			eth_type_set = 1;
-			vlan_eth_type_set = 1;
-			if (mask.ipv6 == &flow_tcf_mask_empty.ipv6)
-				break;
 			spec.ipv6 = items->spec;
-			if (mask.ipv6->hdr.proto) {
-				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
-						spec.ipv6->hdr.proto);
-				ip_proto_set = 1;
+			if (!decap.vxlan) {
+				if (!eth_type_set || !vlan_eth_type_set) {
+					mnl_attr_put_u16(nlh,
+						 vlan_present ?
+						 TCA_FLOWER_KEY_VLAN_ETH_TYPE :
+						 TCA_FLOWER_KEY_ETH_TYPE,
+						 RTE_BE16(ETH_P_IPV6));
+				}
+				eth_type_set = 1;
+				vlan_eth_type_set = 1;
+				if (mask.ipv6 == &flow_tcf_mask_empty.ipv6)
+					break;
+				if (mask.ipv6->hdr.proto) {
+					mnl_attr_put_u8
+						(nlh, TCA_FLOWER_KEY_IP_PROTO,
+						 spec.ipv6->hdr.proto);
+					ip_proto_set = 1;
+				}
+			} else {
+				assert(mask.ipv6 != &flow_tcf_mask_empty.ipv6);
 			}
 			if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr)) {
-				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC,
-					     sizeof(spec.ipv6->hdr.src_addr),
+				mnl_attr_put(nlh, decap.vxlan ?
+					     TCA_FLOWER_KEY_ENC_IPV6_SRC :
+					     TCA_FLOWER_KEY_IPV6_SRC,
+					     IPV6_ADDR_LEN,
 					     spec.ipv6->hdr.src_addr);
-				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC_MASK,
-					     sizeof(mask.ipv6->hdr.src_addr),
+				mnl_attr_put(nlh, decap.vxlan ?
					     TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK :
+					     TCA_FLOWER_KEY_IPV6_SRC_MASK,
+					     IPV6_ADDR_LEN,
 					     mask.ipv6->hdr.src_addr);
 			}
 			if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr)) {
-				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST,
-					     sizeof(spec.ipv6->hdr.dst_addr),
+				mnl_attr_put(nlh, decap.vxlan ?
+					     TCA_FLOWER_KEY_ENC_IPV6_DST :
+					     TCA_FLOWER_KEY_IPV6_DST,
+					     IPV6_ADDR_LEN,
 					     spec.ipv6->hdr.dst_addr);
-				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST_MASK,
-					     sizeof(mask.ipv6->hdr.dst_addr),
+				mnl_attr_put(nlh, decap.vxlan ?
+					     TCA_FLOWER_KEY_ENC_IPV6_DST_MASK :
+					     TCA_FLOWER_KEY_IPV6_DST_MASK,
+					     IPV6_ADDR_LEN,
 					     mask.ipv6->hdr.dst_addr);
 			}
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
@@ -3061,26 +3367,45 @@ struct pedit_parser {
 				 sizeof(flow_tcf_mask_supported.udp),
 				 error);
 			assert(mask.udp);
-			if (!ip_proto_set)
-				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
-						IPPROTO_UDP);
-			if (mask.udp == &flow_tcf_mask_empty.udp)
-				break;
 			spec.udp = items->spec;
+			if (!decap.vxlan) {
+				if (!ip_proto_set)
+					mnl_attr_put_u8
+						(nlh, TCA_FLOWER_KEY_IP_PROTO,
+						 IPPROTO_UDP);
+				if (mask.udp == &flow_tcf_mask_empty.udp)
+					break;
+			} else {
+				assert(mask.udp != &flow_tcf_mask_empty.udp);
+				decap.vxlan->udp_port =
+					rte_be_to_cpu_16
+						(spec.udp->hdr.dst_port);
+			}
 			if (mask.udp->hdr.src_port) {
-				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_SRC,
-						 spec.udp->hdr.src_port);
-				mnl_attr_put_u16(nlh,
-						 TCA_FLOWER_KEY_UDP_SRC_MASK,
-						 mask.udp->hdr.src_port);
+				mnl_attr_put_u16
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT :
+					 TCA_FLOWER_KEY_UDP_SRC,
+					 spec.udp->hdr.src_port);
+				mnl_attr_put_u16
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK :
+					 TCA_FLOWER_KEY_UDP_SRC_MASK,
+					 mask.udp->hdr.src_port);
 			}
 			if (mask.udp->hdr.dst_port) {
-				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_DST,
-						 spec.udp->hdr.dst_port);
-				mnl_attr_put_u16(nlh,
-						 TCA_FLOWER_KEY_UDP_DST_MASK,
-						 mask.udp->hdr.dst_port);
+				mnl_attr_put_u16
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_UDP_DST_PORT :
+					 TCA_FLOWER_KEY_UDP_DST,
+					 spec.udp->hdr.dst_port);
+				mnl_attr_put_u16
+					(nlh, decap.vxlan ?
+					 TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK :
+					 TCA_FLOWER_KEY_UDP_DST_MASK,
+					 mask.udp->hdr.dst_port);
 			}
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		case RTE_FLOW_ITEM_TYPE_TCP:
 			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
@@ -3123,6 +3448,16 @@ struct pedit_parser {
 						 rte_cpu_to_be_16
 						 (mask.tcp->hdr.tcp_flags));
 			}
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
+			break;
+		case RTE_FLOW_ITEM_TYPE_VXLAN:
+			assert(decap.vxlan);
+			item_flags |= MLX5_FLOW_LAYER_VXLAN;
+			spec.vxlan = items->spec;
+			mnl_attr_put_u32(nlh,
+					 TCA_FLOWER_KEY_ENC_KEY_ID,
+					 vxlan_vni_as_be32(spec.vxlan->vni));
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
@@ -3156,6 +3491,14 @@ struct pedit_parser {
 			mnl_attr_put_strz(nlh, TCA_ACT_KIND, "mirred");
 			na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
 			assert(na_act);
+			if (encap.hdr) {
+				assert(dev_flow->tcf.tunnel);
+				dev_flow->tcf.tunnel->ifindex_ptr =
+					&((struct tc_mirred *)
+					mnl_attr_get_payload
+					(mnl_nlmsg_get_payload_tail
+						(nlh)))->ifindex;
+			}
 			mnl_attr_put(nlh, TCA_MIRRED_PARMS,
 				     sizeof(struct tc_mirred),
 				     &(struct tc_mirred){
@@ -3273,6 +3616,74 @@ struct pedit_parser {
 					conf.of_set_vlan_pcp->vlan_pcp;
 			}
 			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+			assert(decap.vxlan);
+			assert(dev_flow->tcf.tunnel);
+			dev_flow->tcf.tunnel->ifindex_ptr =
+				(unsigned int *)&tcm->tcm_ifindex;
+			na_act_index =
+				mnl_attr_nest_start(nlh, na_act_index_cur++);
+			assert(na_act_index);
+			mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key");
+			na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
+			assert(na_act);
+			mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS,
+				     sizeof(struct tc_tunnel_key),
+				     &(struct tc_tunnel_key){
+					.action = TC_ACT_PIPE,
+					.t_action = TCA_TUNNEL_KEY_ACT_RELEASE,
+					});
+			mnl_attr_nest_end(nlh, na_act);
+			mnl_attr_nest_end(nlh, na_act_index);
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
+			break;
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+			assert(encap.vxlan);
+			flow_tcf_vxlan_encap_parse(actions, encap.vxlan);
+			na_act_index =
+				mnl_attr_nest_start(nlh, na_act_index_cur++);
+			assert(na_act_index);
+			mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key");
+			na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
+			assert(na_act);
+			mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS,
+				     sizeof(struct tc_tunnel_key),
+				     &(struct tc_tunnel_key){
+					.action = TC_ACT_PIPE,
+					.t_action = TCA_TUNNEL_KEY_ACT_SET,
+					});
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_UDP_DST)
+				mnl_attr_put_u16(nlh,
+					 TCA_TUNNEL_KEY_ENC_DST_PORT,
+					 encap.vxlan->udp.dst);
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_SRC)
+				mnl_attr_put_u32(nlh,
+					 TCA_TUNNEL_KEY_ENC_IPV4_SRC,
+					 encap.vxlan->ipv4.src);
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_DST)
+				mnl_attr_put_u32(nlh,
+					 TCA_TUNNEL_KEY_ENC_IPV4_DST,
+					 encap.vxlan->ipv4.dst);
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_SRC)
+				mnl_attr_put(nlh,
+					 TCA_TUNNEL_KEY_ENC_IPV6_SRC,
+					 sizeof(encap.vxlan->ipv6.src),
+					 &encap.vxlan->ipv6.src);
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_DST)
+				mnl_attr_put(nlh,
+					 TCA_TUNNEL_KEY_ENC_IPV6_DST,
+					 sizeof(encap.vxlan->ipv6.dst),
+					 &encap.vxlan->ipv6.dst);
+			if (encap.vxlan->mask & FLOW_TCF_ENCAP_VXLAN_VNI)
+				mnl_attr_put_u32(nlh,
+					 TCA_TUNNEL_KEY_ENC_KEY_ID,
+					 vxlan_vni_as_be32
+						(encap.vxlan->vxlan.vni));
+			mnl_attr_put_u8(nlh, TCA_TUNNEL_KEY_NO_CSUM, 0);
+			mnl_attr_nest_end(nlh, na_act);
+			mnl_attr_nest_end(nlh, na_act_index);
+			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
+			break;
 		case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
 		case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
 		case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
@@ -3299,7 +3710,13 @@ struct pedit_parser {
 	assert(na_flower);
 	assert(na_flower_act);
 	mnl_attr_nest_end(nlh, na_flower_act);
+	mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS,
+			 decap.vxlan ?
			 0 : TCA_CLS_FLAGS_SKIP_SW);
 	mnl_attr_nest_end(nlh, na_flower);
+	if (dev_flow->tcf.tunnel && dev_flow->tcf.tunnel->ifindex_ptr)
+		dev_flow->tcf.tunnel->ifindex_org =
+			*dev_flow->tcf.tunnel->ifindex_ptr;
+	assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
 	return 0;
 }
 
-- 
1.8.3.1