From: Yongseok Koh
To: Slava Ovsiienko
CC: Shahaf Shuler, "dev@dpdk.org"
Date: Fri, 2 Nov 2018 21:53:50 +0000
Message-ID: <20181102215336.GC15737@mtidpdk.mti.labs.mlnx>
References: <1541074741-41368-1-git-send-email-viacheslavo@mellanox.com>
 <1541181152-15788-1-git-send-email-viacheslavo@mellanox.com>
 <1541181152-15788-9-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1541181152-15788-9-git-send-email-viacheslavo@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v4 08/13] net/mlx5: add VXLAN support to flow translate routine

On Fri, Nov 02, 2018 at 10:53:20AM -0700, Slava Ovsiienko wrote:
> This part of patchset adds support of VXLAN-related items and
> actions to the flow translation routine. Later some tunnel types,
> other than VXLAN can be added (GRE). No VTEP devices are created at
> this point, the flow rule is just translated, not applied yet.
> 
> Suggested-by: Adrien Mazarguil
> Signed-off-by: Viacheslav Ovsiienko
> ---

Please fix the indentation issues I pointed out in the last review, then
submit the next version with my acked-by tag. Find my comments below.
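For anyone following the series without the earlier patches: the VXLAN_ENCAP
action consumed by the new parser is just a list of rte_flow pattern items
supplied by the application. A minimal sketch of what I assume such an action
looks like on the application side (the MAC/IP addresses, UDP port and VNI
below are made-up example values, not taken from this patch):

	#include <rte_flow.h>
	#include <rte_byteorder.h>

	static const struct rte_flow_item_eth enc_eth = {
		.dst.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
	};
	static const struct rte_flow_item_ipv4 enc_ipv4 = {
		.hdr = {
			.src_addr = RTE_BE32(0xc0a80001), /* 192.168.0.1 */
			.dst_addr = RTE_BE32(0xc0a80002), /* 192.168.0.2 */
		},
	};
	static const struct rte_flow_item_udp enc_udp = {
		.hdr.dst_port = RTE_BE16(4789), /* IANA VXLAN port */
	};
	static const struct rte_flow_item_vxlan enc_vxlan = {
		.vni = { 0x00, 0x00, 0x2a }, /* VNI 42 */
	};
	/* Encapsulation header description walked by the new parser. */
	static struct rte_flow_item enc_items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &enc_eth },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &enc_ipv4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP, .spec = &enc_udp },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN, .spec = &enc_vxlan },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	static const struct rte_flow_action_vxlan_encap enc_conf = {
		.definition = enc_items,
	};
	static const struct rte_flow_action enc_actions[] = {
		{
			.type = RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP,
			.conf = &enc_conf,
		},
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

The new flow_tcf_vxlan_encap_parse() below gathers exactly these specs into
the flow_tcf_vxlan_encap structure before the rule is translated to tc.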
> drivers/net/mlx5/mlx5_flow_tcf.c | 537 ++++++++++++++++++++++++++++++++++-----
> 1 file changed, 477 insertions(+), 60 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index 017f2bd..b7a0c72 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -2799,6 +2799,241 @@ struct pedit_parser {
>  }
>  
>  /**
> + * Convert VXLAN VNI to 32-bit integer.
> + *
> + * @param[in] vni
> + *   VXLAN VNI in 24-bit wire format.
> + *
> + * @return
> + *   VXLAN VNI as a 32-bit integer value in network endian.
> + */
> +static inline rte_be32_t
> +vxlan_vni_as_be32(const uint8_t vni[3])
> +{
> +	union {
> +		uint8_t vni[4];
> +		rte_be32_t dword;
> +	} ret = {
> +		.vni = { 0, vni[0], vni[1], vni[2] },
> +	};
> +	return ret.dword;
> +}
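A quick way to sanity-check the VNI conversion, in case it helps other
reviewers (my own throwaway snippet, not part of the patch; it only needs
<assert.h> and <rte_byteorder.h> on top of the helper above):

	/* VNI bytes { 0x12, 0x34, 0x56 } must read back as 0x123456 once
	 * the returned big-endian dword is converted to CPU order. */
	static void
	check_vxlan_vni_as_be32(void)
	{
		static const uint8_t vni[3] = { 0x12, 0x34, 0x56 };

		assert(rte_be_to_cpu_32(vxlan_vni_as_be32(vni)) == 0x123456);
	}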
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_ETH entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the MAC address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_ETH entry specification.
> + * @param[in] mask
> + *   RTE_FLOW_ITEM_TYPE_ETH entry mask.
> + * @param[out] encap
> + *   Structure to fill the gathered MAC address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_eth(const struct rte_flow_item_eth *spec,
> +			       const struct rte_flow_item_eth *mask,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	if (!mask || !memcmp(&mask->dst,
> +			     &rte_flow_item_eth_mask.dst,
> +			     sizeof(rte_flow_item_eth_mask.dst))) {
> +		/*
> +		 * Ethernet addresses are not supported by
> +		 * tc as tunnel_key parameters. Destination
> +		 * address is needed to form encap packet
> +		 * header and retrieved by kernel from
> +		 * implicit sources (ARP table, etc),
> +		 * address masks are not supported at all.
> +		 */
> +		encap->eth.dst = spec->dst;
> +		encap->mask |= FLOW_TCF_ENCAP_ETH_DST;
> +	}
> +	if (!mask || !memcmp(&mask->src,
> +			     &rte_flow_item_eth_mask.src,
> +			     sizeof(rte_flow_item_eth_mask.src))) {
> +		/*
> +		 * Ethernet addresses are not supported by
> +		 * tc as tunnel_key parameters. Source ethernet
> +		 * address is ignored anyway.
> +		 */
> +		encap->eth.src = spec->src;
> +		encap->mask |= FLOW_TCF_ENCAP_ETH_SRC;
> +	}
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV4 entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV4 address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_IPV4 entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered IPV4 address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_ipv4(const struct rte_flow_item_ipv4 *spec,
> +				struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	encap->ipv4.dst = spec->hdr.dst_addr;
> +	encap->ipv4.src = spec->hdr.src_addr;
> +	encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC |
> +		       FLOW_TCF_ENCAP_IPV4_DST;
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV6 entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV6 address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_IPV6 entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered IPV6 address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_ipv6(const struct rte_flow_item_ipv6 *spec,
> +				struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	memcpy(encap->ipv6.dst, spec->hdr.dst_addr, IPV6_ADDR_LEN);
> +	memcpy(encap->ipv6.src, spec->hdr.src_addr, IPV6_ADDR_LEN);
> +	encap->mask |= FLOW_TCF_ENCAP_IPV6_SRC |
> +		       FLOW_TCF_ENCAP_IPV6_DST;
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_UDP entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the UDP port fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_UDP entry specification.
> + * @param[in] mask
> + *   RTE_FLOW_ITEM_TYPE_UDP entry mask.
> + * @param[out] encap
> + *   Structure to fill the gathered UDP port data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_udp(const struct rte_flow_item_udp *spec,
> +			       const struct rte_flow_item_udp *mask,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	assert(spec);
> +	encap->udp.dst = spec->hdr.dst_port;
> +	encap->mask |= FLOW_TCF_ENCAP_UDP_DST;
> +	if (!mask || mask->hdr.src_port != RTE_BE16(0x0000)) {
> +		encap->udp.src = spec->hdr.src_port;
> +		encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC;
> +	}
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_VXLAN entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the VNI fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_VXLAN entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered VNI address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_vni(const struct rte_flow_item_vxlan *spec,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. Do not redundant checks. */
> +	assert(spec);
> +	memcpy(encap->vxlan.vni, spec->vni, sizeof(encap->vxlan.vni));
> +	encap->mask |= FLOW_TCF_ENCAP_VXLAN_VNI;
> +}
> +
> +/**
> + * Populate consolidated encapsulation object from list of pattern items.
> + *
> + * Helper function to process configuration of action such as
> + * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. The item list should be
> + * validated, there is no way to return an meaningful error.
> + *
> + * @param[in] action
> + *   RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action object.
> + *   List of pattern items to gather data from.
> + * @param[out] src
> + *   Structure to fill gathered data.
> + */ > +static void > +flow_tcf_vxlan_encap_parse(const struct rte_flow_action *action, > + struct flow_tcf_vxlan_encap *encap) > +{ > + union { > + const struct rte_flow_item_eth *eth; > + const struct rte_flow_item_ipv4 *ipv4; > + const struct rte_flow_item_ipv6 *ipv6; > + const struct rte_flow_item_udp *udp; > + const struct rte_flow_item_vxlan *vxlan; > + } spec, mask; > + const struct rte_flow_item *items; > + > + assert(action->type =3D=3D RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP); > + assert(action->conf); > + > + items =3D ((const struct rte_flow_action_vxlan_encap *) > + action->conf)->definition; > + assert(items); > + for (; items->type !=3D RTE_FLOW_ITEM_TYPE_END; items++) { > + switch (items->type) { > + case RTE_FLOW_ITEM_TYPE_VOID: > + break; > + case RTE_FLOW_ITEM_TYPE_ETH: > + mask.eth =3D items->mask; > + spec.eth =3D items->spec; > + flow_tcf_parse_vxlan_encap_eth > + (spec.eth, mask.eth, encap); Indentation. flow_tcf_parse_vxlan_encap_eth(spec.eth, mask.eth, encap); > + break; > + case RTE_FLOW_ITEM_TYPE_IPV4: > + spec.ipv4 =3D items->spec; > + flow_tcf_parse_vxlan_encap_ipv4(spec.ipv4, encap); > + break; > + case RTE_FLOW_ITEM_TYPE_IPV6: > + spec.ipv6 =3D items->spec; > + flow_tcf_parse_vxlan_encap_ipv6(spec.ipv6, encap); > + break; > + case RTE_FLOW_ITEM_TYPE_UDP: > + mask.udp =3D items->mask; > + spec.udp =3D items->spec; > + flow_tcf_parse_vxlan_encap_udp > + (spec.udp, mask.udp, encap); Indentation. flow_tcf_parse_vxlan_encap_udp(spec.udp, mask.udp, encap); > + break; > + case RTE_FLOW_ITEM_TYPE_VXLAN: > + spec.vxlan =3D items->spec; > + flow_tcf_parse_vxlan_encap_vni(spec.vxlan, encap); > + break; > + default: > + assert(false); > + DRV_LOG(WARNING, > + "unsupported item %p type %d," > + " items must be validated" > + " before flow creation", > + (const void *)items, items->type); > + encap->mask =3D 0; > + return; > + } > + } > +} > + > +/** > * Translate flow for Linux TC flower and construct Netlink message. > * > * @param[in] priv > @@ -2832,6 +3067,7 @@ struct pedit_parser { > const struct rte_flow_item_ipv6 *ipv6; > const struct rte_flow_item_tcp *tcp; > const struct rte_flow_item_udp *udp; > + const struct rte_flow_item_vxlan *vxlan; > } spec, mask; > union { > const struct rte_flow_action_port_id *port_id; > @@ -2842,6 +3078,18 @@ struct pedit_parser { > const struct rte_flow_action_of_set_vlan_pcp * > of_set_vlan_pcp; > } conf; > + union { > + struct flow_tcf_tunnel_hdr *hdr; > + struct flow_tcf_vxlan_decap *vxlan; > + } decap =3D { > + .hdr =3D NULL, > + }; > + union { > + struct flow_tcf_tunnel_hdr *hdr; > + struct flow_tcf_vxlan_encap *vxlan; > + } encap =3D { > + .hdr =3D NULL, > + }; > struct flow_tcf_ptoi ptoi[PTOI_TABLE_SZ_MAX(dev)]; > struct nlmsghdr *nlh =3D dev_flow->tcf.nlh; > struct tcmsg *tcm =3D dev_flow->tcf.tcm; > @@ -2859,6 +3107,20 @@ struct pedit_parser { > =20 > claim_nonzero(flow_tcf_build_ptoi_table(dev, ptoi, > PTOI_TABLE_SZ_MAX(dev))); > + if (dev_flow->tcf.tunnel) { > + switch (dev_flow->tcf.tunnel->type) { > + case FLOW_TCF_TUNACT_VXLAN_DECAP: > + decap.vxlan =3D dev_flow->tcf.vxlan_decap; > + break; > + case FLOW_TCF_TUNACT_VXLAN_ENCAP: > + encap.vxlan =3D dev_flow->tcf.vxlan_encap; > + break; > + /* New tunnel actions can be added here. */ > + default: > + assert(false); > + break; > + } > + } > nlh =3D dev_flow->tcf.nlh; > tcm =3D dev_flow->tcf.tcm; > /* Prepare API must have been called beforehand. 
*/ > @@ -2876,7 +3138,6 @@ struct pedit_parser { > mnl_attr_put_u32(nlh, TCA_CHAIN, attr->group); > mnl_attr_put_strz(nlh, TCA_KIND, "flower"); > na_flower =3D mnl_attr_nest_start(nlh, TCA_OPTIONS); > - mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, TCA_CLS_FLAGS_SKIP_SW); > for (; items->type !=3D RTE_FLOW_ITEM_TYPE_END; items++) { > unsigned int i; > =20 > @@ -2904,7 +3165,9 @@ struct pedit_parser { > tcm->tcm_ifindex =3D ptoi[i].ifindex; > break; > case RTE_FLOW_ITEM_TYPE_ETH: > - item_flags |=3D MLX5_FLOW_LAYER_OUTER_L2; > + item_flags |=3D (item_flags & MLX5_FLOW_LAYER_VXLAN) ? > + MLX5_FLOW_LAYER_INNER_L2 : > + MLX5_FLOW_LAYER_OUTER_L2; Indentation. item_flags |=3D (item_flags & MLX5_FLOW_LAYER_VXLAN) ? MLX5_FLOW_LAYER_INNER_L2 : MLX5_FLOW_LAYER_OUTER_L2; > mask.eth =3D flow_tcf_item_mask > (items, &rte_flow_item_eth_mask, > &flow_tcf_mask_supported.eth, > @@ -2915,6 +3178,14 @@ struct pedit_parser { > if (mask.eth =3D=3D &flow_tcf_mask_empty.eth) > break; > spec.eth =3D items->spec; > + if (decap.vxlan && > + !(item_flags & MLX5_FLOW_LAYER_VXLAN)) { > + DRV_LOG(WARNING, > + "outer L2 addresses cannot be forced" > + " for vxlan decapsulation, parameter" > + " ignored"); > + break; > + } > if (mask.eth->type) { > mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_ETH_TYPE, > spec.eth->type); > @@ -2936,8 +3207,11 @@ struct pedit_parser { > ETHER_ADDR_LEN, > mask.eth->src.addr_bytes); > } > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > case RTE_FLOW_ITEM_TYPE_VLAN: > + assert(!encap.hdr); > + assert(!decap.hdr); > item_flags |=3D MLX5_FLOW_LAYER_OUTER_VLAN; > mask.vlan =3D flow_tcf_item_mask > (items, &rte_flow_item_vlan_mask, > @@ -2969,6 +3243,7 @@ struct pedit_parser { > rte_be_to_cpu_16 > (spec.vlan->tci & > RTE_BE16(0x0fff))); > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > case RTE_FLOW_ITEM_TYPE_IPV4: > item_flags |=3D MLX5_FLOW_LAYER_OUTER_L3_IPV4; > @@ -2979,36 +3254,52 @@ struct pedit_parser { > sizeof(flow_tcf_mask_supported.ipv4), > error); > assert(mask.ipv4); > - if (!eth_type_set || !vlan_eth_type_set) > - mnl_attr_put_u16(nlh, > + spec.ipv4 =3D items->spec; > + if (!decap.vxlan) { > + if (!eth_type_set && !vlan_eth_type_set) > + mnl_attr_put_u16(nlh, > vlan_present ? > TCA_FLOWER_KEY_VLAN_ETH_TYPE : > TCA_FLOWER_KEY_ETH_TYPE, > RTE_BE16(ETH_P_IP)); Here, mnl_attr_put_u16 (nlh, vlan_present ? TCA_FLOWER_KEY_VLAN_ETH_TYPE : TCA_FLOWER_KEY_ETH_TYPE, RTE_BE16(ETH_P_IP)); > - eth_type_set =3D 1; > - vlan_eth_type_set =3D 1; > - if (mask.ipv4 =3D=3D &flow_tcf_mask_empty.ipv4) > - break; > - spec.ipv4 =3D items->spec; > - if (mask.ipv4->hdr.next_proto_id) { > - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, > - spec.ipv4->hdr.next_proto_id); > - ip_proto_set =3D 1; > + eth_type_set =3D 1; > + vlan_eth_type_set =3D 1; > + if (mask.ipv4 =3D=3D &flow_tcf_mask_empty.ipv4) > + break; > + if (mask.ipv4->hdr.next_proto_id) { > + mnl_attr_put_u8 > + (nlh, TCA_FLOWER_KEY_IP_PROTO, > + spec.ipv4->hdr.next_proto_id); > + ip_proto_set =3D 1; > + } > + } else { > + assert(mask.ipv4 !=3D &flow_tcf_mask_empty.ipv4); > } > if (mask.ipv4->hdr.src_addr) { > - mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_SRC, > - spec.ipv4->hdr.src_addr); > - mnl_attr_put_u32(nlh, > - TCA_FLOWER_KEY_IPV4_SRC_MASK, > - mask.ipv4->hdr.src_addr); > + mnl_attr_put_u32 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV4_SRC : > + TCA_FLOWER_KEY_IPV4_SRC, > + spec.ipv4->hdr.src_addr); > + mnl_attr_put_u32 > + (nlh, decap.vxlan ? 
> + TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK : > + TCA_FLOWER_KEY_IPV4_SRC_MASK, > + mask.ipv4->hdr.src_addr); > } > if (mask.ipv4->hdr.dst_addr) { > - mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_DST, > - spec.ipv4->hdr.dst_addr); > - mnl_attr_put_u32(nlh, > - TCA_FLOWER_KEY_IPV4_DST_MASK, > - mask.ipv4->hdr.dst_addr); > + mnl_attr_put_u32 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV4_DST : > + TCA_FLOWER_KEY_IPV4_DST, > + spec.ipv4->hdr.dst_addr); > + mnl_attr_put_u32 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV4_DST_MASK : > + TCA_FLOWER_KEY_IPV4_DST_MASK, > + mask.ipv4->hdr.dst_addr); > } > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > case RTE_FLOW_ITEM_TYPE_IPV6: > item_flags |=3D MLX5_FLOW_LAYER_OUTER_L3_IPV6; > @@ -3019,38 +3310,53 @@ struct pedit_parser { > sizeof(flow_tcf_mask_supported.ipv6), > error); > assert(mask.ipv6); > - if (!eth_type_set || !vlan_eth_type_set) > - mnl_attr_put_u16(nlh, > - vlan_present ? > - TCA_FLOWER_KEY_VLAN_ETH_TYPE : > - TCA_FLOWER_KEY_ETH_TYPE, > - RTE_BE16(ETH_P_IPV6)); > - eth_type_set =3D 1; > - vlan_eth_type_set =3D 1; > - if (mask.ipv6 =3D=3D &flow_tcf_mask_empty.ipv6) > - break; > spec.ipv6 =3D items->spec; > - if (mask.ipv6->hdr.proto) { > - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, > - spec.ipv6->hdr.proto); > - ip_proto_set =3D 1; > + if (!decap.vxlan) { > + if (!eth_type_set || !vlan_eth_type_set) { > + mnl_attr_put_u16(nlh, > + vlan_present ? > + TCA_FLOWER_KEY_VLAN_ETH_TYPE : > + TCA_FLOWER_KEY_ETH_TYPE, > + RTE_BE16(ETH_P_IPV6)); Here, mnl_attr_put_u16 (nlh, vlan_present ? TCA_FLOWER_KEY_VLAN_ETH_TYPE : TCA_FLOWER_KEY_ETH_TYPE, RTE_BE16(ETH_P_IPV6)); > + } > + eth_type_set =3D 1; > + vlan_eth_type_set =3D 1; > + if (mask.ipv6 =3D=3D &flow_tcf_mask_empty.ipv6) > + break; > + if (mask.ipv6->hdr.proto) { > + mnl_attr_put_u8 > + (nlh, TCA_FLOWER_KEY_IP_PROTO, > + spec.ipv6->hdr.proto); > + ip_proto_set =3D 1; > + } > + } else { > + assert(mask.ipv6 !=3D &flow_tcf_mask_empty.ipv6); > } > if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr)) { > - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC, > - sizeof(spec.ipv6->hdr.src_addr), > + mnl_attr_put(nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV6_SRC : > + TCA_FLOWER_KEY_IPV6_SRC, > + IPV6_ADDR_LEN, > spec.ipv6->hdr.src_addr); > - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC_MASK, > - sizeof(mask.ipv6->hdr.src_addr), > + mnl_attr_put(nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK : > + TCA_FLOWER_KEY_IPV6_SRC_MASK, > + IPV6_ADDR_LEN, > mask.ipv6->hdr.src_addr); > } > if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr)) { > - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST, > - sizeof(spec.ipv6->hdr.dst_addr), > + mnl_attr_put(nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_IPV6_DST : > + TCA_FLOWER_KEY_IPV6_DST, > + IPV6_ADDR_LEN, > spec.ipv6->hdr.dst_addr); > - mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST_MASK, > - sizeof(mask.ipv6->hdr.dst_addr), > + mnl_attr_put(nlh, decap.vxlan ? 
> + TCA_FLOWER_KEY_ENC_IPV6_DST_MASK : > + TCA_FLOWER_KEY_IPV6_DST_MASK, > + IPV6_ADDR_LEN, > mask.ipv6->hdr.dst_addr); > } > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > case RTE_FLOW_ITEM_TYPE_UDP: > item_flags |=3D MLX5_FLOW_LAYER_OUTER_L4_UDP; > @@ -3061,26 +3367,45 @@ struct pedit_parser { > sizeof(flow_tcf_mask_supported.udp), > error); > assert(mask.udp); > - if (!ip_proto_set) > - mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO, > - IPPROTO_UDP); > - if (mask.udp =3D=3D &flow_tcf_mask_empty.udp) > - break; > spec.udp =3D items->spec; > + if (!decap.vxlan) { > + if (!ip_proto_set) > + mnl_attr_put_u8 > + (nlh, TCA_FLOWER_KEY_IP_PROTO, > + IPPROTO_UDP); > + if (mask.udp =3D=3D &flow_tcf_mask_empty.udp) > + break; > + } else { > + assert(mask.udp !=3D &flow_tcf_mask_empty.udp); > + decap.vxlan->udp_port =3D > + rte_be_to_cpu_16 > + (spec.udp->hdr.dst_port); > + } > if (mask.udp->hdr.src_port) { > - mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_SRC, > - spec.udp->hdr.src_port); > - mnl_attr_put_u16(nlh, > - TCA_FLOWER_KEY_UDP_SRC_MASK, > - mask.udp->hdr.src_port); > + mnl_attr_put_u16 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT : > + TCA_FLOWER_KEY_UDP_SRC, > + spec.udp->hdr.src_port); > + mnl_attr_put_u16 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK : > + TCA_FLOWER_KEY_UDP_SRC_MASK, > + mask.udp->hdr.src_port); > } > if (mask.udp->hdr.dst_port) { > - mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_DST, > - spec.udp->hdr.dst_port); > - mnl_attr_put_u16(nlh, > - TCA_FLOWER_KEY_UDP_DST_MASK, > - mask.udp->hdr.dst_port); > + mnl_attr_put_u16 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_UDP_DST_PORT : > + TCA_FLOWER_KEY_UDP_DST, > + spec.udp->hdr.dst_port); > + mnl_attr_put_u16 > + (nlh, decap.vxlan ? > + TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK : > + TCA_FLOWER_KEY_UDP_DST_MASK, > + mask.udp->hdr.dst_port); > } > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > case RTE_FLOW_ITEM_TYPE_TCP: > item_flags |=3D MLX5_FLOW_LAYER_OUTER_L4_TCP; > @@ -3123,6 +3448,16 @@ struct pedit_parser { > rte_cpu_to_be_16 > (mask.tcp->hdr.tcp_flags)); > } > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > + break; > + case RTE_FLOW_ITEM_TYPE_VXLAN: > + assert(decap.vxlan); > + item_flags |=3D MLX5_FLOW_LAYER_VXLAN; > + spec.vxlan =3D items->spec; > + mnl_attr_put_u32(nlh, > + TCA_FLOWER_KEY_ENC_KEY_ID, > + vxlan_vni_as_be32(spec.vxlan->vni)); > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > break; > default: > return rte_flow_error_set(error, ENOTSUP, > @@ -3156,6 +3491,14 @@ struct pedit_parser { > mnl_attr_put_strz(nlh, TCA_ACT_KIND, "mirred"); > na_act =3D mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); > assert(na_act); > + if (encap.hdr) { > + assert(dev_flow->tcf.tunnel); > + dev_flow->tcf.tunnel->ifindex_ptr =3D > + &((struct tc_mirred *) > + mnl_attr_get_payload > + (mnl_nlmsg_get_payload_tail > + (nlh)))->ifindex; > + } > mnl_attr_put(nlh, TCA_MIRRED_PARMS, > sizeof(struct tc_mirred), > &(struct tc_mirred){ > @@ -3273,6 +3616,74 @@ struct pedit_parser { > conf.of_set_vlan_pcp->vlan_pcp; > } > break; > + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: > + assert(decap.vxlan); > + assert(dev_flow->tcf.tunnel); > + dev_flow->tcf.tunnel->ifindex_ptr =3D > + (unsigned int *)&tcm->tcm_ifindex; > + na_act_index =3D > + mnl_attr_nest_start(nlh, na_act_index_cur++); > + assert(na_act_index); > + mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key"); > + na_act =3D mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); > + assert(na_act); > + mnl_attr_put(nlh, 
TCA_TUNNEL_KEY_PARMS, > + sizeof(struct tc_tunnel_key), > + &(struct tc_tunnel_key){ > + .action =3D TC_ACT_PIPE, > + .t_action =3D TCA_TUNNEL_KEY_ACT_RELEASE, > + }); > + mnl_attr_nest_end(nlh, na_act); > + mnl_attr_nest_end(nlh, na_act_index); > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > + break; > + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: > + assert(encap.vxlan); > + flow_tcf_vxlan_encap_parse(actions, encap.vxlan); > + na_act_index =3D > + mnl_attr_nest_start(nlh, na_act_index_cur++); > + assert(na_act_index); > + mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key"); > + na_act =3D mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS); > + assert(na_act); > + mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS, > + sizeof(struct tc_tunnel_key), > + &(struct tc_tunnel_key){ > + .action =3D TC_ACT_PIPE, > + .t_action =3D TCA_TUNNEL_KEY_ACT_SET, > + }); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_UDP_DST) > + mnl_attr_put_u16(nlh, > + TCA_TUNNEL_KEY_ENC_DST_PORT, > + encap.vxlan->udp.dst); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_SRC) > + mnl_attr_put_u32(nlh, > + TCA_TUNNEL_KEY_ENC_IPV4_SRC, > + encap.vxlan->ipv4.src); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_DST) > + mnl_attr_put_u32(nlh, > + TCA_TUNNEL_KEY_ENC_IPV4_DST, > + encap.vxlan->ipv4.dst); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_SRC) > + mnl_attr_put(nlh, > + TCA_TUNNEL_KEY_ENC_IPV6_SRC, > + sizeof(encap.vxlan->ipv6.src), > + &encap.vxlan->ipv6.src); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_DST) > + mnl_attr_put(nlh, > + TCA_TUNNEL_KEY_ENC_IPV6_DST, > + sizeof(encap.vxlan->ipv6.dst), > + &encap.vxlan->ipv6.dst); > + if (encap.vxlan->mask & FLOW_TCF_ENCAP_VXLAN_VNI) > + mnl_attr_put_u32(nlh, > + TCA_TUNNEL_KEY_ENC_KEY_ID, > + vxlan_vni_as_be32 > + (encap.vxlan->vxlan.vni)); > + mnl_attr_put_u8(nlh, TCA_TUNNEL_KEY_NO_CSUM, 0); > + mnl_attr_nest_end(nlh, na_act); > + mnl_attr_nest_end(nlh, na_act_index); > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > + break; > case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: > case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: > case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: > @@ -3299,7 +3710,13 @@ struct pedit_parser { > assert(na_flower); > assert(na_flower_act); > mnl_attr_nest_end(nlh, na_flower_act); > + mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, > + decap.vxlan ? 0 : TCA_CLS_FLAGS_SKIP_SW); Last one. mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, decap.vxlan ? 0 : TCA_CLS_FLAGS_SKIP_SW); Thanks, Yongseok > mnl_attr_nest_end(nlh, na_flower); > + if (dev_flow->tcf.tunnel && dev_flow->tcf.tunnel->ifindex_ptr) > + dev_flow->tcf.tunnel->ifindex_org =3D > + *dev_flow->tcf.tunnel->ifindex_ptr; > + assert(dev_flow->tcf.nlsize >=3D nlh->nlmsg_len); > return 0; > } > =20 > --=20 > 1.8.3.1 >=20