From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yongseok Koh
To: Slava Ovsiienko
CC: Shahaf Shuler, "dev@dpdk.org"
Date: Thu, 1 Nov 2018 21:18:38 +0000
Message-ID: <20181101211829.GI6118@mtidpdk.mti.labs.mlnx>
References: <1539612815-47199-1-git-send-email-viacheslavo@mellanox.com>
 <1541074741-41368-1-git-send-email-viacheslavo@mellanox.com>
 <1541074741-41368-9-git-send-email-viacheslavo@mellanox.com>
In-Reply-To: <1541074741-41368-9-git-send-email-viacheslavo@mellanox.com>
Subject: Re: [dpdk-dev] [PATCH v3 08/13] net/mlx5: add VXLAN support to flow translate routine

On Thu, Nov 01, 2018 at 05:19:30AM -0700, Slava Ovsiienko wrote:
> This part of the patchset adds support for VXLAN-related items and
> actions to the flow translation routine. Other tunnel types (e.g. GRE)
> can be added later. No VTEP devices are created at this point; the
> flow rule is just translated, not applied yet.
> 
> Suggested-by: Adrien Mazarguil
> Signed-off-by: Viacheslav Ovsiienko
> ---
>  drivers/net/mlx5/mlx5_flow_tcf.c | 535 ++++++++++++++++++++++++++++++++++-----
>  1 file changed, 472 insertions(+), 63 deletions(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index b5be264..c404a63 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -2020,8 +2020,8 @@ struct pedit_parser {
> 		if (ret < 0)
> 			return ret;
> 		item_flags |= (item_flags & MLX5_FLOW_LAYER_TUNNEL) ?
> -				MLX5_FLOW_LAYER_INNER_L2 :
> -				MLX5_FLOW_LAYER_OUTER_L2;
> +			MLX5_FLOW_LAYER_INNER_L2 :
> +			MLX5_FLOW_LAYER_OUTER_L2;

Irrelevant. Please remove.

> 		/* TODO:
> 		 * Redundant check due to different supported mask.
> 		 * Same for the rest of items.
> @@ -2179,7 +2179,7 @@ struct pedit_parser {
> 				return -rte_errno;
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_VXLAN:
> -			if (!(action_flags & RTE_FLOW_ACTION_TYPE_VXLAN_DECAP))
> +			if (!(action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP))

Shouldn't this be fixed in patch [6/13]?

> 				return rte_flow_error_set
> 					(error, ENOTSUP,
> 					 RTE_FLOW_ERROR_TYPE_ITEM,
> @@ -2762,6 +2762,241 @@ struct pedit_parser {
>  }
> 
>  /**
> + * Convert VXLAN VNI to 32-bit integer.
> + *
> + * @param[in] vni
> + *   VXLAN VNI in 24-bit wire format.
> + *
> + * @return
> + *   VXLAN VNI as a 32-bit integer value in network endian.
> + */
> +static inline rte_be32_t
> +vxlan_vni_as_be32(const uint8_t vni[3])
> +{
> +	union {
> +		uint8_t vni[4];
> +		rte_be32_t dword;
> +	} ret = {
> +		.vni = { 0, vni[0], vni[1], vni[2] },
> +	};
> +	return ret.dword;
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_ETH entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the MAC address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_ETH entry specification.
> + * @param[in] mask
> + *   RTE_FLOW_ITEM_TYPE_ETH entry mask.
> + * @param[out] encap
> + *   Structure to fill the gathered MAC address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_eth(const struct rte_flow_item_eth *spec,
> +			       const struct rte_flow_item_eth *mask,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	if (!mask || !memcmp(&mask->dst,
> +			     &rte_flow_item_eth_mask.dst,
> +			     sizeof(rte_flow_item_eth_mask.dst))) {
> +		/*
> +		 * Ethernet addresses are not supported by
> +		 * tc as tunnel_key parameters. Destination
> +		 * address is needed to form encap packet
> +		 * header and retrieved by kernel from
> +		 * implicit sources (ARP table, etc),
> +		 * address masks are not supported at all.
> +		 */
> +		encap->eth.dst = spec->dst;
> +		encap->mask |= FLOW_TCF_ENCAP_ETH_DST;
> +	}
> +	if (!mask || !memcmp(&mask->src,
> +			     &rte_flow_item_eth_mask.src,
> +			     sizeof(rte_flow_item_eth_mask.src))) {
> +		/*
> +		 * Ethernet addresses are not supported by
> +		 * tc as tunnel_key parameters. Source ethernet
> +		 * address is ignored anyway.
> +		 */
> +		encap->eth.src = spec->src;
> +		encap->mask |= FLOW_TCF_ENCAP_ETH_SRC;
> +	}
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV4 entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV4 address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_IPV4 entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered IPV4 address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_ipv4(const struct rte_flow_item_ipv4 *spec,
> +				struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	encap->ipv4.dst = spec->hdr.dst_addr;
> +	encap->ipv4.src = spec->hdr.src_addr;
> +	encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC |
> +		       FLOW_TCF_ENCAP_IPV4_DST;
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_IPV6 entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the IPV6 address fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_IPV6 entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered IPV6 address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_ipv6(const struct rte_flow_item_ipv6 *spec,
> +				struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. No redundant checks. */
> +	assert(spec);
> +	memcpy(encap->ipv6.dst, spec->hdr.dst_addr, sizeof(encap->ipv6.dst));
> +	memcpy(encap->ipv6.src, spec->hdr.src_addr, sizeof(encap->ipv6.src));
> +	encap->mask |= FLOW_TCF_ENCAP_IPV6_SRC |
> +		       FLOW_TCF_ENCAP_IPV6_DST;
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_UDP entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the UDP port fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_UDP entry specification.
> + * @param[in] mask
> + *   RTE_FLOW_ITEM_TYPE_UDP entry mask.
> + * @param[out] encap
> + *   Structure to fill the gathered UDP port data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_udp(const struct rte_flow_item_udp *spec,
> +			       const struct rte_flow_item_udp *mask,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	assert(spec);
> +	encap->udp.dst = spec->hdr.dst_port;
> +	encap->mask |= FLOW_TCF_ENCAP_UDP_DST;
> +	if (!mask || mask->hdr.src_port != RTE_BE16(0x0000)) {
> +		encap->udp.src = spec->hdr.src_port;
> +		encap->mask |= FLOW_TCF_ENCAP_IPV4_SRC;
> +	}
> +}
> +
> +/**
> + * Helper function to process RTE_FLOW_ITEM_TYPE_VXLAN entry in configuration
> + * of action RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. Fills the VNI fields
> + * in the encapsulation parameters structure. The item must be prevalidated,
> + * no any validation checks performed by function.
> + *
> + * @param[in] spec
> + *   RTE_FLOW_ITEM_TYPE_VXLAN entry specification.
> + * @param[out] encap
> + *   Structure to fill the gathered VNI address data.
> + */
> +static void
> +flow_tcf_parse_vxlan_encap_vni(const struct rte_flow_item_vxlan *spec,
> +			       struct flow_tcf_vxlan_encap *encap)
> +{
> +	/* Item must be validated before. Do not redundant checks. */
> +	assert(spec);
> +	memcpy(encap->vxlan.vni, spec->vni, sizeof(encap->vxlan.vni));
> +	encap->mask |= FLOW_TCF_ENCAP_VXLAN_VNI;
> +}
> +
> +/**
> + * Populate consolidated encapsulation object from list of pattern items.
> + *
> + * Helper function to process configuration of action such as
> + * RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP. The item list should be
> + * validated, there is no way to return an meaningful error.
> + *
> + * @param[in] action
> + *   RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP action object.
> + *   List of pattern items to gather data from.
> + * @param[out] src
> + *   Structure to fill gathered data.
> + */
> +static void
> +flow_tcf_vxlan_encap_parse(const struct rte_flow_action *action,
> +			   struct flow_tcf_vxlan_encap *encap)
> +{
> +	union {
> +		const struct rte_flow_item_eth *eth;
> +		const struct rte_flow_item_ipv4 *ipv4;
> +		const struct rte_flow_item_ipv6 *ipv6;
> +		const struct rte_flow_item_udp *udp;
> +		const struct rte_flow_item_vxlan *vxlan;
> +	} spec, mask;
> +	const struct rte_flow_item *items;
> +
> +	assert(action->type == RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP);
> +	assert(action->conf);
> +
> +	items = ((const struct rte_flow_action_vxlan_encap *)
> +		 action->conf)->definition;
> +	assert(items);
> +	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> +		switch (items->type) {
> +		case RTE_FLOW_ITEM_TYPE_VOID:
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_ETH:
> +			mask.eth = items->mask;
> +			spec.eth = items->spec;
> +			flow_tcf_parse_vxlan_encap_eth
> +				(spec.eth, mask.eth, encap);
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_IPV4:
> +			spec.ipv4 = items->spec;
> +			flow_tcf_parse_vxlan_encap_ipv4(spec.ipv4, encap);
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_IPV6:
> +			spec.ipv6 = items->spec;
> +			flow_tcf_parse_vxlan_encap_ipv6(spec.ipv6, encap);
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_UDP:
> +			mask.udp = items->mask;
> +			spec.udp = items->spec;
> +			flow_tcf_parse_vxlan_encap_udp
> +				(spec.udp, mask.udp, encap);
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_VXLAN:
> +			spec.vxlan = items->spec;
> +			flow_tcf_parse_vxlan_encap_vni(spec.vxlan, encap);
> +			break;
> +		default:
> +			assert(false);
> +			DRV_LOG(WARNING,
> +				"unsupported item %p type %d,"
> +				" items must be validated"
> +				" before flow creation",
> +				(const void *)items, items->type);
> +			encap->mask = 0;
> +			return;
> +		}
> +	}
> +}
> +
> +/**
>  * Translate flow for Linux TC flower and construct Netlink message.
>  *
>  * @param[in] priv
> @@ -2795,6 +3030,7 @@ struct pedit_parser {
> 		const struct rte_flow_item_ipv6 *ipv6;
> 		const struct rte_flow_item_tcp *tcp;
> 		const struct rte_flow_item_udp *udp;
> +		const struct rte_flow_item_vxlan *vxlan;
> 	} spec, mask;
> 	union {
> 		const struct rte_flow_action_port_id *port_id;
> @@ -2805,6 +3041,14 @@ struct pedit_parser {
> 		const struct rte_flow_action_of_set_vlan_pcp *
> 			of_set_vlan_pcp;
> 	} conf;
> +	union {
> +		struct flow_tcf_tunnel_hdr *hdr;
> +		struct flow_tcf_vxlan_decap *vxlan;
> +	} decap;
> +	union {
> +		struct flow_tcf_tunnel_hdr *hdr;
> +		struct flow_tcf_vxlan_encap *vxlan;
> +	} encap;
> 	struct flow_tcf_ptoi ptoi[PTOI_TABLE_SZ_MAX(dev)];
> 	struct nlmsghdr *nlh = dev_flow->tcf.nlh;
> 	struct tcmsg *tcm = dev_flow->tcf.tcm;
> @@ -2822,6 +3066,16 @@ struct pedit_parser {
> 
> 	claim_nonzero(flow_tcf_build_ptoi_table(dev, ptoi,
> 						PTOI_TABLE_SZ_MAX(dev)));
> +	encap.hdr = NULL;
> +	decap.hdr = NULL;

Please do this initialization in the declaration above.
E.g.,

	union {
		struct flow_tcf_tunnel_hdr *hdr;
		struct flow_tcf_vxlan_decap *vxlan;
	} decap = { .hdr = NULL, };

> +	if (dev_flow->flow->actions & MLX5_FLOW_ACTION_VXLAN_ENCAP) {
> +		encap.vxlan = dev_flow->tcf.vxlan_encap;
> +		encap.vxlan->hdr.type = FLOW_TCF_TUNACT_VXLAN_ENCAP;
> +	}
> +	if (dev_flow->flow->actions & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> +		decap.vxlan = dev_flow->tcf.vxlan_decap;
> +		decap.vxlan->hdr.type = FLOW_TCF_TUNACT_VXLAN_DECAP;
> +	}

Like I asked in the previous patch, please set the type in _prepare(), then
this part can be like:

	if (dev_flow->tcf.tunnel->type == FLOW_TCF_TUNACT_VXLAN_ENCAP)
		encap.vxlan = dev_flow->tcf.vxlan_encap;
	if (dev_flow->flow->actions & MLX5_FLOW_ACTION_VXLAN_DECAP)
		decap.vxlan = dev_flow->tcf.vxlan_decap;

> 	nlh = dev_flow->tcf.nlh;
> 	tcm = dev_flow->tcf.tcm;
> 	/* Prepare API must have been called beforehand. */
> @@ -2839,7 +3093,6 @@ struct pedit_parser {
> 	mnl_attr_put_u32(nlh, TCA_CHAIN, attr->group);
> 	mnl_attr_put_strz(nlh, TCA_KIND, "flower");
> 	na_flower = mnl_attr_nest_start(nlh, TCA_OPTIONS);
> -	mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS, TCA_CLS_FLAGS_SKIP_SW);
> 	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> 		unsigned int i;
> 
> @@ -2867,7 +3120,9 @@ struct pedit_parser {
> 			tcm->tcm_ifindex = ptoi[i].ifindex;
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_ETH:
> -			item_flags |= MLX5_FLOW_LAYER_OUTER_L2;
> +			item_flags |= (item_flags & MLX5_FLOW_LAYER_VXLAN) ?
> +					MLX5_FLOW_LAYER_INNER_L2 :
> +					MLX5_FLOW_LAYER_OUTER_L2;

Indentation.

> 			mask.eth = flow_tcf_item_mask
> 				(items, &rte_flow_item_eth_mask,
> 				 &flow_tcf_mask_supported.eth,
> @@ -2878,6 +3133,14 @@ struct pedit_parser {
> 			if (mask.eth == &flow_tcf_mask_empty.eth)
> 				break;
> 			spec.eth = items->spec;
> +			if (decap.vxlan &&
> +			    !(item_flags & MLX5_FLOW_LAYER_VXLAN)) {
> +				DRV_LOG(WARNING,
> +					"outer L2 addresses cannot be forced"
> +					" for vxlan decapsulation, parameter"
> +					" ignored");
> +				break;
> +			}
> 			if (mask.eth->type) {
> 				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_ETH_TYPE,
> 						 spec.eth->type);
> @@ -2899,8 +3162,11 @@ struct pedit_parser {
> 						 ETHER_ADDR_LEN,
> 						 mask.eth->src.addr_bytes);
> 			}
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_VLAN:
> +			assert(!encap.hdr);
> +			assert(!decap.hdr);
> 			item_flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
> 			mask.vlan = flow_tcf_item_mask
> 				(items, &rte_flow_item_vlan_mask,
> @@ -2932,6 +3198,7 @@ struct pedit_parser {
> 						  rte_be_to_cpu_16
> 						  (spec.vlan->tci &
> 						   RTE_BE16(0x0fff)));
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_IPV4:
> 			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> @@ -2942,36 +3209,52 @@ struct pedit_parser {
> 				 sizeof(flow_tcf_mask_supported.ipv4),
> 				 error);
> 			assert(mask.ipv4);
> -			if (!eth_type_set || !vlan_eth_type_set)
> -				mnl_attr_put_u16(nlh,
> +			spec.ipv4 = items->spec;
> +			if (!decap.vxlan) {
> +				if (!eth_type_set && !vlan_eth_type_set)
> +					mnl_attr_put_u16(nlh,
> 						vlan_present ?
> 						TCA_FLOWER_KEY_VLAN_ETH_TYPE :
> 						TCA_FLOWER_KEY_ETH_TYPE,
> 						RTE_BE16(ETH_P_IP));

Indentation.
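To be concrete about the indentation nit (purely cosmetic, nothing
functional): when the mnl_attr_put_u16() call moves one nesting level
deeper under the new "if (!decap.vxlan)" block, its continuation
arguments should move with it, roughly like:

			if (!decap.vxlan) {
				if (!eth_type_set && !vlan_eth_type_set)
					mnl_attr_put_u16(nlh,
							 vlan_present ?
							 TCA_FLOWER_KEY_VLAN_ETH_TYPE :
							 TCA_FLOWER_KEY_ETH_TYPE,
							 RTE_BE16(ETH_P_IP));

Same remark applies to the other places I flagged with "Indentation." below.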
> -			eth_type_set = 1;
> -			vlan_eth_type_set = 1;
> -			if (mask.ipv4 == &flow_tcf_mask_empty.ipv4)
> -				break;
> -			spec.ipv4 = items->spec;
> -			if (mask.ipv4->hdr.next_proto_id) {
> -				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
> -						spec.ipv4->hdr.next_proto_id);
> -				ip_proto_set = 1;
> +				eth_type_set = 1;
> +				vlan_eth_type_set = 1;
> +				if (mask.ipv4 == &flow_tcf_mask_empty.ipv4)
> +					break;
> +				if (mask.ipv4->hdr.next_proto_id) {
> +					mnl_attr_put_u8
> +						(nlh, TCA_FLOWER_KEY_IP_PROTO,
> +						 spec.ipv4->hdr.next_proto_id);
> +					ip_proto_set = 1;
> +				}
> +			} else {
> +				assert(mask.ipv4 != &flow_tcf_mask_empty.ipv4);
> 			}
> 			if (mask.ipv4->hdr.src_addr) {
> -				mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_SRC,
> -						 spec.ipv4->hdr.src_addr);
> -				mnl_attr_put_u32(nlh,
> -						 TCA_FLOWER_KEY_IPV4_SRC_MASK,
> -						 mask.ipv4->hdr.src_addr);
> +				mnl_attr_put_u32
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_IPV4_SRC :
> +					 TCA_FLOWER_KEY_IPV4_SRC,
> +					 spec.ipv4->hdr.src_addr);
> +				mnl_attr_put_u32
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK :
> +					 TCA_FLOWER_KEY_IPV4_SRC_MASK,
> +					 mask.ipv4->hdr.src_addr);
> 			}
> 			if (mask.ipv4->hdr.dst_addr) {
> -				mnl_attr_put_u32(nlh, TCA_FLOWER_KEY_IPV4_DST,
> -						 spec.ipv4->hdr.dst_addr);
> -				mnl_attr_put_u32(nlh,
> -						 TCA_FLOWER_KEY_IPV4_DST_MASK,
> -						 mask.ipv4->hdr.dst_addr);
> +				mnl_attr_put_u32
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_IPV4_DST :
> +					 TCA_FLOWER_KEY_IPV4_DST,
> +					 spec.ipv4->hdr.dst_addr);
> +				mnl_attr_put_u32
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_IPV4_DST_MASK :
> +					 TCA_FLOWER_KEY_IPV4_DST_MASK,
> +					 mask.ipv4->hdr.dst_addr);
> 			}
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_IPV6:
> 			item_flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> @@ -2982,38 +3265,53 @@ struct pedit_parser {
> 				 sizeof(flow_tcf_mask_supported.ipv6),
> 				 error);
> 			assert(mask.ipv6);
> -			if (!eth_type_set || !vlan_eth_type_set)
> -				mnl_attr_put_u16(nlh,
> -						 vlan_present ?
> -						 TCA_FLOWER_KEY_VLAN_ETH_TYPE :
> -						 TCA_FLOWER_KEY_ETH_TYPE,
> -						 RTE_BE16(ETH_P_IPV6));
> -			eth_type_set = 1;
> -			vlan_eth_type_set = 1;
> -			if (mask.ipv6 == &flow_tcf_mask_empty.ipv6)
> -				break;
> 			spec.ipv6 = items->spec;
> -			if (mask.ipv6->hdr.proto) {
> -				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
> -						spec.ipv6->hdr.proto);
> -				ip_proto_set = 1;
> +			if (!decap.vxlan) {
> +				if (!eth_type_set || !vlan_eth_type_set) {
> +					mnl_attr_put_u16(nlh,
> +						vlan_present ?
> +						TCA_FLOWER_KEY_VLAN_ETH_TYPE :
> +						TCA_FLOWER_KEY_ETH_TYPE,
> +						RTE_BE16(ETH_P_IPV6));

Indentation.

> +				}
> +				eth_type_set = 1;
> +				vlan_eth_type_set = 1;
> +				if (mask.ipv6 == &flow_tcf_mask_empty.ipv6)
> +					break;
> +				if (mask.ipv6->hdr.proto) {
> +					mnl_attr_put_u8
> +						(nlh, TCA_FLOWER_KEY_IP_PROTO,
> +						 spec.ipv6->hdr.proto);
> +					ip_proto_set = 1;
> +				}
> +			} else {
> +				assert(mask.ipv6 != &flow_tcf_mask_empty.ipv6);
> 			}
> 			if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.src_addr)) {
> -				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC,
> -					     sizeof(spec.ipv6->hdr.src_addr),
> +				mnl_attr_put(nlh, decap.vxlan ?
> +					     TCA_FLOWER_KEY_ENC_IPV6_SRC :
> +					     TCA_FLOWER_KEY_IPV6_SRC,
> +					     IPV6_ADDR_LEN,
> 					     spec.ipv6->hdr.src_addr);
> -				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_SRC_MASK,
> -					     sizeof(mask.ipv6->hdr.src_addr),
> +				mnl_attr_put(nlh, decap.vxlan ?
> +					     TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK :
> +					     TCA_FLOWER_KEY_IPV6_SRC_MASK,
> +					     IPV6_ADDR_LEN,
> 					     mask.ipv6->hdr.src_addr);
> 			}
> 			if (!IN6_IS_ADDR_UNSPECIFIED(mask.ipv6->hdr.dst_addr)) {
> -				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST,
> -					     sizeof(spec.ipv6->hdr.dst_addr),
> +				mnl_attr_put(nlh, decap.vxlan ?
> +					     TCA_FLOWER_KEY_ENC_IPV6_DST :
> +					     TCA_FLOWER_KEY_IPV6_DST,
> +					     IPV6_ADDR_LEN,
> 					     spec.ipv6->hdr.dst_addr);
> -				mnl_attr_put(nlh, TCA_FLOWER_KEY_IPV6_DST_MASK,
> -					     sizeof(mask.ipv6->hdr.dst_addr),
> +				mnl_attr_put(nlh, decap.vxlan ?
> +					     TCA_FLOWER_KEY_ENC_IPV6_DST_MASK :
> +					     TCA_FLOWER_KEY_IPV6_DST_MASK,
> +					     IPV6_ADDR_LEN,
> 					     mask.ipv6->hdr.dst_addr);
> 			}
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_UDP:
> 			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> @@ -3024,26 +3322,45 @@ struct pedit_parser {
> 				 sizeof(flow_tcf_mask_supported.udp),
> 				 error);
> 			assert(mask.udp);
> -			if (!ip_proto_set)
> -				mnl_attr_put_u8(nlh, TCA_FLOWER_KEY_IP_PROTO,
> -						IPPROTO_UDP);
> -			if (mask.udp == &flow_tcf_mask_empty.udp)
> -				break;
> 			spec.udp = items->spec;
> +			if (!decap.vxlan) {
> +				if (!ip_proto_set)
> +					mnl_attr_put_u8
> +						(nlh, TCA_FLOWER_KEY_IP_PROTO,
> +						 IPPROTO_UDP);
> +				if (mask.udp == &flow_tcf_mask_empty.udp)
> +					break;
> +			} else {
> +				assert(mask.udp != &flow_tcf_mask_empty.udp);
> +				decap.vxlan->udp_port =
> +					rte_be_to_cpu_16
> +						(spec.udp->hdr.dst_port);
> +			}
> 			if (mask.udp->hdr.src_port) {
> -				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_SRC,
> -						 spec.udp->hdr.src_port);
> -				mnl_attr_put_u16(nlh,
> -						 TCA_FLOWER_KEY_UDP_SRC_MASK,
> -						 mask.udp->hdr.src_port);
> +				mnl_attr_put_u16
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT :
> +					 TCA_FLOWER_KEY_UDP_SRC,
> +					 spec.udp->hdr.src_port);
> +				mnl_attr_put_u16
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK :
> +					 TCA_FLOWER_KEY_UDP_SRC_MASK,
> +					 mask.udp->hdr.src_port);
> 			}
> 			if (mask.udp->hdr.dst_port) {
> -				mnl_attr_put_u16(nlh, TCA_FLOWER_KEY_UDP_DST,
> -						 spec.udp->hdr.dst_port);
> -				mnl_attr_put_u16(nlh,
> -						 TCA_FLOWER_KEY_UDP_DST_MASK,
> -						 mask.udp->hdr.dst_port);
> +				mnl_attr_put_u16
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_UDP_DST_PORT :
> +					 TCA_FLOWER_KEY_UDP_DST,
> +					 spec.udp->hdr.dst_port);
> +				mnl_attr_put_u16
> +					(nlh, decap.vxlan ?
> +					 TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK :
> +					 TCA_FLOWER_KEY_UDP_DST_MASK,
> +					 mask.udp->hdr.dst_port);
> 			}
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		case RTE_FLOW_ITEM_TYPE_TCP:
> 			item_flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
> @@ -3086,6 +3403,16 @@ struct pedit_parser {
> 						 rte_cpu_to_be_16
> 						 (mask.tcp->hdr.tcp_flags));
> 			}
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> +			break;
> +		case RTE_FLOW_ITEM_TYPE_VXLAN:
> +			assert(decap.vxlan);
> +			item_flags |= MLX5_FLOW_LAYER_VXLAN;
> +			spec.vxlan = items->spec;
> +			mnl_attr_put_u32(nlh,
> +					 TCA_FLOWER_KEY_ENC_KEY_ID,
> +					 vxlan_vni_as_be32(spec.vxlan->vni));
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 			break;
> 		default:
> 			return rte_flow_error_set(error, ENOTSUP,
> @@ -3119,6 +3446,14 @@ struct pedit_parser {
> 		mnl_attr_put_strz(nlh, TCA_ACT_KIND, "mirred");
> 		na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
> 		assert(na_act);
> +		if (encap.hdr) {
> +			assert(dev_flow->tcf.tunnel);
> +			dev_flow->tcf.tunnel->ifindex_ptr =
> +				&((struct tc_mirred *)
> +				mnl_attr_get_payload
> +					(mnl_nlmsg_get_payload_tail
> +						(nlh)))->ifindex;
> +		}
> 		mnl_attr_put(nlh, TCA_MIRRED_PARMS,
> 			     sizeof(struct tc_mirred),
> 			     &(struct tc_mirred){
> @@ -3236,6 +3571,74 @@ struct pedit_parser {
> 					conf.of_set_vlan_pcp->vlan_pcp;
> 			}
> 			break;
> +		case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> +			assert(decap.vxlan);
> +			assert(dev_flow->tcf.tunnel);
> +			dev_flow->tcf.tunnel->ifindex_ptr =
> +				(unsigned int *)&tcm->tcm_ifindex;
> +			na_act_index =
> +				mnl_attr_nest_start(nlh, na_act_index_cur++);
> +			assert(na_act_index);
> +			mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key");
> +			na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
> +			assert(na_act);
> +			mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS,
> +				sizeof(struct tc_tunnel_key),
> +				&(struct tc_tunnel_key){
> +					.action = TC_ACT_PIPE,
> +					.t_action = TCA_TUNNEL_KEY_ACT_RELEASE,
> +					});
> +			mnl_attr_nest_end(nlh, na_act);
> +			mnl_attr_nest_end(nlh, na_act_index);
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> +			break;
> +		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> +			assert(encap.vxlan);
> +			flow_tcf_vxlan_encap_parse(actions, encap.vxlan);
> +			na_act_index =
> +				mnl_attr_nest_start(nlh, na_act_index_cur++);
> +			assert(na_act_index);
> +			mnl_attr_put_strz(nlh, TCA_ACT_KIND, "tunnel_key");
> +			na_act = mnl_attr_nest_start(nlh, TCA_ACT_OPTIONS);
> +			assert(na_act);
> +			mnl_attr_put(nlh, TCA_TUNNEL_KEY_PARMS,
> +				sizeof(struct tc_tunnel_key),
> +				&(struct tc_tunnel_key){
> +					.action = TC_ACT_PIPE,
> +					.t_action = TCA_TUNNEL_KEY_ACT_SET,
> +					});
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_UDP_DST)
> +				mnl_attr_put_u16(nlh,
> +					 TCA_TUNNEL_KEY_ENC_DST_PORT,
> +					 encap.vxlan->udp.dst);
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_SRC)
> +				mnl_attr_put_u32(nlh,
> +					 TCA_TUNNEL_KEY_ENC_IPV4_SRC,
> +					 encap.vxlan->ipv4.src);
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV4_DST)
> +				mnl_attr_put_u32(nlh,
> +					 TCA_TUNNEL_KEY_ENC_IPV4_DST,
> +					 encap.vxlan->ipv4.dst);
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_SRC)
> +				mnl_attr_put(nlh,
> +					 TCA_TUNNEL_KEY_ENC_IPV6_SRC,
> +					 sizeof(encap.vxlan->ipv6.src),
> +					 &encap.vxlan->ipv6.src);
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_IPV6_DST)
> +				mnl_attr_put(nlh,
> +					 TCA_TUNNEL_KEY_ENC_IPV6_DST,
> +					 sizeof(encap.vxlan->ipv6.dst),
> +					 &encap.vxlan->ipv6.dst);
> +			if (encap.vxlan->mask & FLOW_TCF_ENCAP_VXLAN_VNI)
> +				mnl_attr_put_u32(nlh,
> +					 TCA_TUNNEL_KEY_ENC_KEY_ID,
> +					 vxlan_vni_as_be32
> +						(encap.vxlan->vxlan.vni));
> +			mnl_attr_put_u8(nlh, TCA_TUNNEL_KEY_NO_CSUM, 0);
> +			mnl_attr_nest_end(nlh, na_act);
> +			mnl_attr_nest_end(nlh, na_act_index);
> +			assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> +			break;
> 		case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
> 		case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
> 		case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
> @@ -3262,7 +3665,13 @@ struct pedit_parser {
> 	assert(na_flower);
> 	assert(na_flower_act);
> 	mnl_attr_nest_end(nlh, na_flower_act);
> +	mnl_attr_put_u32(nlh, TCA_FLOWER_FLAGS,
> +			 decap.vxlan ? 0 : TCA_CLS_FLAGS_SKIP_SW);

Indentation.

Thanks,
Yongseok

> 	mnl_attr_nest_end(nlh, na_flower);
> +	if (dev_flow->tcf.tunnel && dev_flow->tcf.tunnel->ifindex_ptr)
> +		dev_flow->tcf.tunnel->ifindex_org =
> +			*dev_flow->tcf.tunnel->ifindex_ptr;
> +	assert(dev_flow->tcf.nlsize >= nlh->nlmsg_len);
> 	return 0;
> }
> 
> --
> 1.8.3.1
> 
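P.S. To make the two structural requests above concrete (initializing
encap/decap at declaration time, and relying on the tunnel type that
_prepare() is expected to set), the beginning of the translate routine
could look roughly like the sketch below. This is only illustrative and
untested; it assumes dev_flow->tcf.tunnel->type is already filled by
_prepare() as requested on the previous patch, and uses only the struct
and field names appearing in this series:

	union {
		struct flow_tcf_tunnel_hdr *hdr;
		struct flow_tcf_vxlan_decap *vxlan;
	} decap = { .hdr = NULL, };
	union {
		struct flow_tcf_tunnel_hdr *hdr;
		struct flow_tcf_vxlan_encap *vxlan;
	} encap = { .hdr = NULL, };

	...

	/* Tunnel metadata is optional; _prepare() sets the type when present. */
	if (dev_flow->tcf.tunnel) {
		if (dev_flow->tcf.tunnel->type == FLOW_TCF_TUNACT_VXLAN_ENCAP)
			encap.vxlan = dev_flow->tcf.vxlan_encap;
		else if (dev_flow->tcf.tunnel->type == FLOW_TCF_TUNACT_VXLAN_DECAP)
			decap.vxlan = dev_flow->tcf.vxlan_decap;
	}

That would drop the two explicit "= NULL" assignments and the checks of
dev_flow->flow->actions in this function entirely.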