From: Yongseok Koh
To: Thomas Monjalon, Shahaf Shuler
CC: "dev@dpdk.org", Ori Kam
Date: Mon, 24 Sep 2018 23:17:47 +0000
Message-ID: <20180924231721.15799-8-yskoh@mellanox.com>
References: <20180919072143.23211-1-yskoh@mellanox.com> <20180924231721.15799-1-yskoh@mellanox.com>
In-Reply-To: <20180924231721.15799-1-yskoh@mellanox.com>
Subject: [dpdk-dev] [PATCH v3 07/11] net/mlx5: add Direct Verbs translate items
List-Id: DPDK patches and discussions

From: Ori Kam

This commit handles the translation of the requested flow into the Direct
Verbs API. Direct Verbs introduces the matcher object, which acts as a
shared mask for all flows that use the same mask. Each item is therefore
translated into a matcher plus the value that should be matched.
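
To illustrate the mask/value split this patch relies on, here is a minimal
standalone sketch (the clamp_value helper and the MAC bytes are illustrative
only, not part of the driver): every translate function below writes the mask
into the matcher buffer and writes spec & mask into the value buffer, so the
value always stays within the range of the mask.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring what each translate function does:
 * the matcher gets the mask, the value gets spec & mask. */
static void
clamp_value(uint8_t *value, const uint8_t *spec, const uint8_t *mask,
	    unsigned int len)
{
	unsigned int i;

	for (i = 0; i < len; ++i)
		value[i] = spec[i] & mask[i];
}

int
main(void)
{
	/* Match on the first 5 bytes of a MAC; ignore the last byte. */
	const uint8_t mask[6] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0x00 };
	const uint8_t spec[6] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 };
	uint8_t value[6];
	unsigned int i;

	clamp_value(value, spec, mask, 6);
	for (i = 0; i < 6; ++i)
		printf("%02x%c", value[i], i < 5 ? ':' : '\n');
	/* Prints 00:11:22:33:44:00 -- the ignored byte is zeroed. */
	return 0;
}

Two flows whose items carry identical masks can then share one matcher
object: flow_dv_matcher_register() below looks the matcher up in a cache by
CRC, priority, egress flag and mask bytes, so only the per-flow value differs.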
Signed-off-by: Ori Kam
Acked-by: Yongseok Koh
---
 drivers/net/mlx5/mlx5_flow.c       |  36 ++
 drivers/net/mlx5/mlx5_flow.h       |  25 ++
 drivers/net/mlx5/mlx5_flow_dv.c    | 775 ++++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_flow_verbs.c |  72 +---
 drivers/net/mlx5/mlx5_prm.h        |   7 +
 5 files changed, 858 insertions(+), 57 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 1c177b9c8..5632e31c5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -444,6 +444,42 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
 }
 
 /**
+ * Adjust the hash fields according to the @p flow information.
+ *
+ * @param[in] dev_flow
+ *   Pointer to the mlx5_flow.
+ * @param[in] tunnel
+ *   1 when the hash field is for a tunnel item.
+ * @param[in] layer_types
+ *   ETH_RSS_* types.
+ * @param[in] hash_fields
+ *   Item hash fields.
+ *
+ * @return
+ *   The hash fields that should be used.
+ */
+uint64_t
+mlx5_flow_hashfields_adjust(struct mlx5_flow *dev_flow,
+			    int tunnel __rte_unused, uint32_t layer_types,
+			    uint64_t hash_fields)
+{
+	struct rte_flow *flow = dev_flow->flow;
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+	int rss_request_inner = flow->rss.level >= 2;
+
+	/* Check RSS hash level for tunnel. */
+	if (tunnel && rss_request_inner)
+		hash_fields |= IBV_RX_HASH_INNER;
+	else if (tunnel || rss_request_inner)
+		return 0;
+#endif
+	/* Check if requested layer matches RSS hash fields. */
+	if (!(flow->rss.types & layer_types))
+		return 0;
+	return hash_fields;
+}
+
+/**
  * Lookup and set the ptype in the data Rx part. A single Ptype can be used,
  * if several tunnel rules are used on this queue, the tunnel ptype will be
  * cleared.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 0cf496db3..7f0566fc9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -89,6 +89,10 @@
 #define MLX5_IP_PROTOCOL_GRE 47
 #define MLX5_IP_PROTOCOL_MPLS 147
 
+/* UDP port numbers of VxLAN and VxLAN-GPE. */
+#define MLX5_VXLAN 4789
+#define MLX5_VXLAN_GPE 4790
+
 /* Priority reserved for default flows. */
 #define MLX5_FLOW_PRIO_RSVD ((uint32_t)-1)
 
@@ -103,6 +107,24 @@
 #define MLX5_PRIORITY_MAP_L4 0
 #define MLX5_PRIORITY_MAP_MAX 3
 
+/* Valid layer types for IPV4 RSS. */
+#define MLX5_IPV4_LAYER_TYPES \
+	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
+	 ETH_RSS_NONFRAG_IPV4_OTHER)
+
+/* IBV hash source bits for IPV4. */
+#define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
+
+/* Valid layer types for IPV6 RSS. */
+#define MLX5_IPV6_LAYER_TYPES \
+	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
+	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | \
+	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+
+/* IBV hash source bits for IPV6. */
+#define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
+
 /* Max number of actions per DV flow. */
 #define MLX5_DV_MAX_NUMBER_OF_ACTIONS 8
 
@@ -223,6 +245,9 @@ struct mlx5_flow_driver_ops {
 
 /* mlx5_flow.c */
 
+uint64_t mlx5_flow_hashfields_adjust(struct mlx5_flow *dev_flow, int tunnel,
+				     uint32_t layer_types,
+				     uint64_t hash_fields);
 uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
 				   uint32_t subpriority);
 int mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 30d501a61..acb1b7549 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -334,6 +334,779 @@ flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
 }
 
 /**
+ * Add Ethernet item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_eth(void *matcher, void *key,
+			   const struct rte_flow_item *item, int inner)
+{
+	const struct rte_flow_item_eth *eth_m = item->mask;
+	const struct rte_flow_item_eth *eth_v = item->spec;
+	const struct rte_flow_item_eth nic_mask = {
+		.dst.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.src.addr_bytes = "\xff\xff\xff\xff\xff\xff",
+		.type = RTE_BE16(0xffff),
+	};
+	void *headers_m;
+	void *headers_v;
+	char *l24_v;
+	unsigned int i;
+
+	if (!eth_v)
+		return;
+	if (!eth_m)
+		eth_m = &nic_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, dmac_47_16),
+	       &eth_m->dst, sizeof(eth_m->dst));
+	/* The value must be in the range of the mask. */
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dmac_47_16);
+	for (i = 0; i < sizeof(eth_m->dst); ++i)
+		l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i];
+	memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, smac_47_16),
+	       &eth_m->src, sizeof(eth_m->src));
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, smac_47_16);
+	/* The value must be in the range of the mask. */
+	for (i = 0; i < sizeof(eth_m->src); ++i)
+		l24_v[i] = eth_m->src.addr_bytes[i] & eth_v->src.addr_bytes[i];
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype,
+		 rte_be_to_cpu_16(eth_m->type));
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, ethertype);
+	*(uint16_t *)(l24_v) = eth_m->type & eth_v->type;
+}
+
+/**
+ * Add VLAN item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_vlan(void *matcher, void *key,
+			    const struct rte_flow_item *item,
+			    int inner)
+{
+	const struct rte_flow_item_vlan *vlan_m = item->mask;
+	const struct rte_flow_item_vlan *vlan_v = item->spec;
+	const struct rte_flow_item_vlan nic_mask = {
+		.tci = RTE_BE16(0x0fff),
+		.inner_type = RTE_BE16(0xffff),
+	};
+	void *headers_m;
+	void *headers_v;
+	uint16_t tci_m;
+	uint16_t tci_v;
+
+	if (!vlan_v)
+		return;
+	if (!vlan_m)
+		vlan_m = &nic_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	tci_m = rte_be_to_cpu_16(vlan_m->tci);
+	tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, cvlan_tag, 1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, cvlan_tag, 1);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, first_vid, tci_m);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_vid, tci_v);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, first_cfi, tci_m >> 12);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_cfi, tci_v >> 12);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, first_prio, tci_m >> 13);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, first_prio, tci_v >> 13);
+}
+
+/**
+ * Add IPV4 item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_ipv4(void *matcher, void *key,
+			    const struct rte_flow_item *item,
+			    int inner)
+{
+	const struct rte_flow_item_ipv4 *ipv4_m = item->mask;
+	const struct rte_flow_item_ipv4 *ipv4_v = item->spec;
+	const struct rte_flow_item_ipv4 nic_mask = {
+		.hdr = {
+			.src_addr = RTE_BE32(0xffffffff),
+			.dst_addr = RTE_BE32(0xffffffff),
+			.type_of_service = 0xff,
+			.next_proto_id = 0xff,
+		},
+	};
+	void *headers_m;
+	void *headers_v;
+	char *l24_m;
+	char *l24_v;
+	uint8_t tos;
+
+	if (!ipv4_v)
+		return;
+	if (!ipv4_m)
+		ipv4_m = &nic_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 4);
+	l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m,
+			     dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+			     dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+	*(uint32_t *)l24_m = ipv4_m->hdr.dst_addr;
+	*(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr;
+	l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m,
+			     src_ipv4_src_ipv6.ipv4_layout.ipv4);
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+			     src_ipv4_src_ipv6.ipv4_layout.ipv4);
+	*(uint32_t *)l24_m = ipv4_m->hdr.src_addr;
+	*(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr;
+	tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service;
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn,
+		 ipv4_m->hdr.type_of_service);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp,
+		 ipv4_m->hdr.type_of_service >> 2);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol,
+		 ipv4_m->hdr.next_proto_id);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+		 ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id);
+}
+
+/**
+ * Add IPV6 item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_ipv6(void *matcher, void *key,
+			    const struct rte_flow_item *item,
+			    int inner)
+{
+	const struct rte_flow_item_ipv6 *ipv6_m = item->mask;
+	const struct rte_flow_item_ipv6 *ipv6_v = item->spec;
+	const struct rte_flow_item_ipv6 nic_mask = {
+		.hdr = {
+			.src_addr =
+				"\xff\xff\xff\xff\xff\xff\xff\xff"
+				"\xff\xff\xff\xff\xff\xff\xff\xff",
+			.dst_addr =
+				"\xff\xff\xff\xff\xff\xff\xff\xff"
+				"\xff\xff\xff\xff\xff\xff\xff\xff",
+			.vtc_flow = RTE_BE32(0xffffffff),
+			.proto = 0xff,
+			.hop_limits = 0xff,
+		},
+	};
+	void *headers_m;
+	void *headers_v;
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	char *l24_m;
+	char *l24_v;
+	uint32_t vtc_m;
+	uint32_t vtc_v;
+	int i;
+	int size;
+
+	if (!ipv6_v)
+		return;
+	if (!ipv6_m)
+		ipv6_m = &nic_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	size = sizeof(ipv6_m->hdr.dst_addr);
+	l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m,
+			     dst_ipv4_dst_ipv6.ipv6_layout.ipv6);
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+			     dst_ipv4_dst_ipv6.ipv6_layout.ipv6);
+	memcpy(l24_m, ipv6_m->hdr.dst_addr, size);
+	for (i = 0; i < size; ++i)
+		l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i];
+	l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m,
+			     src_ipv4_src_ipv6.ipv6_layout.ipv6);
+	l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v,
+			     src_ipv4_src_ipv6.ipv6_layout.ipv6);
+	memcpy(l24_m, ipv6_m->hdr.src_addr, size);
+	for (i = 0; i < size; ++i)
+		l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i];
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 6);
+	/* TOS. */
+	vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow);
+	vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22);
+	/* Label. */
+	if (inner) {
+		MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label,
+			 vtc_m);
+		MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label,
+			 vtc_v);
+	} else {
+		MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label,
+			 vtc_m);
+		MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label,
+			 vtc_v);
+	}
+	/* Protocol. */
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol,
+		 ipv6_m->hdr.proto);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
+		 ipv6_v->hdr.proto & ipv6_m->hdr.proto);
+}
+
+/**
+ * Add TCP item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_tcp(void *matcher, void *key,
+			   const struct rte_flow_item *item,
+			   int inner)
+{
+	const struct rte_flow_item_tcp *tcp_m = item->mask;
+	const struct rte_flow_item_tcp *tcp_v = item->spec;
+	void *headers_m;
+	void *headers_v;
+
+	if (!tcp_v)
+		return;
+	if (!tcp_m)
+		tcp_m = &rte_flow_item_tcp_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport,
+		 rte_be_to_cpu_16(tcp_m->hdr.src_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport,
+		 rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport,
+		 rte_be_to_cpu_16(tcp_m->hdr.dst_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport,
+		 rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port));
+}
+
+/**
+ * Add UDP item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_udp(void *matcher, void *key,
+			   const struct rte_flow_item *item,
+			   int inner)
+{
+	const struct rte_flow_item_udp *udp_m = item->mask;
+	const struct rte_flow_item_udp *udp_v = item->spec;
+	void *headers_m;
+	void *headers_v;
+
+	if (!udp_v)
+		return;
+	if (!udp_m)
+		udp_m = &rte_flow_item_udp_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport,
+		 rte_be_to_cpu_16(udp_m->hdr.src_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport,
+		 rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport,
+		 rte_be_to_cpu_16(udp_m->hdr.dst_port));
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport,
+		 rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port));
+}
+
+/**
+ * Add GRE item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_gre(void *matcher, void *key,
+			   const struct rte_flow_item *item,
+			   int inner)
+{
+	const struct rte_flow_item_gre *gre_m = item->mask;
+	const struct rte_flow_item_gre *gre_v = item->spec;
+	void *headers_m;
+	void *headers_v;
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+
+	if (!gre_v)
+		return;
+	if (!gre_m)
+		gre_m = &rte_flow_item_gre_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE);
+	MLX5_SET(fte_match_set_misc, misc_m, gre_protocol,
+		 rte_be_to_cpu_16(gre_m->protocol));
+	MLX5_SET(fte_match_set_misc, misc_v, gre_protocol,
+		 rte_be_to_cpu_16(gre_v->protocol & gre_m->protocol));
+}
+
+/**
+ * Add NVGRE item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_nvgre(void *matcher, void *key,
+			     const struct rte_flow_item *item,
+			     int inner)
+{
+	const struct rte_flow_item_nvgre *nvgre_m = item->mask;
+	const struct rte_flow_item_nvgre *nvgre_v = item->spec;
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	char *gre_key_m;
+	char *gre_key_v;
+	int size;
+	int i;
+
+	if (!nvgre_v)
+		return;
+	if (!nvgre_m)
+		nvgre_m = &rte_flow_item_nvgre_mask;
+	size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id);
+	gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h);
+	gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h);
+	memcpy(gre_key_m, nvgre_m->tni, size);
+	for (i = 0; i < size; ++i)
+		gre_key_v[i] = gre_key_m[i] & ((const char *)(nvgre_v->tni))[i];
+	flow_dv_translate_item_gre(matcher, key, item, inner);
+}
+
+/**
+ * Add VXLAN item to matcher and to the value.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_translate_item_vxlan(void *matcher, void *key,
+			     const struct rte_flow_item *item,
+			     int inner)
+{
+	const struct rte_flow_item_vxlan *vxlan_m = item->mask;
+	const struct rte_flow_item_vxlan *vxlan_v = item->spec;
+	void *headers_m;
+	void *headers_v;
+	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
+	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
+	char *vni_m;
+	char *vni_v;
+	uint16_t dport;
+	int size;
+	int i;
+
+	if (!vxlan_v)
+		return;
+	if (!vxlan_m)
+		vxlan_m = &rte_flow_item_vxlan_mask;
+	if (inner) {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 inner_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
+	} else {
+		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
+					 outer_headers);
+		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
+	}
+	dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_VXLAN :
+							 MLX5_VXLAN_GPE;
+	if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) {
+		MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport);
+	}
+	size = sizeof(vxlan_m->vni);
+	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni);
+	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni);
+	memcpy(vni_m, vxlan_m->vni, size);
+	for (i = 0; i < size; ++i)
+		vni_v[i] = vni_m[i] & vxlan_v->vni[i];
+}
+
+/**
+ * Update the matcher and the value based on the selected item.
+ *
+ * @param[in, out] matcher
+ *   Flow matcher.
+ * @param[in, out] key
+ *   Flow matcher value.
+ * @param[in] item
+ *   Flow pattern to translate.
+ * @param[in, out] dev_flow
+ *   Pointer to the mlx5_flow.
+ * @param[in] inner
+ *   Item is inner pattern.
+ */
+static void
+flow_dv_create_item(void *matcher, void *key,
+		    const struct rte_flow_item *item,
+		    struct mlx5_flow *dev_flow,
+		    int inner)
+{
+	struct mlx5_flow_dv_matcher *tmatcher = matcher;
+
+	switch (item->type) {
+	case RTE_FLOW_ITEM_TYPE_VOID:
+	case RTE_FLOW_ITEM_TYPE_END:
+		break;
+	case RTE_FLOW_ITEM_TYPE_ETH:
+		flow_dv_translate_item_eth(tmatcher->mask.buf, key, item,
+					   inner);
+		tmatcher->priority = MLX5_PRIORITY_MAP_L2;
+		break;
+	case RTE_FLOW_ITEM_TYPE_VLAN:
+		flow_dv_translate_item_vlan(tmatcher->mask.buf, key, item,
+					    inner);
+		break;
+	case RTE_FLOW_ITEM_TYPE_IPV4:
+		flow_dv_translate_item_ipv4(tmatcher->mask.buf, key, item,
+					    inner);
+		tmatcher->priority = MLX5_PRIORITY_MAP_L3;
+		dev_flow->dv.hash_fields |=
+			mlx5_flow_hashfields_adjust(dev_flow, inner,
+						    MLX5_IPV4_LAYER_TYPES,
+						    MLX5_IPV4_IBV_RX_HASH);
+		break;
+	case RTE_FLOW_ITEM_TYPE_IPV6:
+		flow_dv_translate_item_ipv6(tmatcher->mask.buf, key, item,
+					    inner);
+		tmatcher->priority = MLX5_PRIORITY_MAP_L3;
+		dev_flow->dv.hash_fields |=
+			mlx5_flow_hashfields_adjust(dev_flow, inner,
+						    MLX5_IPV6_LAYER_TYPES,
+						    MLX5_IPV6_IBV_RX_HASH);
+		break;
+	case RTE_FLOW_ITEM_TYPE_TCP:
+		flow_dv_translate_item_tcp(tmatcher->mask.buf, key, item,
+					   inner);
+		tmatcher->priority = MLX5_PRIORITY_MAP_L4;
+		dev_flow->dv.hash_fields |=
+			mlx5_flow_hashfields_adjust(dev_flow, inner,
+						    ETH_RSS_TCP,
+						    (IBV_RX_HASH_SRC_PORT_TCP |
+						     IBV_RX_HASH_DST_PORT_TCP));
+		break;
+	case RTE_FLOW_ITEM_TYPE_UDP:
+		flow_dv_translate_item_udp(tmatcher->mask.buf, key, item,
+					   inner);
+		tmatcher->priority = MLX5_PRIORITY_MAP_L4;
+		dev_flow->dv.hash_fields |=
+			mlx5_flow_hashfields_adjust(dev_flow, inner,
+						    ETH_RSS_UDP,
+						    (IBV_RX_HASH_SRC_PORT_UDP |
+						     IBV_RX_HASH_DST_PORT_UDP));
+		break;
+	case RTE_FLOW_ITEM_TYPE_NVGRE:
+		flow_dv_translate_item_nvgre(tmatcher->mask.buf, key, item,
+					     inner);
+		break;
+	case RTE_FLOW_ITEM_TYPE_GRE:
+		flow_dv_translate_item_gre(tmatcher->mask.buf, key, item,
+					   inner);
+		break;
+	case RTE_FLOW_ITEM_TYPE_VXLAN:
+	case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+		flow_dv_translate_item_vxlan(tmatcher->mask.buf, key, item,
+					     inner);
+		break;
+	default:
+		break;
+	}
+}
+
+static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
+
+#define HEADER_IS_ZERO(match_criteria, headers) \
+	!(memcmp(MLX5_ADDR_OF(fte_match_param, match_criteria, headers), \
+		 matcher_zero, MLX5_FLD_SZ_BYTES(fte_match_param, headers)))
+
+/**
+ * Calculate flow matcher enable bitmap.
+ *
+ * @param match_criteria
+ *   Pointer to flow matcher criteria.
+ *
+ * @return
+ *   Bitmap of enabled fields.
+ */
+static uint8_t
+flow_dv_matcher_enable(uint32_t *match_criteria)
+{
+	uint8_t match_criteria_enable;
+
+	match_criteria_enable =
+		(!HEADER_IS_ZERO(match_criteria, outer_headers)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, inner_headers)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT;
+	match_criteria_enable |=
+		(!HEADER_IS_ZERO(match_criteria, misc_parameters_2)) <<
+		MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT;
+
+	return match_criteria_enable;
+}
+
+/**
+ * Register the flow matcher.
+ *
+ * @param[in, out] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in, out] matcher
+ *   Pointer to flow matcher.
+ * @param[in, out] dev_flow
+ *   Pointer to the dev_flow.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, otherwise -errno and errno is set.
+ */
+static int
+flow_dv_matcher_register(struct rte_eth_dev *dev,
+			 struct mlx5_flow_dv_matcher *matcher,
+			 struct mlx5_flow *dev_flow,
+			 struct rte_flow_error *error)
+{
+	struct priv *priv = dev->data->dev_private;
+	struct mlx5_flow_dv_matcher *cache;
+	struct mlx5dv_flow_matcher_attr dv_attr = {
+		.type = IBV_FLOW_ATTR_NORMAL,
+		.match_mask = (void *)&matcher->mask,
+	};
+
+	/* Lookup from cache. */
+	LIST_FOREACH(cache, &priv->matchers, cache.next) {
+		if (matcher->crc == cache->crc &&
+		    matcher->priority == cache->priority &&
+		    matcher->egress == cache->egress &&
+		    !memcmp((const void *)matcher->mask.buf,
+			    (const void *)cache->mask.buf, cache->mask.size)) {
+			DRV_LOG(DEBUG,
+				"priority %hd use %s matcher %p: refcnt %d++",
+				cache->priority, cache->egress ? "tx" : "rx",
+				(void *)cache,
+				rte_atomic32_read(&cache->cache.refcnt));
+			rte_atomic32_inc(&cache->cache.refcnt);
+			dev_flow->dv.matcher = cache;
+			return 0;
+		}
+	}
+	/* Register new matcher. */
+	cache = rte_calloc(__func__, 1, sizeof(*cache), 0);
+	if (!cache)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "cannot allocate matcher memory");
+	*cache = *matcher;
+	dv_attr.match_criteria_enable =
+		flow_dv_matcher_enable(cache->mask.buf);
+	dv_attr.priority = matcher->priority;
+	if (matcher->egress)
+		dv_attr.flags |= IBV_FLOW_ATTR_FLAGS_EGRESS;
+	cache->cache.resource =
+		mlx5dv_create_flow_matcher(priv->ctx, &dv_attr);
+	if (!cache->cache.resource)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  NULL, "cannot create matcher");
+	rte_atomic32_inc(&cache->cache.refcnt);
+	LIST_INSERT_HEAD(&priv->matchers, &cache->cache, next);
+	dev_flow->dv.matcher = cache;
+	DRV_LOG(DEBUG, "priority %hd new %s matcher %p: refcnt %d",
+		cache->priority,
+		cache->egress ? "tx" : "rx", (void *)cache,
+		rte_atomic32_read(&cache->cache.refcnt));
+	return 0;
+}
+
+/**
+ * Fill the flow with DV spec.
+ *
+ * @param[in] dev
+ *   Pointer to rte_eth_dev structure.
+ * @param[in, out] dev_flow
+ *   Pointer to the sub flow.
+ * @param[in] attr
+ *   Pointer to the flow attributes.
+ * @param[in] items
+ *   Pointer to the list of items.
+ * @param[in] actions
+ *   Pointer to the list of actions.
+ * @param[out] error
+ *   Pointer to the error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_dv_translate(struct rte_eth_dev *dev,
+		  struct mlx5_flow *dev_flow,
+		  const struct rte_flow_attr *attr,
+		  const struct rte_flow_item items[],
+		  const struct rte_flow_action actions[] __rte_unused,
+		  struct rte_flow_error *error)
+{
+	struct priv *priv = dev->data->dev_private;
+	uint64_t priority = attr->priority;
+	struct mlx5_flow_dv_matcher matcher = {
+		.mask = {
+			.size = sizeof(matcher.mask.buf),
+		},
+	};
+	void *match_value = dev_flow->dv.value.buf;
+	uint8_t inner = 0;
+
+	if (priority == MLX5_FLOW_PRIO_RSVD)
+		priority = priv->config.flow_prio - 1;
+	for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++)
+		flow_dv_create_item(&matcher, match_value, items, dev_flow,
+				    inner);
+	matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
+				    matcher.mask.size);
+	matcher.priority = mlx5_flow_adjust_priority(dev, priority,
+						     matcher.priority);
+	matcher.egress = attr->egress;
+	if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
+		return -rte_errno;
+	return 0;
+}
+
+/**
  * Fills the flow_ops with the function pointers.
  *
  * @param[out] flow_ops
@@ -345,7 +1118,7 @@ mlx5_flow_dv_get_driver_ops(struct mlx5_flow_driver_ops *flow_ops)
 	*flow_ops = (struct mlx5_flow_driver_ops) {
 		.validate = flow_dv_validate,
 		.prepare = flow_dv_prepare,
-		.translate = NULL,
+		.translate = flow_dv_translate,
 		.apply = NULL,
 		.remove = NULL,
 		.destroy = NULL,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index e8e16cc37..f4a264232 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -132,37 +132,6 @@ flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
 }
 
 /**
- * Adjust verbs hash fields according to the @p flow information.
- *
- * @param[in] dev_flow.
- *   Pointer to dev flow structure.
- * @param[in] tunnel
- *   1 when the hash field is for a tunnel item.
- * @param[in] layer_types
- *   ETH_RSS_* types.
- * @param[in] hash_fields
- *   Item hash fields.
- */
-static void
-flow_verbs_hashfields_adjust(struct mlx5_flow *dev_flow,
-			     int tunnel __rte_unused,
-			     uint32_t layer_types, uint64_t hash_fields)
-{
-#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
-	int rss_request_inner = dev_flow->flow->rss.level >= 2;
-
-	hash_fields |= (tunnel ? IBV_RX_HASH_INNER : 0);
-	if (rss_request_inner && !tunnel)
-		hash_fields = 0;
-	else if (rss_request_inner < 2 && tunnel)
-		hash_fields = 0;
-#endif
-	if (!(dev_flow->flow->rss.types & layer_types))
-		hash_fields = 0;
-	dev_flow->verbs.hash_fields |= hash_fields;
-}
-
-/**
  * Convert the @p item into a Verbs specification. This function assumes that
  * the input is valid and that there is space to insert the requested item
  * into the flow.
@@ -346,13 +315,10 @@ flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
 		ipv4.val.proto &= ipv4.mask.proto;
 		ipv4.val.tos &= ipv4.mask.tos;
 	}
-	flow_verbs_hashfields_adjust(dev_flow, tunnel,
-				     (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-				      ETH_RSS_NONFRAG_IPV4_TCP |
-				      ETH_RSS_NONFRAG_IPV4_UDP |
-				      ETH_RSS_NONFRAG_IPV4_OTHER),
-				     (IBV_RX_HASH_SRC_IPV4 |
-				      IBV_RX_HASH_DST_IPV4));
+	dev_flow->verbs.hash_fields |=
+		mlx5_flow_hashfields_adjust(dev_flow, tunnel,
+					    MLX5_IPV4_LAYER_TYPES,
+					    MLX5_IPV4_IBV_RX_HASH);
 	dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
 	flow_verbs_spec_add(dev_flow, &ipv4, size);
 }
@@ -426,16 +392,10 @@ flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
 		ipv6.val.next_hdr &= ipv6.mask.next_hdr;
 		ipv6.val.hop_limit &= ipv6.mask.hop_limit;
 	}
-	flow_verbs_hashfields_adjust(dev_flow, tunnel,
-				     (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-				      ETH_RSS_NONFRAG_IPV6_TCP |
-				      ETH_RSS_NONFRAG_IPV6_UDP |
-				      ETH_RSS_IPV6_EX |
-				      ETH_RSS_IPV6_TCP_EX |
-				      ETH_RSS_IPV6_UDP_EX |
-				      ETH_RSS_NONFRAG_IPV6_OTHER),
-				     (IBV_RX_HASH_SRC_IPV6 |
-				      IBV_RX_HASH_DST_IPV6));
+	dev_flow->verbs.hash_fields |=
+		mlx5_flow_hashfields_adjust(dev_flow, tunnel,
+					    MLX5_IPV6_LAYER_TYPES,
+					    MLX5_IPV6_IBV_RX_HASH);
 	dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
 	flow_verbs_spec_add(dev_flow, &ipv6, size);
 }
@@ -479,10 +439,10 @@ flow_verbs_translate_item_udp(const struct rte_flow_item *item,
 		udp.val.src_port &= udp.mask.src_port;
 		udp.val.dst_port &= udp.mask.dst_port;
 	}
-	flow_verbs_hashfields_adjust(dev_flow,
-				     tunnel, ETH_RSS_UDP,
-				     (IBV_RX_HASH_SRC_PORT_UDP |
-				      IBV_RX_HASH_DST_PORT_UDP));
+	dev_flow->verbs.hash_fields |=
+		mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_UDP,
+					    (IBV_RX_HASH_SRC_PORT_UDP |
+					     IBV_RX_HASH_DST_PORT_UDP));
 	dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
 	flow_verbs_spec_add(dev_flow, &udp, size);
 }
@@ -526,10 +486,10 @@ flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
 		tcp.val.src_port &= tcp.mask.src_port;
 		tcp.val.dst_port &= tcp.mask.dst_port;
 	}
-	flow_verbs_hashfields_adjust(dev_flow,
-				     tunnel, ETH_RSS_TCP,
-				     (IBV_RX_HASH_SRC_PORT_TCP |
-				      IBV_RX_HASH_DST_PORT_TCP));
+	dev_flow->verbs.hash_fields |=
+		mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_TCP,
+					    (IBV_RX_HASH_SRC_PORT_TCP |
+					     IBV_RX_HASH_DST_PORT_TCP));
 	dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
 	flow_verbs_spec_add(dev_flow, &tcp, size);
 }
diff --git a/drivers/net/mlx5/mlx5_prm.h b/drivers/net/mlx5/mlx5_prm.h
index 2222e7fbd..4e2f9f43d 100644
--- a/drivers/net/mlx5/mlx5_prm.h
+++ b/drivers/net/mlx5/mlx5_prm.h
@@ -493,6 +493,13 @@ struct mlx5_ifc_fte_match_param_bits {
 	u8 reserved_at_800[0x800];
 };
 
+enum {
+	MLX5_MATCH_CRITERIA_ENABLE_OUTER_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT,
+	MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
-- 
2.11.0