From: Gregory Etelson
To: dev@dpdk.org
Cc: Viacheslav Ovsiienko, Moti Haimovsky
Subject: [PATCH 4/5] net/mlx5: fix GENEVE protocol type translation
Date: Sun, 14 Nov 2021 17:36:15 +0200
Message-ID: <20211114153617.25085-4-getelson@nvidia.com>
X-Mailer: git-send-email 2.33.1
In-Reply-To: <20211114153617.25085-1-getelson@nvidia.com>
References: <20211114153617.25085-1-getelson@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

When an application creates several flows to
match on a GENEVE tunnel without explicitly specifying the GENEVE
protocol type value in the flow rules, the PMD translates that to a
zero mask. RDMA-CORE cannot distinguish between different inner flow
types and produces identical matchers for each zero mask.

The patch extracts the inner header type from the flow rule and forces
it into the GENEVE protocol type match if the application did not
specify any protocol type value.

Cc: stable@dpdk.org
Fixes: e59a5dbcfd07 ("net/mlx5: add flow match on GENEVE item")

Signed-off-by: Gregory Etelson
---
 drivers/net/mlx5/mlx5_flow_dv.c | 78 ++++++++++++++++++++-------------
 1 file changed, 47 insertions(+), 31 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f9acb69cca..bce504391d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -93,6 +93,20 @@ static int
 flow_dv_jump_tbl_resource_release(struct rte_eth_dev *dev,
 				  uint32_t rix_jump);
 
+static inline uint16_t
+mlx5_translate_tunnel_etypes(uint64_t pattern_flags)
+{
+	if (pattern_flags & MLX5_FLOW_LAYER_INNER_L2)
+		return RTE_ETHER_TYPE_TEB;
+	else if (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV4)
+		return RTE_ETHER_TYPE_IPV4;
+	else if (pattern_flags & MLX5_FLOW_LAYER_INNER_L3_IPV6)
+		return RTE_ETHER_TYPE_IPV6;
+	else if (pattern_flags & MLX5_FLOW_LAYER_MPLS)
+		return RTE_ETHER_TYPE_MPLS;
+	return 0;
+}
+
 static int16_t
 flow_dv_get_esw_manager_vport_id(struct rte_eth_dev *dev)
 {
@@ -9038,49 +9052,39 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key,
 
 static void
 flow_dv_translate_item_geneve(void *matcher, void *key,
-			      const struct rte_flow_item *item, int inner)
+			      const struct rte_flow_item *item,
+			      uint64_t pattern_flags)
 {
+	static const struct rte_flow_item_geneve empty_geneve = {0,};
 	const struct rte_flow_item_geneve *geneve_m = item->mask;
 	const struct rte_flow_item_geneve *geneve_v = item->spec;
-	void *headers_m;
-	void *headers_v;
+	/* GENEVE flow item validation allows single tunnel item */
+	void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers);
+	void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
 	void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters);
 	void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters);
-	uint16_t dport;
 	uint16_t gbhdr_m;
 	uint16_t gbhdr_v;
-	char *vni_m;
-	char *vni_v;
-	size_t size, i;
+	char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni);
+	char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni);
+	size_t size = sizeof(geneve_m->vni), i;
+	uint16_t protocol_m, protocol_v;
 
-	if (inner) {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 inner_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers);
-	} else {
-		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
-					 outer_headers);
-		headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
-	}
-	dport = MLX5_UDP_PORT_GENEVE;
 	if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) {
 		MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF);
-		MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport);
+		MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport,
+			 MLX5_UDP_PORT_GENEVE);
+	}
+	if (!geneve_v) {
+		geneve_v = &empty_geneve;
+		geneve_m = &empty_geneve;
+	} else {
+		if (!geneve_m)
+			geneve_m = &rte_flow_item_geneve_mask;
 	}
-	if (!geneve_v)
-		return;
-	if (!geneve_m)
-		geneve_m = &rte_flow_item_geneve_mask;
-	size = sizeof(geneve_m->vni);
-	vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni);
-	vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni);
 	memcpy(vni_m, geneve_m->vni, size);
 	for (i = 0; i < size; ++i)
 		vni_v[i] = vni_m[i] & geneve_v->vni[i];
-	MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type,
-		 rte_be_to_cpu_16(geneve_m->protocol));
-	MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type,
-		 rte_be_to_cpu_16(geneve_v->protocol & geneve_m->protocol));
 	gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0);
 	gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0);
 	MLX5_SET(fte_match_set_misc, misc_m, geneve_oam,
@@ -9092,6 +9096,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key,
 	MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len,
 		 MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) &
 		 MLX5_GENEVE_OPTLEN_VAL(gbhdr_m));
+	protocol_m = rte_be_to_cpu_16(geneve_m->protocol);
+	protocol_v = rte_be_to_cpu_16(geneve_v->protocol);
+	if (!protocol_m) {
+		/* Force next protocol to prevent matchers duplication */
+		protocol_m = 0xFFFF;
+		protocol_v = mlx5_translate_tunnel_etypes(pattern_flags);
+	}
+	MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m);
+	MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type,
+		 protocol_m & protocol_v);
 }
 
 /**
@@ -13449,10 +13463,9 @@ flow_dv_translate(struct rte_eth_dev *dev,
 			tunnel_item = items;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GENEVE:
-			flow_dv_translate_item_geneve(match_mask, match_value,
-						      items, tunnel);
 			matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc);
 			last_item = MLX5_FLOW_LAYER_GENEVE;
+			tunnel_item = items;
 			break;
 		case RTE_FLOW_ITEM_TYPE_GENEVE_OPT:
 			ret = flow_dv_translate_item_geneve_opt(dev, match_mask,
@@ -13581,6 +13594,9 @@ flow_dv_translate(struct rte_eth_dev *dev,
 	if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE)
 		flow_dv_translate_item_vxlan_gpe(match_mask, match_value,
 						 tunnel_item, item_flags);
+	else if (item_flags & MLX5_FLOW_LAYER_GENEVE)
+		flow_dv_translate_item_geneve(match_mask, match_value,
+					      tunnel_item, item_flags);
 #ifdef RTE_LIBRTE_MLX5_DEBUG
 	MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf,
 					      dev_flow->dv.value.buf));
-- 
2.33.1
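
[Editorial illustration, not part of the patch above.] The scenario the commit
message describes can be reproduced from an application with two rte_flow rules
that differ only in the inner header behind the GENEVE item while leaving the
GENEVE "protocol" field unspecified. The sketch below uses the public rte_flow
API only; the port id, queue indexes, and the helper name are illustrative
assumptions:

	#include <stdint.h>
	#include <rte_flow.h>

	/*
	 * Hypothetical helper: create an ingress rule
	 *   eth / ipv4 / udp / geneve / eth / <inner_l3>
	 * with no spec/mask on the GENEVE item, so its protocol
	 * type is not constrained by the application.
	 */
	static struct rte_flow *
	create_geneve_flow(uint16_t port_id, enum rte_flow_item_type inner_l3,
			   uint16_t queue)
	{
		struct rte_flow_attr attr = { .ingress = 1 };
		struct rte_flow_item pattern[] = {
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
			{ .type = RTE_FLOW_ITEM_TYPE_UDP },
			{ .type = RTE_FLOW_ITEM_TYPE_GENEVE },
			{ .type = RTE_FLOW_ITEM_TYPE_ETH },
			{ .type = inner_l3 },
			{ .type = RTE_FLOW_ITEM_TYPE_END },
		};
		struct rte_flow_action_queue q = { .index = queue };
		struct rte_flow_action actions[] = {
			{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q },
			{ .type = RTE_FLOW_ACTION_TYPE_END },
		};
		struct rte_flow_error error;

		return rte_flow_create(port_id, &attr, pattern, actions, &error);
	}

	/*
	 * Before the fix, both rules translated to identical zero-mask
	 * GENEVE protocol matchers; with the fix, the PMD derives the
	 * protocol type (IPv4 vs. IPv6) from the inner item, so the
	 * matchers differ:
	 *
	 *	create_geneve_flow(0, RTE_FLOW_ITEM_TYPE_IPV4, 1);
	 *	create_geneve_flow(0, RTE_FLOW_ITEM_TYPE_IPV6, 2);
	 */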