From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dariusz Sosnowski
To: Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
Cc: stable@dpdk.org
Subject: [PATCH 22.11] net/mlx5: fix flow configure validation
Date: Wed, 3 Apr 2024 10:40:03 +0200
Message-ID: <20240403084003.23705-1-dsosnowski@nvidia.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: patches for DPDK stable branches
X-BeenThere: stable@dpdk.org

[ upstream commit ff9433b578195be8c6cb44443ad199defdbf3c99 ]

There is an existing limitation in the mlx5 PMD that all configured flow
queues must have the same size. Even though this condition is checked,
some allocations are done before that check. This led to a segmentation
fault during rollback on error in the rte_flow_configure()
implementation.

This patch fixes that by reorganizing the validation, so that
configuration options are validated before any allocations are done, and
the necessary NULL checks are added to the error rollback.
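The rollback hazard described above can be sketched outside of DPDK. The struct and function names below (`port_state`, `rollback`) are hypothetical stand-ins, not DPDK identifiers; the guard mirrors the `if (priv->hw_q)` check the patch adds to the error path:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-in for the per-port queue state;
 * illustration only, not a DPDK type. */
struct port_state {
	unsigned int nb_q;
	int **rings;	/* per-queue resources, NULL until allocated */
};

/* Error-path cleanup guarded against never-allocated state, in the
 * spirit of the "if (priv->hw_q)" guard the patch adds. */
static void
rollback(struct port_state *st)
{
	unsigned int i;

	if (st->rings == NULL)
		return;	/* validation failed before allocation: nothing to free */
	for (i = 0; i < st->nb_q; i++)
		free(st->rings[i]);
	free(st->rings);
	st->rings = NULL;
}
```

Without the NULL guard, reaching the cleanup from a pre-allocation failure would dereference a NULL `rings` pointer, which is the class of crash the patch removes.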
Bugzilla ID: 1199
Fixes: b401400db24e ("net/mlx5: add port flow configuration")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski
Acked-by: Suanming Mou
---
 drivers/net/mlx5/mlx5_flow_hw.c | 58 +++++++++++++++++++++++----------
 1 file changed, 41 insertions(+), 17 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 87d29ec0da..3b854ce73d 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -7078,6 +7078,38 @@ mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev)
 	}
 }
 
+static int
+flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
+			    uint16_t nb_queue,
+			    const struct rte_flow_queue_attr *queue_attr[],
+			    struct rte_flow_error *error)
+{
+	uint32_t size;
+	unsigned int i;
+
+	if (port_attr == NULL)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Port attributes must be non-NULL");
+
+	if (nb_queue == 0)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "At least one flow queue is required");
+
+	if (queue_attr == NULL)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Queue attributes must be non-NULL");
+
+	size = queue_attr[0]->size;
+	for (i = 1; i < nb_queue; ++i) {
+		if (queue_attr[i]->size != size)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL,
+						  "All flow queues must have the same size");
+	}
+
+	return 0;
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -7127,10 +7159,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	int ret = 0;
 	uint32_t action_flags;
 
-	if (!port_attr || !nb_queue || !queue_attr) {
-		rte_errno = EINVAL;
-		goto err;
-	}
+	if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
+		return -rte_errno;
 	/* In case re-configuring, release existing context at first.
 	 */
 	if (priv->dr_ctx) {
 		/* */
@@ -7163,14 +7193,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	/* Allocate the queue job descriptor LIFO. */
 	mem_size = sizeof(priv->hw_q[0]) * nb_q_updated;
 	for (i = 0; i < nb_q_updated; i++) {
-		/*
-		 * Check if the queues' size are all the same as the
-		 * limitation from HWS layer.
-		 */
-		if (_queue_attr[i]->size != _queue_attr[0]->size) {
-			rte_errno = EINVAL;
-			goto err;
-		}
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			    sizeof(struct mlx5_hw_q_job) +
 			    sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
@@ -7378,12 +7400,14 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	flow_hw_destroy_vlan(dev);
 	if (dr_ctx)
 		claim_zero(mlx5dr_context_close(dr_ctx));
-	for (i = 0; i < nb_q_updated; i++) {
-		rte_ring_free(priv->hw_q[i].indir_iq);
-		rte_ring_free(priv->hw_q[i].indir_cq);
+	if (priv->hw_q) {
+		for (i = 0; i < nb_q_updated; i++) {
+			rte_ring_free(priv->hw_q[i].indir_iq);
+			rte_ring_free(priv->hw_q[i].indir_cq);
+		}
+		mlx5_free(priv->hw_q);
+		priv->hw_q = NULL;
 	}
-	mlx5_free(priv->hw_q);
-	priv->hw_q = NULL;
 	if (priv->acts_ipool) {
 		mlx5_ipool_destroy(priv->acts_ipool);
 		priv->acts_ipool = NULL;
-- 
2.39.2
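The validate-before-allocate ordering that the patch introduces can be sketched in plain C. The names below (`queue_attr`, `validate_attrs`, `configure`) are hypothetical and simplified, not the DPDK/mlx5 API; the point is only that a validation failure returns before anything is allocated:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified queue attribute; not a DPDK type. */
struct queue_attr { uint32_t size; };

/* Validate every option up front, before any allocation, in the spirit
 * of the patch's flow_hw_validate_attributes(). */
static int
validate_attrs(const struct queue_attr *const *queue_attr, uint16_t nb_queue)
{
	uint32_t size;
	uint16_t i;

	if (queue_attr == NULL || nb_queue == 0)
		return -EINVAL;
	size = queue_attr[0]->size;
	for (i = 1; i < nb_queue; ++i)
		if (queue_attr[i]->size != size)
			return -EINVAL;	/* all queues must share one size */
	return 0;
}

/* Allocation happens only after validation succeeds, so a validation
 * failure never leaves half-initialized state for the rollback. */
static int
configure(const struct queue_attr *const *queue_attr, uint16_t nb_queue,
	  void **state)
{
	if (validate_attrs(queue_attr, nb_queue) != 0)
		return -EINVAL;	/* nothing allocated yet, nothing to roll back */
	*state = calloc(nb_queue, sizeof(uint32_t));
	return *state != NULL ? 0 : -ENOMEM;
}
```

This is the same structural move as the patch: the old code interleaved the equal-size check with the per-queue allocation loop, so a mismatch discovered mid-loop forced a rollback over partially built state; checking everything first makes the early-error path trivially safe.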