From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xueming Li
To: Suanming Mou
CC: Matan Azrad, dpdk stable
Subject: patch 'net/mlx5: fix counter query during port close' has been
 queued to stable release 22.11.4
Date: Mon, 11 Dec 2023 18:11:51 +0800
Message-ID: <20231211101226.2122-87-xuemingl@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20231211101226.2122-1-xuemingl@nvidia.com>
References: <20231022142250.10324-1-xuemingl@nvidia.com>
 <20231211101226.2122-1-xuemingl@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

Hi,

FYI, your patch has been queued to stable release 22.11.4

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/13/23. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Queued patches are on a temporary branch at:
	https://git.dpdk.org/dpdk-stable/log/?h=22.11-staging

This queued commit can be viewed at:
	https://git.dpdk.org/dpdk-stable/commit/?h=22.11-staging&id=97b9c4dca36ba6465a195e2a860455476370012a

Thanks.

Xueming Li

---
>From 97b9c4dca36ba6465a195e2a860455476370012a Mon Sep 17 00:00:00 2001
From: Suanming Mou
Date: Thu, 9 Nov 2023 16:07:51 +0800
Subject: [PATCH] net/mlx5: fix counter query during port close
Cc: Xueming Li

[ upstream commit 6ac2104ac125b6e8037d6c13ba102b7afe27cf38 ]

Currently, the counter query service thread queries all the ports that
belong to the same shared context (sh). If one of the ports is closing,
the query may still proceed on it. This commit adds a pool list to the
shared context to manage the pools, so that a pool is not queried while
its port is being closed.

Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")

Signed-off-by: Suanming Mou
Acked-by: Matan Azrad
---
 drivers/net/mlx5/mlx5.c         |  3 +++
 drivers/net/mlx5/mlx5.h         |  2 ++
 drivers/net/mlx5/mlx5_hws_cnt.c | 36 ++++++++++++++++++++++-----------
 drivers/net/mlx5/mlx5_hws_cnt.h |  2 ++
 4 files changed, 31 insertions(+), 12 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1dfd10e7cb..90dbb6e3b0 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1612,6 +1612,9 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 	/* Add context to the global device list. */
 	LIST_INSERT_HEAD(&mlx5_dev_ctx_list, sh, next);
 	rte_spinlock_init(&sh->geneve_tlv_opt_sl);
+	/* Init counter pool list header and lock. */
+	LIST_INIT(&sh->hws_cpool_list);
+	rte_spinlock_init(&sh->cpool_lock);
 exit:
 	pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
 	return sh;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 5f8361c52b..fa8931e8b5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1455,6 +1455,8 @@ struct mlx5_dev_ctx_shared {
 	uint32_t host_shaper_rate:8;
 	uint32_t lwm_triggered:1;
 	struct mlx5_hws_cnt_svc_mng *cnt_svc;
+	rte_spinlock_t cpool_lock;
+	LIST_HEAD(hws_cpool_list, mlx5_hws_cnt_pool) hws_cpool_list; /* Count pool list. */
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
 
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index 8ccc6ab1f8..791fde4458 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -306,26 +306,25 @@ mlx5_hws_cnt_svc(void *opaque)
 			(struct mlx5_dev_ctx_shared *)opaque;
 	uint64_t interval =
 		(uint64_t)sh->cnt_svc->query_interval * (US_PER_S / MS_PER_S);
-	uint16_t port_id;
+	struct mlx5_hws_cnt_pool *hws_cpool;
 	uint64_t start_cycle, query_cycle = 0;
 	uint64_t query_us;
 	uint64_t sleep_us;
 
 	while (sh->cnt_svc->svc_running != 0) {
+		if (rte_spinlock_trylock(&sh->cpool_lock) == 0)
+			continue;
 		start_cycle = rte_rdtsc();
-		MLX5_ETH_FOREACH_DEV(port_id, sh->cdev->dev) {
-			struct mlx5_priv *opriv =
-				rte_eth_devices[port_id].data->dev_private;
-			if (opriv != NULL &&
-			    opriv->sh == sh &&
-			    opriv->hws_cpool != NULL) {
-				__mlx5_hws_cnt_svc(sh, opriv->hws_cpool);
-				if (opriv->hws_age_req)
-					mlx5_hws_aging_check(opriv,
-							     opriv->hws_cpool);
-			}
+		/* 200ms for 16M counters. */
+		LIST_FOREACH(hws_cpool, &sh->hws_cpool_list, next) {
+			struct mlx5_priv *opriv = hws_cpool->priv;
+
+			__mlx5_hws_cnt_svc(sh, hws_cpool);
+			if (opriv->hws_age_req)
+				mlx5_hws_aging_check(opriv, hws_cpool);
 		}
 		query_cycle = rte_rdtsc() - start_cycle;
+		rte_spinlock_unlock(&sh->cpool_lock);
 		query_us = query_cycle / (rte_get_timer_hz() / US_PER_S);
 		sleep_us = interval - query_us;
 		if (interval > query_us)
@@ -659,6 +658,10 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
 	if (ret != 0)
 		goto error;
 	priv->sh->cnt_svc->refcnt++;
+	cpool->priv = priv;
+	rte_spinlock_lock(&priv->sh->cpool_lock);
+	LIST_INSERT_HEAD(&priv->sh->hws_cpool_list, cpool, next);
+	rte_spinlock_unlock(&priv->sh->cpool_lock);
 	return cpool;
 error:
 	mlx5_hws_cnt_pool_destroy(priv->sh, cpool);
@@ -671,6 +674,13 @@ mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
 {
 	if (cpool == NULL)
 		return;
+	/*
+	 * 16M counter consumes 200ms to finish the query.
+	 * Maybe blocked for at most 200ms here.
+	 */
+	rte_spinlock_lock(&sh->cpool_lock);
+	LIST_REMOVE(cpool, next);
+	rte_spinlock_unlock(&sh->cpool_lock);
 	if (--sh->cnt_svc->refcnt == 0)
 		mlx5_hws_cnt_svc_deinit(sh);
 	mlx5_hws_cnt_pool_action_destroy(cpool);
@@ -1228,11 +1238,13 @@ mlx5_hws_age_pool_destroy(struct mlx5_priv *priv)
 {
 	struct mlx5_age_info *age_info = GET_PORT_AGE_INFO(priv);
 
+	rte_spinlock_lock(&priv->sh->cpool_lock);
 	MLX5_ASSERT(priv->hws_age_req);
 	mlx5_hws_age_info_destroy(priv);
 	mlx5_ipool_destroy(age_info->ages_ipool);
 	age_info->ages_ipool = NULL;
 	priv->hws_age_req = 0;
+	rte_spinlock_unlock(&priv->sh->cpool_lock);
 }
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 030dcead86..b5c19a8e2c 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -97,6 +97,7 @@ struct mlx5_hws_cnt_pool_caches {
 };
 
 struct mlx5_hws_cnt_pool {
+	LIST_ENTRY(mlx5_hws_cnt_pool) next;
 	struct mlx5_hws_cnt_pool_cfg cfg __rte_cache_aligned;
 	struct mlx5_hws_cnt_dcs_mng dcs_mng __rte_cache_aligned;
 	uint32_t query_gen __rte_cache_aligned;
@@ -107,6 +108,7 @@ struct mlx5_hws_cnt_pool {
 	struct rte_ring *wait_reset_list;
 	struct mlx5_hws_cnt_pool_caches *cache;
 	uint64_t time_of_last_age_check;
+	struct mlx5_priv *priv;
 } __rte_cache_aligned;
 
 /* HWS AGE status. */
-- 
2.25.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2023-12-11 17:56:25.866227700 +0800
+++ 0086-net-mlx5-fix-counter-query-during-port-close.patch	2023-12-11 17:56:23.157652300 +0800
@@ -1 +1 @@
-From 6ac2104ac125b6e8037d6c13ba102b7afe27cf38 Mon Sep 17 00:00:00 2001
+From 97b9c4dca36ba6465a195e2a860455476370012a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li
+
+[ upstream commit 6ac2104ac125b6e8037d6c13ba102b7afe27cf38 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 2cf21a1921..d6cb0d1c8a 100644
+index 1dfd10e7cb..90dbb6e3b0 100644
@@ -29 +31,2 @@
-@@ -1814,6 +1814,9 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
+@@ -1612,6 +1612,9 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
+ 	/* Add context to the global device list. */
@@ -32 +34,0 @@
-	mlx5_init_shared_dev_registers(sh);
@@ -40 +42 @@
-index ee13ad6db2..f5eacb2c67 100644
+index 5f8361c52b..fa8931e8b5 100644
@@ -43 +45 @@
-@@ -1521,6 +1521,8 @@ struct mlx5_dev_ctx_shared {
+@@ -1455,6 +1455,8 @@ struct mlx5_dev_ctx_shared {
@@ -49 +50,0 @@
-	struct mlx5_dev_registers registers;
@@ -51,0 +53 @@
+ 
@@ -53 +55 @@
-index f556a9fbcc..a3bea94811 100644
+index 8ccc6ab1f8..791fde4458 100644
@@ -56 +58 @@
-@@ -294,26 +294,25 @@ mlx5_hws_cnt_svc(void *opaque)
+@@ -306,26 +306,25 @@ mlx5_hws_cnt_svc(void *opaque)
@@ -94 +96 @@
-@@ -665,6 +664,10 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
+@@ -659,6 +658,10 @@ mlx5_hws_cnt_pool_create(struct rte_eth_dev *dev,
@@ -105 +107 @@
-@@ -677,6 +680,13 @@ mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
+@@ -671,6 +674,13 @@ mlx5_hws_cnt_pool_destroy(struct mlx5_dev_ctx_shared *sh,
@@ -116,4 +118,4 @@
- 	if (cpool->cfg.host_cpool == NULL) {
- 		if (--sh->cnt_svc->refcnt == 0)
- 			mlx5_hws_cnt_svc_deinit(sh);
-@@ -1244,11 +1254,13 @@ mlx5_hws_age_pool_destroy(struct mlx5_priv *priv)
+ 	if (--sh->cnt_svc->refcnt == 0)
+ 		mlx5_hws_cnt_svc_deinit(sh);
+ 	mlx5_hws_cnt_pool_action_destroy(cpool);
+@@ -1228,11 +1238,13 @@ mlx5_hws_age_pool_destroy(struct mlx5_priv *priv)
@@ -134 +136 @@
-index dcd5cec020..585b5a83ad 100644
+index 030dcead86..b5c19a8e2c 100644
@@ -137 +139 @@
-@@ -98,6 +98,7 @@ struct mlx5_hws_cnt_pool_caches {
+@@ -97,6 +97,7 @@ struct mlx5_hws_cnt_pool_caches {
@@ -145 +147 @@
-@@ -108,6 +109,7 @@ struct mlx5_hws_cnt_pool {
+@@ -107,6 +108,7 @@ struct mlx5_hws_cnt_pool {
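
For reference, not part of the queued patch: a minimal standalone sketch of the
list-under-lock pattern the fix introduces, using plain pthreads and
<sys/queue.h> with hypothetical names rather than the mlx5/DPDK ones. Pools are
published on a shared list at create time, unlinked under the lock at close
time, and a query pass walks only the pools currently on the list, so a pool
being torn down during port close can never be queried.

/* Sketch only: hypothetical names, not the mlx5 implementation. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct cnt_pool {
	LIST_ENTRY(cnt_pool) next;	/* list linkage, as in the patch */
	int id;
};

struct shared_ctx {
	pthread_mutex_t pool_lock;	/* protects pool_list */
	LIST_HEAD(pool_list, cnt_pool) pool_list;
};

/* Pool create: publish the pool to the query thread under the lock. */
static struct cnt_pool *pool_create(struct shared_ctx *sh, int id)
{
	struct cnt_pool *p = calloc(1, sizeof(*p));

	if (p == NULL)
		return NULL;
	p->id = id;
	pthread_mutex_lock(&sh->pool_lock);
	LIST_INSERT_HEAD(&sh->pool_list, p, next);
	pthread_mutex_unlock(&sh->pool_lock);
	return p;
}

/* Pool destroy (port close): unlink first so the query loop can no longer
 * see the pool; may block until the current query pass finishes. */
static void pool_destroy(struct shared_ctx *sh, struct cnt_pool *p)
{
	pthread_mutex_lock(&sh->pool_lock);
	LIST_REMOVE(p, next);
	pthread_mutex_unlock(&sh->pool_lock);
	free(p);
}

/* One pass of the query service thread: walk only registered pools. */
static void query_pass(struct shared_ctx *sh)
{
	struct cnt_pool *p;

	if (pthread_mutex_trylock(&sh->pool_lock) != 0)
		return;		/* lock busy: skip the pass, like the trylock in the patch */
	LIST_FOREACH(p, &sh->pool_list, next)
		printf("query pool %d\n", p->id);
	pthread_mutex_unlock(&sh->pool_lock);
}

int main(void)
{
	struct shared_ctx sh = { .pool_lock = PTHREAD_MUTEX_INITIALIZER };
	struct cnt_pool *p0, *p1;

	LIST_INIT(&sh.pool_list);
	p0 = pool_create(&sh, 0);
	p1 = pool_create(&sh, 1);
	query_pass(&sh);	/* queries pools 1 and 0 */
	pool_destroy(&sh, p0);	/* "port close": pool 0 unlinked */
	query_pass(&sh);	/* queries pool 1 only */
	pool_destroy(&sh, p1);
	return 0;
}

The trylock in the query pass mirrors the patch: if the lock is held because a
pool is being added or removed, the pass is skipped rather than racing with the
pool teardown.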