From: Rongwei Liu
Subject: [PATCH 22.11 v1] net/mlx5: fix age checking crash
Date: Tue, 28 Oct 2025 10:16:01 +0200
Message-ID: <20251028081601.1749225-1-rongweil@nvidia.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain
List-Id: patches for DPDK stable branches

When aging is configured, there is a background thread which queries
all the counters in the pool. Meanwhile, per-queue flow
insertion/deletion/update changes the counter pool too. This introduces
a race condition between resetting a counter's in_used and age_idx
fields during flow deletion and reading them in the background thread.

To resolve it, all key members of the counter's struct are placed in a
single uint32_t and accessed atomically. To avoid the occasional
timestamp equalization with age_idx, query_gen_when_free is moved out
of the union. The total memory size is kept the same.

Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Cc: michaelba@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Rongwei Liu
---
 drivers/net/mlx5/mlx5_flow_hw.c |   5 +-
 drivers/net/mlx5/mlx5_hws_cnt.c |   8 +--
 drivers/net/mlx5/mlx5_hws_cnt.h | 124 +++++++++++++++++++-------------
 3 files changed, 82 insertions(+), 55 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5fbbd487de..df02bd237c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -1933,7 +1933,7 @@ flow_hw_shared_action_construct(struct rte_eth_dev *dev, uint32_t queue,
 			return -1;
 		if (action_flags & MLX5_FLOW_ACTION_COUNT) {
 			cnt_queue = mlx5_hws_cnt_get_queue(priv, &queue);
-			if (mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &age_cnt, idx) < 0)
+			if (mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &age_cnt, idx, 1) < 0)
 				return -1;
 			flow->cnt_id = age_cnt;
 			param->nb_cnts++;
@@ -2321,7 +2321,8 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			/* Fall-through. */
 		case RTE_FLOW_ACTION_TYPE_COUNT:
 			cnt_queue = mlx5_hws_cnt_get_queue(priv, &queue);
-			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &cnt_id, age_idx);
+			ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &cnt_id,
+						    age_idx, 0);
 			if (ret != 0) {
 				rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_ACTION,
 						   action, "Failed to allocate flow counter");
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index 6a0c371cd9..956ee5a172 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -72,8 +72,8 @@ __mlx5_hws_cnt_svc(struct mlx5_dev_ctx_shared *sh,
 	uint32_t ret __rte_unused;
 
 	reset_cnt_num = rte_ring_count(reset_list);
-	cpool->query_gen++;
 	mlx5_aso_cnt_query(sh, cpool);
+	__atomic_store_n(&cpool->query_gen, cpool->query_gen + 1, __ATOMIC_RELEASE);
 	zcdr.n1 = 0;
 	zcdu.n1 = 0;
 	ret = rte_ring_enqueue_zc_burst_elem_start(reuse_list,
@@ -143,14 +143,14 @@ mlx5_hws_aging_check(struct mlx5_priv *priv, struct mlx5_hws_cnt_pool *cpool)
 	uint32_t nb_alloc_cnts = mlx5_hws_cnt_pool_get_size(cpool);
 	uint16_t expected1 = HWS_AGE_CANDIDATE;
 	uint16_t expected2 = HWS_AGE_CANDIDATE_INSIDE_RING;
-	uint32_t i;
+	uint32_t i, age_idx, in_use;
 
 	cpool->time_of_last_age_check = curr_time;
 	for (i = 0; i < nb_alloc_cnts; ++i) {
-		uint32_t age_idx = cpool->pool[i].age_idx;
 		uint64_t hits;
 
-		if (!cpool->pool[i].in_used || age_idx == 0)
+		mlx5_hws_cnt_get_all(&cpool->pool[i], &in_use, NULL, &age_idx);
+		if (!in_use || age_idx == 0)
 			continue;
 		param = mlx5_ipool_get(age_info->ages_ipool, age_idx);
 		if (unlikely(param == NULL)) {
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.h b/drivers/net/mlx5/mlx5_hws_cnt.h
index 72751f3330..115586fced 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.h
+++ b/drivers/net/mlx5/mlx5_hws_cnt.h
@@ -42,33 +42,36 @@ struct mlx5_hws_cnt_dcs_mng {
 	struct mlx5_hws_cnt_dcs dcs[MLX5_HWS_CNT_DCS_NUM];
 };
 
-struct mlx5_hws_cnt {
-	struct flow_counter_stats reset;
-	bool in_used; /* Indicator whether this counter in used or in pool. */
-	union {
-		struct {
-			uint32_t share:1;
-			/*
-			 * share will be set to 1 when this counter is used as
-			 * indirect action.
-			 */
-			uint32_t age_idx:24;
-			/*
-			 * When this counter uses for aging, it save the index
-			 * of AGE parameter. For pure counter (without aging)
-			 * this index is zero.
-			 */
-		};
-		/* This struct is only meaningful when user own this counter. */
-		uint32_t query_gen_when_free;
+union mlx5_hws_cnt_state {
+	uint32_t data;
+	struct {
+		uint32_t in_used:1;
+		/* Indicator whether this counter in used or in pool. */
+		uint32_t share:1;
+		/*
+		 * share will be set to 1 when this counter is used as
+		 * indirect action.
+		 */
+		uint32_t age_idx:24;
 		/*
-		 * When PMD own this counter (user put back counter to PMD
-		 * counter pool, i.e), this field recorded value of counter
-		 * pools query generation at time user release the counter.
+		 * When this counter uses for aging, it stores the index
+		 * of AGE parameter. Otherwise, this index is zero.
 		 */
 	};
 };
 
+struct mlx5_hws_cnt {
+	struct flow_counter_stats reset;
+	union mlx5_hws_cnt_state cnt_state;
+	/* This struct is only meaningful when user own this counter. */
+	uint32_t query_gen_when_free;
+	/*
+	 * When PMD own this counter (user put back counter to PMD
+	 * counter pool, i.e), this field recorded value of counter
+	 * pools query generation at time user release the counter.
+	 */
+};
+
 struct mlx5_hws_cnt_raw_data_mng {
 	struct flow_counter_stats *raw;
 	struct mlx5_pmd_mr mr;
@@ -179,6 +182,42 @@ mlx5_hws_cnt_id_valid(cnt_id_t cnt_id)
 	       MLX5_INDIRECT_ACTION_TYPE_COUNT ? true : false;
 }
 
+static __rte_always_inline void
+mlx5_hws_cnt_set_age_idx(struct mlx5_hws_cnt *cnt, uint32_t value)
+{
+	union mlx5_hws_cnt_state cnt_state;
+
+	cnt_state.data = __atomic_load_n(&cnt->cnt_state.data, __ATOMIC_ACQUIRE);
+	cnt_state.age_idx = value;
+	__atomic_store_n(&cnt->cnt_state.data, cnt_state.data, __ATOMIC_RELEASE);
+}
+
+static __rte_always_inline void
+mlx5_hws_cnt_set_all(struct mlx5_hws_cnt *cnt, uint32_t in_used, uint32_t share, uint32_t age_idx)
+{
+	union mlx5_hws_cnt_state cnt_state;
+
+	cnt_state.in_used = !!in_used;
+	cnt_state.share = !!share;
+	cnt_state.age_idx = age_idx;
+	__atomic_store_n(&cnt->cnt_state.data, cnt_state.data, __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+mlx5_hws_cnt_get_all(struct mlx5_hws_cnt *cnt, uint32_t *in_used, uint32_t *share,
+		     uint32_t *age_idx)
+{
+	union mlx5_hws_cnt_state cnt_state;
+
+	cnt_state.data = __atomic_load_n(&cnt->cnt_state.data, __ATOMIC_ACQUIRE);
+	if (in_used != NULL)
+		*in_used = cnt_state.in_used;
+	if (share != NULL)
+		*share = cnt_state.share;
+	if (age_idx != NULL)
+		*age_idx = cnt_state.age_idx;
+}
+
 /**
  * Generate Counter id from internal index.
  *
@@ -402,8 +441,7 @@ mlx5_hws_cnt_pool_put(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
 	uint32_t iidx;
 
 	iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
-	MLX5_ASSERT(cpool->pool[iidx].in_used);
-	cpool->pool[iidx].in_used = false;
+	mlx5_hws_cnt_set_all(&cpool->pool[iidx], 0, 0, 0);
 	cpool->pool[iidx].query_gen_when_free =
 		__atomic_load_n(&cpool->query_gen, __ATOMIC_RELAXED);
 	if (likely(queue != NULL))
@@ -459,7 +497,7 @@ mlx5_hws_cnt_pool_put(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
  */
 static __rte_always_inline int
 mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
-		      cnt_id_t *cnt_id, uint32_t age_idx)
+		      cnt_id_t *cnt_id, uint32_t age_idx, uint32_t shared)
 {
 	unsigned int ret;
 	struct rte_ring_zc_data zcdc = {0};
@@ -486,9 +524,7 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
 		__hws_cnt_query_raw(cpool, *cnt_id,
 				    &cpool->pool[iidx].reset.hits,
 				    &cpool->pool[iidx].reset.bytes);
-		MLX5_ASSERT(!cpool->pool[iidx].in_used);
-		cpool->pool[iidx].in_used = true;
-		cpool->pool[iidx].age_idx = age_idx;
+		mlx5_hws_cnt_set_all(&cpool->pool[iidx], 1, shared, age_idx);
 		return 0;
 	}
 	ret = rte_ring_dequeue_zc_burst_elem_start(qcache, sizeof(cnt_id_t), 1,
@@ -526,10 +562,7 @@ mlx5_hws_cnt_pool_get(struct mlx5_hws_cnt_pool *cpool, uint32_t *queue,
 	__hws_cnt_query_raw(cpool, *cnt_id,
 			    &cpool->pool[iidx].reset.hits,
 			    &cpool->pool[iidx].reset.bytes);
 	rte_ring_dequeue_zc_elem_finish(qcache, 1);
-	cpool->pool[iidx].share = 0;
-	MLX5_ASSERT(!cpool->pool[iidx].in_used);
-	cpool->pool[iidx].in_used = true;
-	cpool->pool[iidx].age_idx = age_idx;
+	mlx5_hws_cnt_set_all(&cpool->pool[iidx], 1, shared, age_idx);
 	return 0;
 }
@@ -582,23 +615,12 @@ static __rte_always_inline int
 mlx5_hws_cnt_shared_get(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id,
 			uint32_t age_idx)
 {
-	int ret;
-	uint32_t iidx;
-
-	ret = mlx5_hws_cnt_pool_get(cpool, NULL, cnt_id, age_idx);
-	if (ret != 0)
-		return ret;
-	iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
-	cpool->pool[iidx].share = 1;
-	return 0;
+	return mlx5_hws_cnt_pool_get(cpool, NULL, cnt_id, age_idx, 1);
 }
 
 static __rte_always_inline void
 mlx5_hws_cnt_shared_put(struct mlx5_hws_cnt_pool *cpool, cnt_id_t *cnt_id)
 {
-	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, *cnt_id);
-
-	cpool->pool[iidx].share = 0;
 	mlx5_hws_cnt_pool_put(cpool, NULL, cnt_id);
 }
 
@@ -606,8 +628,10 @@ static __rte_always_inline bool
 mlx5_hws_cnt_is_shared(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
 {
 	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+	uint32_t share;
 
-	return cpool->pool[iidx].share ? true : false;
+	mlx5_hws_cnt_get_all(&cpool->pool[iidx], NULL, &share, NULL);
+	return !!share;
 }
 
 static __rte_always_inline void
@@ -616,17 +640,19 @@ mlx5_hws_cnt_age_set(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id,
 {
 	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
 
-	MLX5_ASSERT(cpool->pool[iidx].share);
-	cpool->pool[iidx].age_idx = age_idx;
+	MLX5_ASSERT(cpool->pool[iidx].cnt_state.share);
+	mlx5_hws_cnt_set_age_idx(&cpool->pool[iidx], age_idx);
 }
 
 static __rte_always_inline uint32_t
 mlx5_hws_cnt_age_get(struct mlx5_hws_cnt_pool *cpool, cnt_id_t cnt_id)
 {
 	uint32_t iidx = mlx5_hws_cnt_iidx(cpool, cnt_id);
+	uint32_t age_idx, share;
 
-	MLX5_ASSERT(cpool->pool[iidx].share);
-	return cpool->pool[iidx].age_idx;
+	mlx5_hws_cnt_get_all(&cpool->pool[iidx], NULL, &share, &age_idx);
+	MLX5_ASSERT(share);
+	return age_idx;
 }
 
 static __rte_always_inline cnt_id_t
-- 
2.27.0