From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: <dev@dpdk.org>
CC: <hofors@lysator.liu.se>, Morten Brørup <mb@smartsharesystems.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 David Marchand <david.marchand@redhat.com>, Jerin Jacob <jerinj@marvell.com>,
 Luka Jankovic <luka.jankovic@ericsson.com>,
 Thomas Monjalon <thomas@monjalon.net>,
 Mattias Rönnblom <mattias.ronnblom@ericsson.com>,
 Konstantin Ananyev <konstantin.ananyev@huawei.com>,
 Chengwen Feng <fengchengwen@huawei.com>
Subject: [PATCH v17 7/8] service: keep per-lcore state in lcore variable
Date: Fri, 25 Oct 2024 10:41:48 +0200
Message-ID: <20241025084149.873037-8-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241025084149.873037-1-mattias.ronnblom@ericsson.com>
References: <20241023075302.869008-1-mattias.ronnblom@ericsson.com>
 <20241025084149.873037-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Replace the static array of cache-aligned structs with an lcore
variable, slightly improving both code simplicity and performance.
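
For context, the lcore variable access pattern used throughout this
patch is roughly as follows. This is only a sketch: the
lcore_states_example() helper is made up for illustration and assumes
it sits in rte_service.c, where struct core_state is defined.

#include <rte_common.h>
#include <rte_lcore_var.h>

/* The handle replaces the former RTE_MAX_LCORE-sized array of
 * cache-aligned core_state structs.
 */
static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);

static void
lcore_states_example(void)
{
	/* One-time allocation of the per-lcore values. */
	if (lcore_states == NULL)
		RTE_LCORE_VAR_ALLOC(lcore_states);

	/* Value belonging to the calling thread's lcore. */
	struct core_state *own_cs = RTE_LCORE_VAR(lcore_states);

	/* Value belonging to a particular lcore id. */
	struct core_state *cs0 = RTE_LCORE_VAR_LCORE(0, lcore_states);

	/* Iterate over the values of all lcore ids. */
	unsigned int lcore_id;
	struct core_state *cs;
	int service_cores = 0;

	RTE_LCORE_VAR_FOREACH(lcore_id, cs, lcore_states)
		service_cores += cs->is_service_core;

	RTE_SET_USED(own_cs);
	RTE_SET_USED(cs0);
	RTE_SET_USED(service_cores);
}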

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>

--

PATCH v14:
 * Merge with bitset-related changes.

PATCH v7:
 * Update to match new FOREACH API.

RFC v6:
 * Remove a now-redundant lcore variable value memset().

RFC v5:
 * Fix lcore value pointer bug introduced by RFC v4.

RFC v4:
 * Remove strange-looking lcore value lookup potentially containing
   invalid lcore id. (Morten Brørup)
 * Replace misplaced tab with space. (Morten Brørup)
---
 lib/eal/common/rte_service.c | 116 ++++++++++++++++++++---------------
 1 file changed, 65 insertions(+), 51 deletions(-)

diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 324471e897..dad3150df9 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -11,6 +11,7 @@
 
 #include <eal_trace_internal.h>
 #include <rte_lcore.h>
+#include <rte_lcore_var.h>
 #include <rte_bitset.h>
 #include <rte_branch_prediction.h>
 #include <rte_common.h>
@@ -78,7 +79,7 @@ struct __rte_cache_aligned core_state {
 
 static uint32_t rte_service_count;
 static struct rte_service_spec_impl *rte_services;
-static struct core_state *lcore_states;
+static RTE_LCORE_VAR_HANDLE(struct core_state, lcore_states);
 static uint32_t rte_service_library_initialized;
 
 int32_t
@@ -99,12 +100,8 @@ rte_service_init(void)
 		goto fail_mem;
 	}
 
-	lcore_states = rte_calloc("rte_service_core_states", RTE_MAX_LCORE,
-			sizeof(struct core_state), RTE_CACHE_LINE_SIZE);
-	if (!lcore_states) {
-		EAL_LOG(ERR, "error allocating core states array");
-		goto fail_mem;
-	}
+	if (lcore_states == NULL)
+		RTE_LCORE_VAR_ALLOC(lcore_states);
 
 	int i;
 	struct rte_config *cfg = rte_eal_get_configuration();
@@ -120,7 +117,6 @@ rte_service_init(void)
 	return 0;
 fail_mem:
 	rte_free(rte_services);
-	rte_free(lcore_states);
 	return -ENOMEM;
 }
 
@@ -134,7 +130,6 @@ rte_service_finalize(void)
 	rte_eal_mp_wait_lcore();
 
 	rte_free(rte_services);
-	rte_free(lcore_states);
 
 	rte_service_library_initialized = 0;
 }
@@ -284,7 +279,6 @@ rte_service_component_register(const struct rte_service_spec *spec,
 int32_t
 rte_service_component_unregister(uint32_t id)
 {
-	uint32_t i;
 	struct rte_service_spec_impl *s;
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
 
@@ -292,9 +286,11 @@ rte_service_component_unregister(uint32_t id)
 
 	s->internal_flags &= ~(SERVICE_F_REGISTERED);
 
+	unsigned int lcore_id;
+	struct core_state *cs;
 	/* clear the run-bit in all cores */
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		rte_bitset_clear(lcore_states[i].mapped_services, id);
+	RTE_LCORE_VAR_FOREACH(lcore_id, cs, lcore_states)
+		rte_bitset_clear(cs->mapped_services, id);
 
 	memset(&rte_services[id], 0, sizeof(struct rte_service_spec_impl));
 
@@ -463,7 +459,10 @@ rte_service_may_be_active(uint32_t id)
 		return -EINVAL;
 
 	for (i = 0; i < lcore_count; i++) {
-		if (rte_bitset_test(lcore_states[ids[i]].service_active_on_lcore, id))
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE(ids[i], lcore_states);
+
+		if (rte_bitset_test(cs->service_active_on_lcore, id))
 			return 1;
 	}
 
@@ -473,7 +472,7 @@ rte_service_may_be_active(uint32_t id)
 int32_t
 rte_service_run_iter_on_app_lcore(uint32_t id, uint32_t serialize_mt_unsafe)
 {
-	struct core_state *cs = &lcore_states[rte_lcore_id()];
+	struct core_state *cs =	RTE_LCORE_VAR(lcore_states);
 	struct rte_service_spec_impl *s;
 
 	SERVICE_VALID_GET_OR_ERR_RET(id, s, -EINVAL);
@@ -496,8 +495,7 @@ static int32_t
 service_runner_func(void *arg)
 {
 	RTE_SET_USED(arg);
-	const int lcore = rte_lcore_id();
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR(lcore_states);
 
 	rte_atomic_store_explicit(&cs->thread_active, 1, rte_memory_order_seq_cst);
 
@@ -533,13 +531,15 @@ service_runner_func(void *arg)
 int32_t
 rte_service_lcore_may_be_active(uint32_t lcore)
 {
-	if (lcore >= RTE_MAX_LCORE || !lcore_states[lcore].is_service_core)
+	struct core_state *cs = RTE_LCORE_VAR_LCORE(lcore, lcore_states);
+
+	if (lcore >= RTE_MAX_LCORE || !cs->is_service_core)
 		return -EINVAL;
 
 	/* Load thread_active using ACQUIRE to avoid instructions dependent on
 	 * the result being re-ordered before this load completes.
 	 */
-	return rte_atomic_load_explicit(&lcore_states[lcore].thread_active,
+	return rte_atomic_load_explicit(&cs->thread_active,
 			       rte_memory_order_acquire);
 }
 
@@ -547,9 +547,12 @@ int32_t
 rte_service_lcore_count(void)
 {
 	int32_t count = 0;
-	uint32_t i;
-	for (i = 0; i < RTE_MAX_LCORE; i++)
-		count += lcore_states[i].is_service_core;
+
+	unsigned int lcore_id;
+	struct core_state *cs;
+	RTE_LCORE_VAR_FOREACH(lcore_id, cs, lcore_states)
+		count += cs->is_service_core;
+
 	return count;
 }
 
@@ -566,7 +569,8 @@ rte_service_lcore_list(uint32_t array[], uint32_t n)
 	uint32_t i;
 	uint32_t idx = 0;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		struct core_state *cs = &lcore_states[i];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE(i, lcore_states);
 		if (cs->is_service_core) {
 			array[idx] = i;
 			idx++;
@@ -582,7 +586,7 @@ rte_service_lcore_count_services(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs = RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -634,28 +638,30 @@ rte_service_start_with_defaults(void)
 static int32_t
 service_update(uint32_t sid, uint32_t lcore, uint32_t *set, uint32_t *enabled)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
+
 	/* validate ID, or return error value */
 	if (!service_valid(sid) || lcore >= RTE_MAX_LCORE ||
-			!lcore_states[lcore].is_service_core)
+			!cs->is_service_core)
 		return -EINVAL;
 
 	if (set) {
-		uint64_t lcore_mapped = rte_bitset_test(lcore_states[lcore].mapped_services, sid);
+		bool lcore_mapped = rte_bitset_test(cs->mapped_services, sid);
 
 		if (*set && !lcore_mapped) {
-			rte_bitset_set(lcore_states[lcore].mapped_services, sid);
+			rte_bitset_set(cs->mapped_services, sid);
 			rte_atomic_fetch_add_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 		if (!*set && lcore_mapped) {
-			rte_bitset_clear(lcore_states[lcore].mapped_services, sid);
+			rte_bitset_clear(cs->mapped_services, sid);
 			rte_atomic_fetch_sub_explicit(&rte_services[sid].num_mapped_cores,
 				1, rte_memory_order_relaxed);
 		}
 	}
 
 	if (enabled)
-		*enabled = rte_bitset_test(lcore_states[lcore].mapped_services, sid);
+		*enabled = rte_bitset_test(cs->mapped_services, sid);
 
 	return 0;
 }
@@ -683,13 +689,14 @@ set_lcore_state(uint32_t lcore, int32_t state)
 {
 	/* mark core state in hugepage backed config */
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 	cfg->lcore_role[lcore] = state;
 
 	/* mark state in process local lcore_config */
 	lcore_config[lcore].core_role = state;
 
 	/* update per-lcore optimized state tracking */
-	lcore_states[lcore].is_service_core = (state == ROLE_SERVICE);
+	cs->is_service_core = (state == ROLE_SERVICE);
 
 	rte_eal_trace_service_lcore_state_change(lcore, state);
 }
@@ -700,14 +707,16 @@ rte_service_lcore_reset_all(void)
 	/* loop over cores, reset all mapped services */
 	uint32_t i;
 	for (i = 0; i < RTE_MAX_LCORE; i++) {
-		if (lcore_states[i].is_service_core) {
-			rte_bitset_clear_all(lcore_states[i].mapped_services, RTE_SERVICE_NUM_MAX);
+		struct core_state *cs =	RTE_LCORE_VAR_LCORE(i, lcore_states);
+
+		if (cs->is_service_core) {
+			rte_bitset_clear_all(cs->mapped_services, RTE_SERVICE_NUM_MAX);
 			set_lcore_state(i, ROLE_RTE);
 			/* runstate act as guard variable Use
 			 * store-release memory order here to synchronize
 			 * with load-acquire in runstate read functions.
 			 */
-			rte_atomic_store_explicit(&lcore_states[i].runstate,
+			rte_atomic_store_explicit(&cs->runstate,
 				RUNSTATE_STOPPED, rte_memory_order_release);
 		}
 	}
@@ -723,17 +732,19 @@ rte_service_lcore_add(uint32_t lcore)
 {
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
-	if (lcore_states[lcore].is_service_core)
+
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
+	if (cs->is_service_core)
 		return -EALREADY;
 
 	set_lcore_state(lcore, ROLE_SERVICE);
 
 	/* ensure that after adding a core the mask and state are defaults */
-	rte_bitset_clear_all(lcore_states[lcore].mapped_services, RTE_SERVICE_NUM_MAX);
+	rte_bitset_clear_all(cs->mapped_services, RTE_SERVICE_NUM_MAX);
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	return rte_eal_wait_lcore(lcore);
@@ -745,7 +756,7 @@ rte_service_lcore_del(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -769,7 +780,7 @@ rte_service_lcore_start(uint32_t lcore)
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 	if (!cs->is_service_core)
 		return -EINVAL;
 
@@ -799,6 +810,8 @@ rte_service_lcore_start(uint32_t lcore)
 int32_t
 rte_service_lcore_stop(uint32_t lcore)
 {
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
+
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
@@ -806,12 +819,11 @@ rte_service_lcore_stop(uint32_t lcore)
 	 * memory order here to synchronize with store-release
 	 * in runstate update functions.
 	 */
-	if (rte_atomic_load_explicit(&lcore_states[lcore].runstate, rte_memory_order_acquire) ==
+	if (rte_atomic_load_explicit(&cs->runstate, rte_memory_order_acquire) ==
 			RUNSTATE_STOPPED)
 		return -EALREADY;
 
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
 
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
 		bool enabled = rte_bitset_test(cs->mapped_services, i);
@@ -831,7 +843,7 @@ rte_service_lcore_stop(uint32_t lcore)
 	/* Use store-release memory order here to synchronize with
 	 * load-acquire in runstate read functions.
 	 */
-	rte_atomic_store_explicit(&lcore_states[lcore].runstate, RUNSTATE_STOPPED,
+	rte_atomic_store_explicit(&cs->runstate, RUNSTATE_STOPPED,
 		rte_memory_order_release);
 
 	rte_eal_trace_service_lcore_stop(lcore);
@@ -842,7 +854,7 @@ rte_service_lcore_stop(uint32_t lcore)
 static uint64_t
 lcore_attr_get_loops(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->loops, rte_memory_order_relaxed);
 }
@@ -850,7 +862,7 @@ lcore_attr_get_loops(unsigned int lcore)
 static uint64_t
 lcore_attr_get_cycles(unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->cycles, rte_memory_order_relaxed);
 }
@@ -858,7 +870,7 @@ lcore_attr_get_cycles(unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_calls(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].calls,
 		rte_memory_order_relaxed);
@@ -885,7 +897,7 @@ lcore_attr_get_service_error_calls(uint32_t service_id, unsigned int lcore)
 static uint64_t
 lcore_attr_get_service_cycles(uint32_t service_id, unsigned int lcore)
 {
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	return rte_atomic_load_explicit(&cs->service_stats[service_id].cycles,
 		rte_memory_order_relaxed);
@@ -901,7 +913,10 @@ attr_get(uint32_t id, lcore_attr_get_fun lcore_attr_get)
 	uint64_t sum = 0;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		if (lcore_states[lcore].is_service_core)
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE(lcore, lcore_states);
+
+		if (cs->is_service_core)
 			sum += lcore_attr_get(id, lcore);
 	}
 
@@ -963,12 +978,11 @@ int32_t
 rte_service_lcore_attr_get(uint32_t lcore, uint32_t attr_id,
 			   uint64_t *attr_value)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE || !attr_value)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -993,7 +1007,8 @@ rte_service_attr_reset_all(uint32_t id)
 		return -EINVAL;
 
 	for (lcore = 0; lcore < RTE_MAX_LCORE; lcore++) {
-		struct core_state *cs = &lcore_states[lcore];
+		struct core_state *cs =
+			RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 		cs->service_stats[id] = (struct service_stats) {};
 	}
@@ -1004,12 +1019,11 @@ rte_service_attr_reset_all(uint32_t id)
 int32_t
 rte_service_lcore_attr_reset_all(uint32_t lcore)
 {
-	struct core_state *cs;
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	if (lcore >= RTE_MAX_LCORE)
 		return -EINVAL;
 
-	cs = &lcore_states[lcore];
 	if (!cs->is_service_core)
 		return -ENOTSUP;
 
@@ -1044,7 +1058,7 @@ static void
 service_dump_calls_per_lcore(FILE *f, uint32_t lcore)
 {
 	uint32_t i;
-	struct core_state *cs = &lcore_states[lcore];
+	struct core_state *cs =	RTE_LCORE_VAR_LCORE(lcore, lcore_states);
 
 	fprintf(f, "%02d\t", lcore);
 	for (i = 0; i < RTE_SERVICE_NUM_MAX; i++) {
-- 
2.43.0