From: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
To: <dev@dpdk.org>
CC: <hofors@lysator.liu.se>, Heng Wang <heng.wang@ericsson.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Joyce Kong <joyce.kong@arm.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>,
 Morten Brørup <mb@smartsharesystems.com>,
 Mattias Rönnblom <mattias.ronnblom@ericsson.com>
Subject: [PATCH 5/5] eal: extend bitops to handle volatile pointers
Date: Fri, 9 Aug 2024 11:04:39 +0200
Message-ID: <20240809090439.589295-6-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240809090439.589295-1-mattias.ronnblom@ericsson.com>
References: <20240505083737.118649-2-mattias.ronnblom@ericsson.com>
 <20240809090439.589295-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Have rte_bit_[test|set|clear|assign|flip]() and rte_bit_atomic_*()
accept volatile-qualified pointers.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 app/test/test_bitops.c       |  30 ++-
 lib/eal/include/rte_bitops.h | 427 ++++++++++++++++++++++-------------
 2 files changed, 289 insertions(+), 168 deletions(-)
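
A minimal usage sketch of what this change enables (the shared flags
word and the ready-bit scenario below are illustrative, not part of
the patch):

  #include <stdint.h>

  #include <rte_bitops.h>

  /* A word shared with another lcore, which the application has
   * chosen to mark volatile. */
  static volatile uint64_t flags;

  static void
  signal_ready(void)
  {
  	/* A volatile uint64_t * previously matched no _Generic
  	 * association in rte_bit_atomic_set(); it now dispatches to
  	 * the volatile-aware implementation. */
  	rte_bit_atomic_set(&flags, 3, rte_memory_order_release);
  }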

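The selection mechanism itself, reduced to a self-contained sketch
(set32() and v_set32() are hypothetical stand-ins for the real
__rte_bit_*() helpers):

  #include <stdint.h>
  #include <stdio.h>

  static void
  set32(uint32_t *w, unsigned int nr) { *w |= UINT32_C(1) << nr; }

  static void
  v_set32(volatile uint32_t *w, unsigned int nr) { *w |= UINT32_C(1) << nr; }

  /* _Generic matches the pointee's qualifiers exactly, so volatile
   * pointers need associations of their own. */
  #define bit_set(addr, nr) \
  	_Generic((addr), \
  		 uint32_t *: set32, \
  		 volatile uint32_t *: v_set32)(addr, nr)

  int
  main(void)
  {
  	uint32_t a = 0;
  	volatile uint32_t b = 0;

  	bit_set(&a, 0); /* selects set32() */
  	bit_set(&b, 1); /* selects v_set32() */

  	printf("%u %u\n", (unsigned int)a, (unsigned int)b); /* "1 2" */

  	return 0;
  }
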
diff --git a/app/test/test_bitops.c b/app/test/test_bitops.c
index b80216a0a1..e6e9f7ec44 100644
--- a/app/test/test_bitops.c
+++ b/app/test/test_bitops.c
@@ -14,13 +14,13 @@
 #include "test.h"
 
 #define GEN_TEST_BIT_ACCESS(test_name, set_fun, clear_fun, assign_fun,	\
-			    flip_fun, test_fun, size)			\
+			    flip_fun, test_fun, size, mod)		\
 	static int							\
 	test_name(void)							\
 	{								\
 		uint ## size ## _t reference = (uint ## size ## _t)rte_rand(); \
 		unsigned int bit_nr;					\
-		uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
+		mod uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
 									\
 		for (bit_nr = 0; bit_nr < size; bit_nr++) {		\
 			bool reference_bit = (reference >> bit_nr) & 1;	\
@@ -41,7 +41,7 @@
 				    "Bit %d had unflipped value", bit_nr); \
 			flip_fun(&word, bit_nr);			\
 									\
-			const uint ## size ## _t *const_ptr = &word;	\
+			const mod uint ## size ## _t *const_ptr = &word; \
 			TEST_ASSERT(test_fun(const_ptr, bit_nr) ==	\
 				    reference_bit,			\
 				    "Bit %d had unexpected value", bit_nr); \
@@ -59,10 +59,16 @@
 	}
 
 GEN_TEST_BIT_ACCESS(test_bit_access32, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access32, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access64, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64, volatile)
 
 #define bit_atomic_set(addr, nr)				\
 	rte_bit_atomic_set(addr, nr, rte_memory_order_relaxed)
@@ -81,11 +87,19 @@ GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access32, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 32)
+		    bit_atomic_flip, bit_atomic_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access64, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 64)
+		    bit_atomic_flip, bit_atomic_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access32, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access64, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 64, volatile)
 
 #define PARALLEL_TEST_RUNTIME 0.25
 
@@ -480,6 +494,8 @@ static struct unit_test_suite test_suite = {
 		TEST_CASE(test_bit_access64),
 		TEST_CASE(test_bit_access32),
 		TEST_CASE(test_bit_access64),
+		TEST_CASE(test_bit_v_access32),
+		TEST_CASE(test_bit_v_access64),
 		TEST_CASE(test_bit_atomic_access32),
 		TEST_CASE(test_bit_atomic_access64),
 		TEST_CASE(test_bit_atomic_parallel_assign32),
diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 4d878099ed..1355949fb6 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -127,12 +127,16 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_test(addr, nr)					\
-	_Generic((addr),					\
-		uint32_t *: __rte_bit_test32,			\
-		const uint32_t *: __rte_bit_test32,		\
-		uint64_t *: __rte_bit_test64,			\
-		const uint64_t *: __rte_bit_test64)(addr, nr)
+#define rte_bit_test(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_test32,				\
+		 const uint32_t *: __rte_bit_test32,			\
+		 volatile uint32_t *: __rte_bit_v_test32,		\
+		 const volatile uint32_t *: __rte_bit_v_test32,		\
+		 uint64_t *: __rte_bit_test64,				\
+		 const uint64_t *: __rte_bit_test64,			\
+		 volatile uint64_t *: __rte_bit_v_test64,		\
+		 const volatile uint64_t *: __rte_bit_v_test64)(addr, nr)
 
 /**
  * @warning
@@ -152,10 +156,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_set(addr, nr)				\
-	_Generic((addr),				\
-		 uint32_t *: __rte_bit_set32,		\
-		 uint64_t *: __rte_bit_set64)(addr, nr)
+#define rte_bit_set(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_set32,				\
+		 volatile uint32_t *: __rte_bit_v_set32,		\
+		 uint64_t *: __rte_bit_set64,				\
+		 volatile uint64_t *: __rte_bit_v_set64)(addr, nr)
 
 /**
  * @warning
@@ -175,10 +181,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_clear(addr, nr)					\
-	_Generic((addr),					\
-		 uint32_t *: __rte_bit_clear32,			\
-		 uint64_t *: __rte_bit_clear64)(addr, nr)
+#define rte_bit_clear(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_clear32,				\
+		 volatile uint32_t *: __rte_bit_v_clear32,		\
+		 uint64_t *: __rte_bit_clear64,				\
+		 volatile uint64_t *: __rte_bit_v_clear64)(addr, nr)
 
 /**
  * @warning
@@ -202,7 +210,9 @@ extern "C" {
 #define rte_bit_assign(addr, nr, value)					\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_assign32,			\
-		 uint64_t *: __rte_bit_assign64)(addr, nr, value)
+		 volatile uint32_t *: __rte_bit_v_assign32,		\
+		 uint64_t *: __rte_bit_assign64,			\
+		 volatile uint64_t *: __rte_bit_v_assign64)(addr, nr, value)
 
 /**
  * @warning
@@ -225,7 +235,9 @@ extern "C" {
 #define rte_bit_flip(addr, nr)						\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_flip32,				\
-		 uint64_t *: __rte_bit_flip64)(addr, nr)
+		 volatile uint32_t *: __rte_bit_v_flip32,		\
+		 uint64_t *: __rte_bit_flip64,				\
+		 volatile uint64_t *: __rte_bit_v_flip64)(addr, nr)
 
 /**
  * @warning
@@ -250,9 +262,13 @@ extern "C" {
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test32,			\
 		 const uint32_t *: __rte_bit_atomic_test32,		\
+		 volatile uint32_t *: __rte_bit_atomic_v_test32,	\
+		 const volatile uint32_t *: __rte_bit_atomic_v_test32,	\
 		 uint64_t *: __rte_bit_atomic_test64,			\
-		 const uint64_t *: __rte_bit_atomic_test64)(addr, nr,	\
-							    memory_order)
+		 const uint64_t *: __rte_bit_atomic_test64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test64,	\
+		 const volatile uint64_t *: __rte_bit_atomic_v_test64) \
+						    (addr, nr, memory_order)
 
 /**
  * @warning
@@ -274,7 +290,10 @@ extern "C" {
 #define rte_bit_atomic_set(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_set32,			\
-		 uint64_t *: __rte_bit_atomic_set64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_set32,		\
+		 uint64_t *: __rte_bit_atomic_set64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_set64)(addr, nr, \
+								memory_order)
 
 /**
  * @warning
@@ -296,7 +315,10 @@ extern "C" {
 #define rte_bit_atomic_clear(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_clear32,			\
-		 uint64_t *: __rte_bit_atomic_clear64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_clear32,	\
+		 uint64_t *: __rte_bit_atomic_clear64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_clear64)(addr, nr, \
+								  memory_order)
 
 /**
  * @warning
@@ -320,8 +342,11 @@ extern "C" {
 #define rte_bit_atomic_assign(addr, nr, value, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_assign32,			\
-		 uint64_t *: __rte_bit_atomic_assign64)(addr, nr, value, \
-							memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_assign32,	\
+		 uint64_t *: __rte_bit_atomic_assign64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_assign64)(addr, nr, \
+								   value, \
+								   memory_order)
 
 /**
  * @warning
@@ -344,7 +369,10 @@ extern "C" {
 #define rte_bit_atomic_flip(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_flip32,			\
-		 uint64_t *: __rte_bit_atomic_flip64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_flip32,	\
+		 uint64_t *: __rte_bit_atomic_flip64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_flip64)(addr, nr, \
+								 memory_order)
 
 /**
  * @warning
@@ -368,8 +396,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_set(addr, nr, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_set32,		\
-		 uint64_t *: __rte_bit_atomic_test_and_set64)(addr, nr,	\
-							      memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_set32, \
+		 uint64_t *: __rte_bit_atomic_test_and_set64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_set64) \
+						    (addr, nr, memory_order)
 
 /**
  * @warning
@@ -393,8 +423,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_clear(addr, nr, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_clear32,		\
-		 uint64_t *: __rte_bit_atomic_test_and_clear64)(addr, nr, \
-								memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_clear32, \
+		 uint64_t *: __rte_bit_atomic_test_and_clear64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_clear64) \
+						       (addr, nr, memory_order)
 
 /**
  * @warning
@@ -421,9 +453,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_assign(addr, nr, value, memory_order)	\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_assign32,	\
-		 uint64_t *: __rte_bit_atomic_test_and_assign64)(addr, nr, \
-								 value, \
-								 memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_assign32, \
+		 uint64_t *: __rte_bit_atomic_test_and_assign64,	\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_assign64) \
+						(addr, nr, value, memory_order)
 
 #define __RTE_GEN_BIT_TEST(family, fun, qualifier, size)		\
 	__rte_experimental						\
@@ -491,93 +524,105 @@ __RTE_GEN_BIT_CLEAR(, clear,, 32)
 __RTE_GEN_BIT_ASSIGN(, assign,, 32)
 __RTE_GEN_BIT_FLIP(, flip,, 32)
 
+__RTE_GEN_BIT_TEST(v_, test, volatile, 32)
+__RTE_GEN_BIT_SET(v_, set, volatile, 32)
+__RTE_GEN_BIT_CLEAR(v_, clear, volatile, 32)
+__RTE_GEN_BIT_ASSIGN(v_, assign, volatile, 32)
+__RTE_GEN_BIT_FLIP(v_, flip, volatile, 32)
+
 __RTE_GEN_BIT_TEST(, test,, 64)
 __RTE_GEN_BIT_SET(, set,, 64)
 __RTE_GEN_BIT_CLEAR(, clear,, 64)
 __RTE_GEN_BIT_ASSIGN(, assign,, 64)
 __RTE_GEN_BIT_FLIP(, flip,, 64)
 
-#define __RTE_GEN_BIT_ATOMIC_TEST(size)					\
+__RTE_GEN_BIT_TEST(v_, test, volatile, 64)
+__RTE_GEN_BIT_SET(v_, set, volatile, 64)
+__RTE_GEN_BIT_CLEAR(v_, clear, volatile, 64)
+__RTE_GEN_BIT_ASSIGN(v_, assign, volatile, 64)
+__RTE_GEN_BIT_FLIP(v_, flip, volatile, 64)
+
+#define __RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test ## size(const uint ## size ## _t *addr,	\
-				      unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## test ## size(const qualifier uint ## size ## _t *addr, \
+					       unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		const RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(const RTE_ATOMIC(uint ## size ## _t) *)addr;	\
+		const qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr = \
+			(const qualifier RTE_ATOMIC(uint ## size ## _t) *)addr;	\
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		return rte_atomic_load_explicit(a_addr, memory_order) & mask; \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_SET(size)					\
+#define __RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_set ## size(uint ## size ## _t *addr,		\
-				     unsigned int nr, int memory_order)	\
+	__rte_bit_atomic_ ## v ## set ## size(qualifier uint ## size ## _t *addr, \
+					      unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_or_explicit(a_addr, mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_CLEAR(size)				\
+#define __RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_clear ## size(uint ## size ## _t *addr,	\
-				       unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## clear ## size(qualifier uint ## size ## _t *addr,	\
+						unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_and_explicit(a_addr, ~mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_FLIP(size)					\
+#define __RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_flip ## size(uint ## size ## _t *addr,		\
-				       unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## flip ## size(qualifier uint ## size ## _t *addr, \
+					       unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_xor_explicit(a_addr, mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_ASSIGN(size)				\
+#define __RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_assign ## size(uint ## size ## _t *addr,	\
-					unsigned int nr, bool value,	\
-					int memory_order)		\
+	__rte_bit_atomic_## v ## assign ## size(qualifier uint ## size ## _t *addr, \
+						unsigned int nr, bool value, \
+						int memory_order)	\
 	{								\
 		if (value)						\
-			__rte_bit_atomic_set ## size(addr, nr, memory_order); \
+			__rte_bit_atomic_ ## v ## set ## size(addr, nr, memory_order); \
 		else							\
-			__rte_bit_atomic_clear ## size(addr, nr,	\
-						       memory_order);	\
+			__rte_bit_atomic_ ## v ## clear ## size(addr, nr, \
+								     memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)				\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size)		\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_set ## size(uint ## size ## _t *addr,	\
-					      unsigned int nr,		\
-					      int memory_order)		\
+	__rte_bit_atomic_ ## v ## test_and_set ## size(qualifier uint ## size ## _t *addr, \
+						       unsigned int nr,	\
+						       int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		uint ## size ## _t prev;				\
 									\
@@ -587,17 +632,17 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
 		return prev & mask;					\
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)			\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size)		\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_clear ## size(uint ## size ## _t *addr, \
-						unsigned int nr,	\
-						int memory_order)	\
+	__rte_bit_atomic_ ## v ## test_and_clear ## size(qualifier uint ## size ## _t *addr, \
+							 unsigned int nr, \
+							 int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		uint ## size ## _t prev;				\
 									\
@@ -607,34 +652,36 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
 		return prev & mask;					\
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)			\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size)	\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_assign ## size(uint ## size ## _t *addr, \
-						 unsigned int nr,	\
-						 bool value,		\
-						 int memory_order)	\
+	__rte_bit_atomic_ ## v ## test_and_assign ## size(qualifier uint ## size ## _t *addr, \
+							  unsigned int nr, \
+							  bool value,	\
+							  int memory_order) \
 	{								\
 		if (value)						\
-			return __rte_bit_atomic_test_and_set ## size(addr, nr, \
-								     memory_order); \
+			return __rte_bit_atomic_ ## v ## test_and_set ## size(addr, nr, memory_order); \
 		else							\
-			return __rte_bit_atomic_test_and_clear ## size(addr, nr, \
-								       memory_order); \
+			return __rte_bit_atomic_ ## v ## test_and_clear ## size(addr, nr, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_OPS(size)			\
-	__RTE_GEN_BIT_ATOMIC_TEST(size)			\
-	__RTE_GEN_BIT_ATOMIC_SET(size)			\
-	__RTE_GEN_BIT_ATOMIC_CLEAR(size)		\
-	__RTE_GEN_BIT_ATOMIC_ASSIGN(size)		\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)		\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)	\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)	\
-	__RTE_GEN_BIT_ATOMIC_FLIP(size)
+#define __RTE_GEN_BIT_ATOMIC_OPS(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)
 
-__RTE_GEN_BIT_ATOMIC_OPS(32)
-__RTE_GEN_BIT_ATOMIC_OPS(64)
+#define __RTE_GEN_BIT_ATOMIC_OPS_SIZE(size) \
+	__RTE_GEN_BIT_ATOMIC_OPS(,, size) \
+	__RTE_GEN_BIT_ATOMIC_OPS(v_, volatile, size)
+
+__RTE_GEN_BIT_ATOMIC_OPS_SIZE(32)
+__RTE_GEN_BIT_ATOMIC_OPS_SIZE(64)
 
 /*------------------------ 32-bit relaxed operations ------------------------*/
 
@@ -1340,120 +1387,178 @@ rte_log2_u64(uint64_t v)
 #undef rte_bit_atomic_test_and_clear
 #undef rte_bit_atomic_test_and_assign
 
-#define __RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, size, arg1_type, arg1_name) \
+#define __RTE_BIT_OVERLOAD_V_2(family, v, fun, c, size, arg1_type, arg1_name) \
 	static inline void						\
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr,		\
-			arg1_type arg1_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name)			\
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name);		\
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2(fun, qualifier, arg1_type, arg1_name)	\
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 32, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 64, arg1_type, arg1_name)
+#define __RTE_BIT_OVERLOAD_SZ_2(family, fun, c, size, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2(family,, fun, c, size, arg1_type,	\
+			       arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2(family, v_, fun, c volatile, size, \
+			       arg1_type, arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name)				\
+#define __RTE_BIT_OVERLOAD_2(family, fun, c, arg1_type, arg1_name)	\
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 32, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 64, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_V_2R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name)				\
 	static inline ret_type						\
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr,		\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
 			arg1_type arg1_name)				\
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name);	\
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2R(fun, qualifier, ret_type, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, size, ret_type, arg1_type, \
+				 arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2R(family, v_, fun, c volatile,		\
+				size, ret_type, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_2R(family, fun, c, ret_type, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 32, ret_type, arg1_type, \
 				 arg1_name)				\
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 64, ret_type, arg1_type, \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 64, ret_type, arg1_type, \
 				 arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name)			\
+#define __RTE_BIT_OVERLOAD_V_3(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name)			\
 	static inline void						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name);	\
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name);	\
 	}
 
-#define __RTE_BIT_OVERLOAD_3(fun, qualifier, arg1_type, arg1_name, arg2_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3(family, fun, c, size, arg1_type, arg1_name, \
+				arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_V_3(family,, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_V_3(family, v_, fun, c volatile, size, arg1_type, \
+			       arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_3(family, fun, c, arg1_type, arg1_name, arg2_type, \
 			     arg2_name)					\
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 32, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 32, arg1_type, arg1_name, \
 				arg2_type, arg2_name)			\
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 64, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 64, arg1_type, arg1_name, \
 				arg2_type, arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name)	\
+#define __RTE_BIT_OVERLOAD_V_3R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name)	\
 	static inline ret_type						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name); \
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name, \
+								arg2_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_3R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name)			\
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name)	\
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name)
+	__RTE_BIT_OVERLOAD_V_3R(family,, fun, c, size, ret_type, \
+				arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_V_3R(family, v_, fun, c volatile, size, \
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name) \
+#define __RTE_BIT_OVERLOAD_3R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 32, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 64, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_V_4(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name, arg3_type,	arg3_name) \
 	static inline void						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name, arg3_type arg3_name)	\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name)			\
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name,	\
-					  arg3_name);		      \
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name,	\
+							 arg3_name);	\
 	}
 
-#define __RTE_BIT_OVERLOAD_4(fun, qualifier, arg1_type, arg1_name, arg2_type, \
-			     arg2_name, arg3_type, arg3_name)		\
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 32, arg1_type, arg1_name, \
+#define __RTE_BIT_OVERLOAD_SZ_4(family, fun, c, size, arg1_type, arg1_name, \
 				arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 64, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name)
-
-#define __RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4(family,, fun, c, size, arg1_type,	\
+			       arg1_name, arg2_type, arg2_name, arg3_type, \
+			       arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4(family, v_, fun, c volatile, size,	\
+			       arg1_type, arg1_name, arg2_type, arg2_name, \
+			       arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4(family, fun, c, arg1_type, arg1_name, arg2_type, \
+			     arg2_name, arg3_type, arg3_name)		\
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 32, arg1_type,		\
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 64, arg1_type,		\
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)
+
+#define __RTE_BIT_OVERLOAD_V_4R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
 	static inline ret_type						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name, arg3_type arg3_name)	\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name)			\
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name, \
-						 arg3_name);		\
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name, \
+								arg2_name, \
+								arg3_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_4R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name, arg3_type, \
 				 arg3_name)				\
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name)
-
-__RTE_BIT_OVERLOAD_2R(test, const, bool, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(set,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(clear,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_3(assign,, unsigned int, nr, bool, value)
-__RTE_BIT_OVERLOAD_2(flip,, unsigned int, nr)
-
-__RTE_BIT_OVERLOAD_3R(atomic_test, const, bool, unsigned int, nr,
+	__RTE_BIT_OVERLOAD_V_4R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4R(family, v_, fun, c volatile, size,	\
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name, arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name, arg3_type, arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 32, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name)			\
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 64, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name)
+
+__RTE_BIT_OVERLOAD_2R(, test, const, bool, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, set,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, clear,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_3(, assign,, unsigned int, nr, bool, value)
+__RTE_BIT_OVERLOAD_2(, flip,, unsigned int, nr)
+
+__RTE_BIT_OVERLOAD_3R(atomic_, test, const, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_set,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_clear,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_4(atomic_assign,, unsigned int, nr, bool, value,
+__RTE_BIT_OVERLOAD_3(atomic_, set,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3(atomic_, clear,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_4(atomic_, assign,, unsigned int, nr, bool, value,
 		     int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_flip,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_set,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3(atomic_, flip,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_set,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_clear,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_clear,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_4R(atomic_test_and_assign,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_4R(atomic_, test_and_assign,, bool, unsigned int, nr,
 		      bool, value, int, memory_order)
 
 #endif
-- 
2.34.1