From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
From: =?UTF-8?q?Mattias=20R=C3=B6nnblom?= <mattias.ronnblom@ericsson.com>
To: <dev@dpdk.org>
CC: <hofors@lysator.liu.se>, Heng Wang <heng.wang@ericsson.com>, "Stephen
 Hemminger" <stephen@networkplumber.org>, Joyce Kong <joyce.kong@arm.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>,
 =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>,
 =?UTF-8?q?Mattias=20R=C3=B6nnblom?= <mattias.ronnblom@ericsson.com>
Subject: [PATCH v2 5/5] eal: extend bitops to handle volatile pointers
Date: Fri, 9 Aug 2024 11:58:29 +0200
Message-ID: <20240809095829.589396-6-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240809095829.589396-1-mattias.ronnblom@ericsson.com>
References: <20240809090439.589295-2-mattias.ronnblom@ericsson.com>
 <20240809095829.589396-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Have rte_bit_[test|set|clear|assign|flip]() and rte_bit_atomic_*()
accept volatile-qualified pointers.

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

---

PATCH v2:
 * Actually run the test_bit_atomic_v_access*() test functions.
---
 app/test/test_bitops.c       |  32 ++-
 lib/eal/include/rte_bitops.h | 427 ++++++++++++++++++++++-------------
 2 files changed, 291 insertions(+), 168 deletions(-)

diff --git a/app/test/test_bitops.c b/app/test/test_bitops.c
index b80216a0a1..10e87f6776 100644
--- a/app/test/test_bitops.c
+++ b/app/test/test_bitops.c
@@ -14,13 +14,13 @@
 #include "test.h"
 
 #define GEN_TEST_BIT_ACCESS(test_name, set_fun, clear_fun, assign_fun,	\
-			    flip_fun, test_fun, size)			\
+			    flip_fun, test_fun, size, mod)		\
 	static int							\
 	test_name(void)							\
 	{								\
 		uint ## size ## _t reference = (uint ## size ## _t)rte_rand(); \
 		unsigned int bit_nr;					\
-		uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
+		mod uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
 									\
 		for (bit_nr = 0; bit_nr < size; bit_nr++) {		\
 			bool reference_bit = (reference >> bit_nr) & 1;	\
@@ -41,7 +41,7 @@
 				    "Bit %d had unflipped value", bit_nr); \
 			flip_fun(&word, bit_nr);			\
 									\
-			const uint ## size ## _t *const_ptr = &word;	\
+			const mod uint ## size ## _t *const_ptr = &word; \
 			TEST_ASSERT(test_fun(const_ptr, bit_nr) ==	\
 				    reference_bit,			\
 				    "Bit %d had unexpected value", bit_nr); \
@@ -59,10 +59,16 @@
 	}
 
 GEN_TEST_BIT_ACCESS(test_bit_access32, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access32, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access64, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64, volatile)
 
 #define bit_atomic_set(addr, nr)				\
 	rte_bit_atomic_set(addr, nr, rte_memory_order_relaxed)
@@ -81,11 +87,19 @@ GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access32, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 32)
+		    bit_atomic_flip, bit_atomic_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access64, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 64)
+		    bit_atomic_flip, bit_atomic_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access32, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access64, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 64, volatile)
 
 #define PARALLEL_TEST_RUNTIME 0.25
 
@@ -480,8 +494,12 @@ static struct unit_test_suite test_suite = {
 		TEST_CASE(test_bit_access64),
 		TEST_CASE(test_bit_access32),
 		TEST_CASE(test_bit_access64),
+		TEST_CASE(test_bit_v_access32),
+		TEST_CASE(test_bit_v_access64),
 		TEST_CASE(test_bit_atomic_access32),
 		TEST_CASE(test_bit_atomic_access64),
+		TEST_CASE(test_bit_atomic_v_access32),
+		TEST_CASE(test_bit_atomic_v_access64),
 		TEST_CASE(test_bit_atomic_parallel_assign32),
 		TEST_CASE(test_bit_atomic_parallel_assign64),
 		TEST_CASE(test_bit_atomic_parallel_test_and_modify32),
diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 4d878099ed..1355949fb6 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -127,12 +127,16 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_test(addr, nr)					\
-	_Generic((addr),					\
-		uint32_t *: __rte_bit_test32,			\
-		const uint32_t *: __rte_bit_test32,		\
-		uint64_t *: __rte_bit_test64,			\
-		const uint64_t *: __rte_bit_test64)(addr, nr)
+#define rte_bit_test(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_test32,				\
+		 const uint32_t *: __rte_bit_test32,			\
+		 volatile uint32_t *: __rte_bit_v_test32,		\
+		 const volatile uint32_t *: __rte_bit_v_test32,		\
+		 uint64_t *: __rte_bit_test64,				\
+		 const uint64_t *: __rte_bit_test64,			\
+		 volatile uint64_t *: __rte_bit_v_test64,		\
+		 const volatile uint64_t *: __rte_bit_v_test64)(addr, nr)
 
 /**
  * @warning
@@ -152,10 +156,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_set(addr, nr)				\
-	_Generic((addr),				\
-		 uint32_t *: __rte_bit_set32,		\
-		 uint64_t *: __rte_bit_set64)(addr, nr)
+#define rte_bit_set(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_set32,				\
+		 volatile uint32_t *: __rte_bit_v_set32,		\
+		 uint64_t *: __rte_bit_set64,				\
+		 volatile uint64_t *: __rte_bit_v_set64)(addr, nr)
 
 /**
  * @warning
@@ -175,10 +181,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_clear(addr, nr)					\
-	_Generic((addr),					\
-		 uint32_t *: __rte_bit_clear32,			\
-		 uint64_t *: __rte_bit_clear64)(addr, nr)
+#define rte_bit_clear(addr, nr)						\
+	_Generic((addr),						\
+		 uint32_t *: __rte_bit_clear32,				\
+		 volatile uint32_t *: __rte_bit_v_clear32,		\
+		 uint64_t *: __rte_bit_clear64,				\
+		 volatile uint64_t *: __rte_bit_v_clear64)(addr, nr)
 
 /**
  * @warning
@@ -202,7 +210,9 @@ extern "C" {
 #define rte_bit_assign(addr, nr, value)					\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_assign32,			\
-		 uint64_t *: __rte_bit_assign64)(addr, nr, value)
+		 volatile uint32_t *: __rte_bit_v_assign32,		\
+		 uint64_t *: __rte_bit_assign64,			\
+		 volatile uint64_t *: __rte_bit_v_assign64)(addr, nr, value)
 
 /**
  * @warning
@@ -225,7 +235,9 @@ extern "C" {
 #define rte_bit_flip(addr, nr)						\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_flip32,				\
-		 uint64_t *: __rte_bit_flip64)(addr, nr)
+		 volatile uint32_t *: __rte_bit_v_flip32,		\
+		 uint64_t *: __rte_bit_flip64,				\
+		 volatile uint64_t *: __rte_bit_v_flip64)(addr, nr)
 
 /**
  * @warning
@@ -250,9 +262,13 @@ extern "C" {
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test32,			\
 		 const uint32_t *: __rte_bit_atomic_test32,		\
+		 volatile uint32_t *: __rte_bit_atomic_v_test32,	\
+		 const volatile uint32_t *: __rte_bit_atomic_v_test32,	\
 		 uint64_t *: __rte_bit_atomic_test64,			\
-		 const uint64_t *: __rte_bit_atomic_test64)(addr, nr,	\
-							    memory_order)
+		 const uint64_t *: __rte_bit_atomic_test64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test64,	\
+		 const volatile uint64_t *: __rte_bit_atomic_v_test64) \
+						    (addr, nr, memory_order)
 
 /**
  * @warning
@@ -274,7 +290,10 @@ extern "C" {
 #define rte_bit_atomic_set(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_set32,			\
-		 uint64_t *: __rte_bit_atomic_set64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_set32,		\
+		 uint64_t *: __rte_bit_atomic_set64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_set64)(addr, nr, \
+								memory_order)
 
 /**
  * @warning
@@ -296,7 +315,10 @@ extern "C" {
 #define rte_bit_atomic_clear(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_clear32,			\
-		 uint64_t *: __rte_bit_atomic_clear64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_clear32,	\
+		 uint64_t *: __rte_bit_atomic_clear64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_clear64)(addr, nr, \
+								  memory_order)
 
 /**
  * @warning
@@ -320,8 +342,11 @@ extern "C" {
 #define rte_bit_atomic_assign(addr, nr, value, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_assign32,			\
-		 uint64_t *: __rte_bit_atomic_assign64)(addr, nr, value, \
-							memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_assign32,	\
+		 uint64_t *: __rte_bit_atomic_assign64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_assign64)(addr, nr, \
+								   value, \
+								   memory_order)
 
 /**
  * @warning
@@ -344,7 +369,10 @@ extern "C" {
 #define rte_bit_atomic_flip(addr, nr, memory_order)			\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_flip32,			\
-		 uint64_t *: __rte_bit_atomic_flip64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_flip32,	\
+		 uint64_t *: __rte_bit_atomic_flip64,			\
+		 volatile uint64_t *: __rte_bit_atomic_v_flip64)(addr, nr, \
+								 memory_order)
 
 /**
  * @warning
@@ -368,8 +396,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_set(addr, nr, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_set32,		\
-		 uint64_t *: __rte_bit_atomic_test_and_set64)(addr, nr,	\
-							      memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_set32, \
+		 uint64_t *: __rte_bit_atomic_test_and_set64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_set64) \
+						    (addr, nr, memory_order)
 
 /**
  * @warning
@@ -393,8 +423,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_clear(addr, nr, memory_order)		\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_clear32,		\
-		 uint64_t *: __rte_bit_atomic_test_and_clear64)(addr, nr, \
-								memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_clear32, \
+		 uint64_t *: __rte_bit_atomic_test_and_clear64,		\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_clear64) \
+						       (addr, nr, memory_order)
 
 /**
  * @warning
@@ -421,9 +453,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_assign(addr, nr, value, memory_order)	\
 	_Generic((addr),						\
 		 uint32_t *: __rte_bit_atomic_test_and_assign32,	\
-		 uint64_t *: __rte_bit_atomic_test_and_assign64)(addr, nr, \
-								 value, \
-								 memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_assign32, \
+		 uint64_t *: __rte_bit_atomic_test_and_assign64,	\
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_assign64) \
+						(addr, nr, value, memory_order)
 
 #define __RTE_GEN_BIT_TEST(family, fun, qualifier, size)		\
 	__rte_experimental						\
@@ -491,93 +524,105 @@ __RTE_GEN_BIT_CLEAR(, clear,, 32)
 __RTE_GEN_BIT_ASSIGN(, assign,, 32)
 __RTE_GEN_BIT_FLIP(, flip,, 32)
 
+__RTE_GEN_BIT_TEST(v_, test, volatile, 32)
+__RTE_GEN_BIT_SET(v_, set, volatile, 32)
+__RTE_GEN_BIT_CLEAR(v_, clear, volatile, 32)
+__RTE_GEN_BIT_ASSIGN(v_, assign, volatile, 32)
+__RTE_GEN_BIT_FLIP(v_, flip, volatile, 32)
+
 __RTE_GEN_BIT_TEST(, test,, 64)
 __RTE_GEN_BIT_SET(, set,, 64)
 __RTE_GEN_BIT_CLEAR(, clear,, 64)
 __RTE_GEN_BIT_ASSIGN(, assign,, 64)
 __RTE_GEN_BIT_FLIP(, flip,, 64)
 
-#define __RTE_GEN_BIT_ATOMIC_TEST(size)					\
+__RTE_GEN_BIT_TEST(v_, test, volatile, 64)
+__RTE_GEN_BIT_SET(v_, set, volatile, 64)
+__RTE_GEN_BIT_CLEAR(v_, clear, volatile, 64)
+__RTE_GEN_BIT_ASSIGN(v_, assign, volatile, 64)
+__RTE_GEN_BIT_FLIP(v_, flip, volatile, 64)
+
+#define __RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test ## size(const uint ## size ## _t *addr,	\
-				      unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## test ## size(const qualifier uint ## size ## _t *addr, \
+					       unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		const RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(const RTE_ATOMIC(uint ## size ## _t) *)addr;	\
+		const qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr = \
+			(const qualifier RTE_ATOMIC(uint ## size ## _t) *)addr;	\
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		return rte_atomic_load_explicit(a_addr, memory_order) & mask; \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_SET(size)					\
+#define __RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_set ## size(uint ## size ## _t *addr,		\
-				     unsigned int nr, int memory_order)	\
+	__rte_bit_atomic_ ## v ## set ## size(qualifier uint ## size ## _t *addr, \
+					      unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_or_explicit(a_addr, mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_CLEAR(size)				\
+#define __RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_clear ## size(uint ## size ## _t *addr,	\
-				       unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## clear ## size(qualifier uint ## size ## _t *addr,	\
+						unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_and_explicit(a_addr, ~mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_FLIP(size)					\
+#define __RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_flip ## size(uint ## size ## _t *addr,		\
-				       unsigned int nr, int memory_order) \
+	__rte_bit_atomic_ ## v ## flip ## size(qualifier uint ## size ## _t *addr, \
+					       unsigned int nr, int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		rte_atomic_fetch_xor_explicit(a_addr, mask, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_ASSIGN(size)				\
+#define __RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)			\
 	__rte_experimental						\
 	static inline void						\
-	__rte_bit_atomic_assign ## size(uint ## size ## _t *addr,	\
-					unsigned int nr, bool value,	\
-					int memory_order)		\
+	__rte_bit_atomic_## v ## assign ## size(qualifier uint ## size ## _t *addr, \
+						unsigned int nr, bool value, \
+						int memory_order)	\
 	{								\
 		if (value)						\
-			__rte_bit_atomic_set ## size(addr, nr, memory_order); \
+			__rte_bit_atomic_ ## v ## set ## size(addr, nr, memory_order); \
 		else							\
-			__rte_bit_atomic_clear ## size(addr, nr,	\
-						       memory_order);	\
+			__rte_bit_atomic_ ## v ## clear ## size(addr, nr, \
+								     memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)				\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size)		\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_set ## size(uint ## size ## _t *addr,	\
-					      unsigned int nr,		\
-					      int memory_order)		\
+	__rte_bit_atomic_ ## v ## test_and_set ## size(qualifier uint ## size ## _t *addr, \
+						       unsigned int nr,	\
+						       int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		uint ## size ## _t prev;				\
 									\
@@ -587,17 +632,17 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
 		return prev & mask;					\
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)			\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size)		\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_clear ## size(uint ## size ## _t *addr, \
-						unsigned int nr,	\
-						int memory_order)	\
+	__rte_bit_atomic_ ## v ## test_and_clear ## size(qualifier uint ## size ## _t *addr, \
+							 unsigned int nr, \
+							 int memory_order) \
 	{								\
 		RTE_ASSERT(nr < size);					\
 									\
-		RTE_ATOMIC(uint ## size ## _t) *a_addr =		\
-			(RTE_ATOMIC(uint ## size ## _t) *)addr;		\
+		qualifier RTE_ATOMIC(uint ## size ## _t) *a_addr =	\
+			(qualifier RTE_ATOMIC(uint ## size ## _t) *)addr; \
 		uint ## size ## _t mask = (uint ## size ## _t)1 << nr;	\
 		uint ## size ## _t prev;				\
 									\
@@ -607,34 +652,36 @@ __RTE_GEN_BIT_FLIP(, flip,, 64)
 		return prev & mask;					\
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)			\
+#define __RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size)	\
 	__rte_experimental						\
 	static inline bool						\
-	__rte_bit_atomic_test_and_assign ## size(uint ## size ## _t *addr, \
-						 unsigned int nr,	\
-						 bool value,		\
-						 int memory_order)	\
+	__rte_bit_atomic_ ## v ## test_and_assign ## size(qualifier uint ## size ## _t *addr, \
+							  unsigned int nr, \
+							  bool value,	\
+							  int memory_order) \
 	{								\
 		if (value)						\
-			return __rte_bit_atomic_test_and_set ## size(addr, nr, \
-								     memory_order); \
+			return __rte_bit_atomic_ ## v ## test_and_set ## size(addr, nr, memory_order); \
 		else							\
-			return __rte_bit_atomic_test_and_clear ## size(addr, nr, \
-								       memory_order); \
+			return __rte_bit_atomic_ ## v ## test_and_clear ## size(addr, nr, memory_order); \
 	}
 
-#define __RTE_GEN_BIT_ATOMIC_OPS(size)			\
-	__RTE_GEN_BIT_ATOMIC_TEST(size)			\
-	__RTE_GEN_BIT_ATOMIC_SET(size)			\
-	__RTE_GEN_BIT_ATOMIC_CLEAR(size)		\
-	__RTE_GEN_BIT_ATOMIC_ASSIGN(size)		\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_SET(size)		\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(size)	\
-	__RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(size)	\
-	__RTE_GEN_BIT_ATOMIC_FLIP(size)
+#define __RTE_GEN_BIT_ATOMIC_OPS(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_TEST(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_SET(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_CLEAR(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_ASSIGN(v, qualifier, size)	\
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_SET(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_CLEAR(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_TEST_AND_ASSIGN(v, qualifier, size) \
+	__RTE_GEN_BIT_ATOMIC_FLIP(v, qualifier, size)
 
-__RTE_GEN_BIT_ATOMIC_OPS(32)
-__RTE_GEN_BIT_ATOMIC_OPS(64)
+#define __RTE_GEN_BIT_ATOMIC_OPS_SIZE(size) \
+	__RTE_GEN_BIT_ATOMIC_OPS(,, size) \
+	__RTE_GEN_BIT_ATOMIC_OPS(v_, volatile, size)
+
+__RTE_GEN_BIT_ATOMIC_OPS_SIZE(32)
+__RTE_GEN_BIT_ATOMIC_OPS_SIZE(64)
 
 /*------------------------ 32-bit relaxed operations ------------------------*/
 
@@ -1340,120 +1387,178 @@ rte_log2_u64(uint64_t v)
 #undef rte_bit_atomic_test_and_clear
 #undef rte_bit_atomic_test_and_assign
 
-#define __RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, size, arg1_type, arg1_name) \
+#define __RTE_BIT_OVERLOAD_V_2(family, v, fun, c, size, arg1_type, arg1_name) \
 	static inline void						\
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr,		\
-			arg1_type arg1_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name)			\
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name);		\
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2(fun, qualifier, arg1_type, arg1_name)	\
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 32, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 64, arg1_type, arg1_name)
+#define __RTE_BIT_OVERLOAD_SZ_2(family, fun, c, size, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2(family,, fun, c, size, arg1_type,	\
+			       arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2(family, v_, fun, c volatile, size, \
+			       arg1_type, arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name)				\
+#define __RTE_BIT_OVERLOAD_2(family, fun, c, arg1_type, arg1_name)	\
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 32, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 64, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_V_2R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name)				\
 	static inline ret_type						\
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr,		\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
 			arg1_type arg1_name)				\
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name);	\
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2R(fun, qualifier, ret_type, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, size, ret_type, arg1_type, \
+				 arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name)				\
+	__RTE_BIT_OVERLOAD_V_2R(family, v_, fun, c volatile,		\
+				size, ret_type, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_2R(family, fun, c, ret_type, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 32, ret_type, arg1_type, \
 				 arg1_name)				\
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 64, ret_type, arg1_type, \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 64, ret_type, arg1_type, \
 				 arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name)			\
+#define __RTE_BIT_OVERLOAD_V_3(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name)			\
 	static inline void						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name);	\
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name);	\
 	}
 
-#define __RTE_BIT_OVERLOAD_3(fun, qualifier, arg1_type, arg1_name, arg2_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3(family, fun, c, size, arg1_type, arg1_name, \
+				arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_V_3(family,, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_V_3(family, v_, fun, c volatile, size, arg1_type, \
+			       arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_3(family, fun, c, arg1_type, arg1_name, arg2_type, \
 			     arg2_name)					\
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 32, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 32, arg1_type, arg1_name, \
 				arg2_type, arg2_name)			\
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 64, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 64, arg1_type, arg1_name, \
 				arg2_type, arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name)	\
+#define __RTE_BIT_OVERLOAD_V_3R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name)	\
 	static inline ret_type						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name)				\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name); \
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name, \
+								arg2_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_3R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name)			\
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name)	\
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name)
+	__RTE_BIT_OVERLOAD_V_3R(family,, fun, c, size, ret_type, \
+				arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_V_3R(family, v_, fun, c volatile, size, \
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name) \
+#define __RTE_BIT_OVERLOAD_3R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name)			\
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 32, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 64, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_V_4(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name, arg3_type, arg3_name) \
 	static inline void						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name, arg3_type arg3_name)	\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name)			\
 	{								\
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name,	\
-					  arg3_name);		      \
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name,	\
+							 arg3_name);	\
 	}
 
-#define __RTE_BIT_OVERLOAD_4(fun, qualifier, arg1_type, arg1_name, arg2_type, \
-			     arg2_name, arg3_type, arg3_name)		\
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 32, arg1_type, arg1_name, \
+#define __RTE_BIT_OVERLOAD_SZ_4(family, fun, c, size, arg1_type, arg1_name, \
 				arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 64, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name)
-
-#define __RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4(family,, fun, c, size, arg1_type,	\
+			       arg1_name, arg2_type, arg2_name, arg3_type, \
+			       arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4(family, v_, fun, c volatile, size,	\
+			       arg1_type, arg1_name, arg2_type, arg2_name, \
+			       arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4(family, fun, c, arg1_type, arg1_name, arg2_type, \
+			     arg2_name, arg3_type, arg3_name)		\
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 32, arg1_type,		\
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 64, arg1_type,		\
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)
+
+#define __RTE_BIT_OVERLOAD_V_4R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
 	static inline ret_type						\
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name,	\
-			arg2_type arg2_name, arg3_type arg3_name)	\
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr,		\
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name)			\
 	{								\
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name, \
-						 arg3_name);		\
+		return __rte_bit_ ## family ## v ## fun ## size(addr,	\
+								arg1_name, \
+								arg2_name, \
+								arg3_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_4R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name, arg3_type, \
 				 arg3_name)				\
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name)
-
-__RTE_BIT_OVERLOAD_2R(test, const, bool, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(set,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(clear,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_3(assign,, unsigned int, nr, bool, value)
-__RTE_BIT_OVERLOAD_2(flip,, unsigned int, nr)
-
-__RTE_BIT_OVERLOAD_3R(atomic_test, const, bool, unsigned int, nr,
+	__RTE_BIT_OVERLOAD_V_4R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)				\
+	__RTE_BIT_OVERLOAD_V_4R(family, v_, fun, c volatile, size,	\
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name, arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name, arg3_type, arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 32, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name)			\
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 64, ret_type,		\
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name)
+
+__RTE_BIT_OVERLOAD_2R(, test, const, bool, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, set,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, clear,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_3(, assign,, unsigned int, nr, bool, value)
+__RTE_BIT_OVERLOAD_2(, flip,, unsigned int, nr)
+
+__RTE_BIT_OVERLOAD_3R(atomic_, test, const, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_set,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_clear,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_4(atomic_assign,, unsigned int, nr, bool, value,
+__RTE_BIT_OVERLOAD_3(atomic_, set,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3(atomic_, clear,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_4(atomic_, assign,, unsigned int, nr, bool, value,
 		     int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_flip,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_set,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3(atomic_, flip,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_set,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_clear,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_clear,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_4R(atomic_test_and_assign,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_4R(atomic_, test_and_assign,, bool, unsigned int, nr,
 		      bool, value, int, memory_order)
 
 #endif
-- 
2.34.1