From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mattias Rönnblom
To: 
CC: , Heng Wang , "Stephen Hemminger" , Tyler Retzlaff ,
 Morten Brørup , Jack Bond-Preston , David Marchand ,
 Chengwen Feng , Mattias Rönnblom
Subject: [PATCH v8 6/6] eal: extend bitops to handle volatile pointers
Date: Tue, 17 Sep 2024 12:48:11 +0200
Message-ID: <20240917104811.723863-7-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240917104811.723863-1-mattias.ronnblom@ericsson.com>
References: <20240917093646.723777-2-mattias.ronnblom@ericsson.com>
 <20240917104811.723863-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

Have rte_bit_[test|set|clear|assign|flip]() and rte_bit_atomic_*()
handle volatile-marked pointers.

Signed-off-by: Mattias Rönnblom
Acked-by: Morten Brørup
Acked-by: Jack Bond-Preston

--

PATCH v3:
 * Updated to reflect removed 'fun' parameter in __RTE_GEN_BIT_*()
   (Jack Bond-Preston).

PATCH v2:
 * Actually run the test_bit_atomic_v_access*() test functions.
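Editor's note (not part of the patch): a minimal usage sketch of what this
change enables, namely passing volatile-qualified words directly to the
generic bit operations. The register name, bit layout and helper function
below are made up for illustration; only the rte_bit_test()/rte_bit_set()
calls are the <rte_bitops.h> API touched by this patch.

#include <stdint.h>
#include <rte_bitops.h>

/* Hypothetical MMIO-style control/status word. Before this patch, the
 * volatile qualifier would not match any association in the _Generic()
 * lists of rte_bit_test()/rte_bit_set(); with it, volatile-qualified
 * uint32_t/uint64_t pointers dispatch to the new __rte_bit_v_*() variants.
 */
static void
enable_when_ready(volatile uint32_t *csr)
{
	while (!rte_bit_test(csr, 0))	/* bit 0: assumed "ready" flag */
		;
	rte_bit_set(csr, 1);		/* bit 1: assumed "enable" flag */
}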
---
 app/test/test_bitops.c       |  32 +++-
 lib/eal/include/rte_bitops.h | 301 +++++++++++++++++++++++------------
 2 files changed, 222 insertions(+), 111 deletions(-)

diff --git a/app/test/test_bitops.c b/app/test/test_bitops.c
index b80216a0a1..10e87f6776 100644
--- a/app/test/test_bitops.c
+++ b/app/test/test_bitops.c
@@ -14,13 +14,13 @@
 #include "test.h"
 
 #define GEN_TEST_BIT_ACCESS(test_name, set_fun, clear_fun, assign_fun, \
-			    flip_fun, test_fun, size) \
+			    flip_fun, test_fun, size, mod) \
 	static int \
 	test_name(void) \
 	{ \
 		uint ## size ## _t reference = (uint ## size ## _t)rte_rand(); \
 		unsigned int bit_nr; \
-		uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
+		mod uint ## size ## _t word = (uint ## size ## _t)rte_rand(); \
 		\
 		for (bit_nr = 0; bit_nr < size; bit_nr++) { \
 			bool reference_bit = (reference >> bit_nr) & 1; \
@@ -41,7 +41,7 @@
 				    "Bit %d had unflipped value", bit_nr); \
 			flip_fun(&word, bit_nr); \
 			\
-			const uint ## size ## _t *const_ptr = &word; \
+			const mod uint ## size ## _t *const_ptr = &word; \
 			TEST_ASSERT(test_fun(const_ptr, bit_nr) == \
 				    reference_bit, \
 				    "Bit %d had unexpected value", bit_nr); \
@@ -59,10 +59,16 @@
 }
 
 GEN_TEST_BIT_ACCESS(test_bit_access32, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
-		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64)
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access32, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_v_access64, rte_bit_set, rte_bit_clear,
+		    rte_bit_assign, rte_bit_flip, rte_bit_test, 64, volatile)
 
 #define bit_atomic_set(addr, nr) \
 	rte_bit_atomic_set(addr, nr, rte_memory_order_relaxed)
@@ -81,11 +87,19 @@ GEN_TEST_BIT_ACCESS(test_bit_access64, rte_bit_set, rte_bit_clear,
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access32, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 32)
+		    bit_atomic_flip, bit_atomic_test, 32,)
 
 GEN_TEST_BIT_ACCESS(test_bit_atomic_access64, bit_atomic_set,
 		    bit_atomic_clear, bit_atomic_assign,
-		    bit_atomic_flip, bit_atomic_test, 64)
+		    bit_atomic_flip, bit_atomic_test, 64,)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access32, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 32, volatile)
+
+GEN_TEST_BIT_ACCESS(test_bit_atomic_v_access64, bit_atomic_set,
+		    bit_atomic_clear, bit_atomic_assign,
+		    bit_atomic_flip, bit_atomic_test, 64, volatile)
 
 #define PARALLEL_TEST_RUNTIME 0.25
 
@@ -480,8 +494,12 @@ static struct unit_test_suite test_suite = {
 		TEST_CASE(test_bit_access64),
 		TEST_CASE(test_bit_access32),
 		TEST_CASE(test_bit_access64),
+		TEST_CASE(test_bit_v_access32),
+		TEST_CASE(test_bit_v_access64),
 		TEST_CASE(test_bit_atomic_access32),
 		TEST_CASE(test_bit_atomic_access64),
+		TEST_CASE(test_bit_atomic_v_access32),
+		TEST_CASE(test_bit_atomic_v_access64),
 		TEST_CASE(test_bit_atomic_parallel_assign32),
 		TEST_CASE(test_bit_atomic_parallel_assign64),
 		TEST_CASE(test_bit_atomic_parallel_test_and_modify32),
diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 3ad6795fd1..d7a07c4099 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -127,12 +127,16 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_test(addr, nr) \
-	_Generic((addr), \
-		 uint32_t *: __rte_bit_test32, \
-		 const uint32_t *: __rte_bit_test32, \
-		 uint64_t *: __rte_bit_test64, \
-		 const uint64_t *: __rte_bit_test64)(addr, nr)
+#define rte_bit_test(addr, nr) \
+	_Generic((addr), \
+		 uint32_t *: __rte_bit_test32, \
+		 const uint32_t *: __rte_bit_test32, \
+		 volatile uint32_t *: __rte_bit_v_test32, \
+		 const volatile uint32_t *: __rte_bit_v_test32, \
+		 uint64_t *: __rte_bit_test64, \
+		 const uint64_t *: __rte_bit_test64, \
+		 volatile uint64_t *: __rte_bit_v_test64, \
+		 const volatile uint64_t *: __rte_bit_v_test64)(addr, nr)
 
 /**
  * @warning
@@ -152,10 +156,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_set(addr, nr) \
-	_Generic((addr), \
-		 uint32_t *: __rte_bit_set32, \
-		 uint64_t *: __rte_bit_set64)(addr, nr)
+#define rte_bit_set(addr, nr) \
+	_Generic((addr), \
+		 uint32_t *: __rte_bit_set32, \
+		 volatile uint32_t *: __rte_bit_v_set32, \
+		 uint64_t *: __rte_bit_set64, \
+		 volatile uint64_t *: __rte_bit_v_set64)(addr, nr)
 
 /**
  * @warning
@@ -175,10 +181,12 @@ extern "C" {
  * @param nr
  *   The index of the bit.
  */
-#define rte_bit_clear(addr, nr) \
-	_Generic((addr), \
-		 uint32_t *: __rte_bit_clear32, \
-		 uint64_t *: __rte_bit_clear64)(addr, nr)
+#define rte_bit_clear(addr, nr) \
+	_Generic((addr), \
+		 uint32_t *: __rte_bit_clear32, \
+		 volatile uint32_t *: __rte_bit_v_clear32, \
+		 uint64_t *: __rte_bit_clear64, \
+		 volatile uint64_t *: __rte_bit_v_clear64)(addr, nr)
 
 /**
  * @warning
@@ -202,7 +210,9 @@ extern "C" {
 #define rte_bit_assign(addr, nr, value) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_assign32, \
-		 uint64_t *: __rte_bit_assign64)(addr, nr, value)
+		 volatile uint32_t *: __rte_bit_v_assign32, \
+		 uint64_t *: __rte_bit_assign64, \
+		 volatile uint64_t *: __rte_bit_v_assign64)(addr, nr, value)
 
 /**
  * @warning
@@ -225,7 +235,9 @@ extern "C" {
 #define rte_bit_flip(addr, nr) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_flip32, \
-		 uint64_t *: __rte_bit_flip64)(addr, nr)
+		 volatile uint32_t *: __rte_bit_v_flip32, \
+		 uint64_t *: __rte_bit_flip64, \
+		 volatile uint64_t *: __rte_bit_v_flip64)(addr, nr)
 
 /**
  * @warning
@@ -250,9 +262,13 @@ extern "C" {
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_test32, \
 		 const uint32_t *: __rte_bit_atomic_test32, \
+		 volatile uint32_t *: __rte_bit_atomic_v_test32, \
+		 const volatile uint32_t *: __rte_bit_atomic_v_test32, \
 		 uint64_t *: __rte_bit_atomic_test64, \
-		 const uint64_t *: __rte_bit_atomic_test64)(addr, nr, \
-							    memory_order)
+		 const uint64_t *: __rte_bit_atomic_test64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_test64, \
+		 const volatile uint64_t *: __rte_bit_atomic_v_test64) \
+		(addr, nr, memory_order)
 
 /**
  * @warning
@@ -274,7 +290,10 @@ extern "C" {
 #define rte_bit_atomic_set(addr, nr, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_set32, \
-		 uint64_t *: __rte_bit_atomic_set64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_set32, \
+		 uint64_t *: __rte_bit_atomic_set64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_set64)(addr, nr, \
+								memory_order)
 
 /**
  * @warning
@@ -296,7 +315,10 @@ extern "C" {
 #define rte_bit_atomic_clear(addr, nr, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_clear32, \
-		 uint64_t *: __rte_bit_atomic_clear64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_clear32, \
+		 uint64_t *: __rte_bit_atomic_clear64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_clear64)(addr, nr, \
+								  memory_order)
 
 /**
  * @warning
@@ -320,8 +342,11 @@ extern "C" {
 #define rte_bit_atomic_assign(addr, nr, value, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_assign32, \
-		 uint64_t *: __rte_bit_atomic_assign64)(addr, nr, value, \
-							memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_assign32, \
+		 uint64_t *: __rte_bit_atomic_assign64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_assign64)(addr, nr, \
+								   value, \
+								   memory_order)
 
 /**
  * @warning
@@ -344,7 +369,10 @@ extern "C" {
 #define rte_bit_atomic_flip(addr, nr, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_flip32, \
-		 uint64_t *: __rte_bit_atomic_flip64)(addr, nr, memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_flip32, \
+		 uint64_t *: __rte_bit_atomic_flip64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_flip64)(addr, nr, \
+								 memory_order)
 
 /**
  * @warning
@@ -368,8 +396,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_set(addr, nr, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_test_and_set32, \
-		 uint64_t *: __rte_bit_atomic_test_and_set64)(addr, nr, \
-							      memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_set32, \
+		 uint64_t *: __rte_bit_atomic_test_and_set64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_set64) \
+		(addr, nr, memory_order)
 
 /**
  * @warning
@@ -393,8 +423,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_clear(addr, nr, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_test_and_clear32, \
-		 uint64_t *: __rte_bit_atomic_test_and_clear64)(addr, nr, \
-								memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_clear32, \
+		 uint64_t *: __rte_bit_atomic_test_and_clear64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_clear64) \
+		(addr, nr, memory_order)
 
 /**
  * @warning
@@ -421,9 +453,10 @@ extern "C" {
 #define rte_bit_atomic_test_and_assign(addr, nr, value, memory_order) \
 	_Generic((addr), \
 		 uint32_t *: __rte_bit_atomic_test_and_assign32, \
-		 uint64_t *: __rte_bit_atomic_test_and_assign64)(addr, nr, \
-								 value, \
-								 memory_order)
+		 volatile uint32_t *: __rte_bit_atomic_v_test_and_assign32, \
+		 uint64_t *: __rte_bit_atomic_test_and_assign64, \
+		 volatile uint64_t *: __rte_bit_atomic_v_test_and_assign64) \
+		(addr, nr, value, memory_order)
 
 #define __RTE_GEN_BIT_TEST(variant, qualifier, size) \
 	__rte_experimental \
@@ -493,7 +526,8 @@ extern "C" {
 	__RTE_GEN_BIT_FLIP(v, qualifier, size)
 
 #define __RTE_GEN_BIT_OPS_SIZE(size) \
-	__RTE_GEN_BIT_OPS(,, size)
+	__RTE_GEN_BIT_OPS(,, size) \
+	__RTE_GEN_BIT_OPS(v_, volatile, size)
 
 __RTE_GEN_BIT_OPS_SIZE(32)
 __RTE_GEN_BIT_OPS_SIZE(64)
@@ -633,7 +667,8 @@ __RTE_GEN_BIT_OPS_SIZE(64)
 	__RTE_GEN_BIT_ATOMIC_FLIP(variant, qualifier, size)
 
 #define __RTE_GEN_BIT_ATOMIC_OPS_SIZE(size) \
-	__RTE_GEN_BIT_ATOMIC_OPS(,, size)
+	__RTE_GEN_BIT_ATOMIC_OPS(,, size) \
+	__RTE_GEN_BIT_ATOMIC_OPS(v_, volatile, size)
 
 __RTE_GEN_BIT_ATOMIC_OPS_SIZE(32)
 __RTE_GEN_BIT_ATOMIC_OPS_SIZE(64)
@@ -1342,120 +1377,178 @@ rte_log2_u64(uint64_t v)
 #undef rte_bit_atomic_test_and_clear
 #undef rte_bit_atomic_test_and_assign
 
-#define __RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, size, arg1_type, arg1_name) \
+#define __RTE_BIT_OVERLOAD_V_2(family, v, fun, c, size, arg1_type, arg1_name) \
 	static inline void \
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr, \
-			arg1_type arg1_name) \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
+				  arg1_type arg1_name) \
 	{ \
-		__rte_bit_ ## fun ## size(addr, arg1_name); \
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2(fun, qualifier, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 32, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2(fun, qualifier, 64, arg1_type, arg1_name)
+#define __RTE_BIT_OVERLOAD_SZ_2(family, fun, c, size, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2(family,, fun, c, size, arg1_type, \
+			       arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2(family, v_, fun, c volatile, size, \
+			       arg1_type, arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name) \
+#define __RTE_BIT_OVERLOAD_2(family, fun, c, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 32, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2(family, fun, c, 64, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_V_2R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name) \
 	static inline ret_type \
-	rte_bit_ ## fun(qualifier uint ## size ## _t *addr, \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
 			arg1_type arg1_name) \
 	{ \
-		return __rte_bit_ ## fun ## size(addr, arg1_name); \
+		return __rte_bit_ ## family ## v ## fun ## size(addr, \
+								arg1_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_2R(fun, qualifier, ret_type, arg1_type, arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, size, ret_type, arg1_type, \
+				 arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name) \
+	__RTE_BIT_OVERLOAD_V_2R(family, v_, fun, c volatile, \
+				size, ret_type, arg1_type, arg1_name)
+
+#define __RTE_BIT_OVERLOAD_2R(family, fun, c, ret_type, arg1_type, arg1_name) \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 32, ret_type, arg1_type, \
 				 arg1_name) \
-	__RTE_BIT_OVERLOAD_SZ_2R(fun, qualifier, 64, ret_type, arg1_type, \
+	__RTE_BIT_OVERLOAD_SZ_2R(family, fun, c, 64, ret_type, arg1_type, \
 				 arg1_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name) \
+#define __RTE_BIT_OVERLOAD_V_3(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name) \
 	static inline void \
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name, \
-			arg2_type arg2_name) \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{ \
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name); \
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_3(fun, qualifier, arg1_type, arg1_name, arg2_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3(family, fun, c, size, arg1_type, arg1_name, \
+				arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_V_3(family,, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_V_3(family, v_, fun, c volatile, size, arg1_type, \
+			       arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_3(family, fun, c, arg1_type, arg1_name, arg2_type, \
 			     arg2_name) \
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 32, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 32, arg1_type, arg1_name, \
 				arg2_type, arg2_name) \
-	__RTE_BIT_OVERLOAD_SZ_3(fun, qualifier, 64, arg1_type, arg1_name, \
+	__RTE_BIT_OVERLOAD_SZ_3(family, fun, c, 64, arg1_type, arg1_name, \
 				arg2_type, arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name) \
+#define __RTE_BIT_OVERLOAD_V_3R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name) \
 	static inline ret_type \
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name, \
-			arg2_type arg2_name) \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
+				  arg1_type arg1_name, arg2_type arg2_name) \
 	{ \
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name); \
+		return __rte_bit_ ## family ## v ## fun ## size(addr, \
+								arg1_name, \
+								arg2_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_3R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name) \
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name) \
-	__RTE_BIT_OVERLOAD_SZ_3R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name)
+	__RTE_BIT_OVERLOAD_V_3R(family,, fun, c, size, ret_type, \
+				arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_V_3R(family, v_, fun, c volatile, size, \
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name)
 
-#define __RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, size, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name) \
+#define __RTE_BIT_OVERLOAD_3R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 32, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name) \
+	__RTE_BIT_OVERLOAD_SZ_3R(family, fun, c, 64, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name)
+
+#define __RTE_BIT_OVERLOAD_V_4(family, v, fun, c, size, arg1_type, arg1_name, \
+			       arg2_type, arg2_name, arg3_type, arg3_name) \
 	static inline void \
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name, \
-			arg2_type arg2_name, arg3_type arg3_name) \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name) \
 	{ \
-		__rte_bit_ ## fun ## size(addr, arg1_name, arg2_name, \
-					  arg3_name); \
+		__rte_bit_ ## family ## v ## fun ## size(addr, arg1_name, \
+							 arg2_name, \
+							 arg3_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_4(fun, qualifier, arg1_type, arg1_name, arg2_type, \
-			     arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 32, arg1_type, arg1_name, \
+#define __RTE_BIT_OVERLOAD_SZ_4(family, fun, c, size, arg1_type, arg1_name, \
 				arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4(fun, qualifier, 64, arg1_type, arg1_name, \
-				arg2_type, arg2_name, arg3_type, arg3_name)
-
-#define __RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, size, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name) \
+	__RTE_BIT_OVERLOAD_V_4(family,, fun, c, size, arg1_type, \
+			       arg1_name, arg2_type, arg2_name, arg3_type, \
+			       arg3_name) \
+	__RTE_BIT_OVERLOAD_V_4(family, v_, fun, c volatile, size, \
+			       arg1_type, arg1_name, arg2_type, arg2_name, \
+			       arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4(family, fun, c, arg1_type, arg1_name, arg2_type, \
+			     arg2_name, arg3_type, arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 32, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4(family, fun, c, 64, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name)
+
+#define __RTE_BIT_OVERLOAD_V_4R(family, v, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name) \
 	static inline ret_type \
-	rte_bit_ ## fun(uint ## size ## _t *addr, arg1_type arg1_name, \
-			arg2_type arg2_name, arg3_type arg3_name) \
+	rte_bit_ ## family ## fun(c uint ## size ## _t *addr, \
+				  arg1_type arg1_name, arg2_type arg2_name, \
+				  arg3_type arg3_name) \
 	{ \
-		return __rte_bit_ ## fun ## size(addr, arg1_name, arg2_name, \
-						 arg3_name); \
+		return __rte_bit_ ## family ## v ## fun ## size(addr, \
+								arg1_name, \
+								arg2_name, \
+								arg3_name); \
 	}
 
-#define __RTE_BIT_OVERLOAD_4R(fun, qualifier, ret_type, arg1_type, arg1_name, \
-			      arg2_type, arg2_name, arg3_type, arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 32, ret_type, arg1_type, \
+#define __RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, size, ret_type, arg1_type, \
 				 arg1_name, arg2_type, arg2_name, arg3_type, \
 				 arg3_name) \
-	__RTE_BIT_OVERLOAD_SZ_4R(fun, qualifier, 64, ret_type, arg1_type, \
-				 arg1_name, arg2_type, arg2_name, arg3_type, \
-				 arg3_name)
-
-__RTE_BIT_OVERLOAD_2R(test, const, bool, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(set,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_2(clear,, unsigned int, nr)
-__RTE_BIT_OVERLOAD_3(assign,, unsigned int, nr, bool, value)
-__RTE_BIT_OVERLOAD_2(flip,, unsigned int, nr)
-
-__RTE_BIT_OVERLOAD_3R(atomic_test, const, bool, unsigned int, nr,
+	__RTE_BIT_OVERLOAD_V_4R(family,, fun, c, size, ret_type, arg1_type, \
+				arg1_name, arg2_type, arg2_name, arg3_type, \
+				arg3_name) \
+	__RTE_BIT_OVERLOAD_V_4R(family, v_, fun, c volatile, size, \
+				ret_type, arg1_type, arg1_name, arg2_type, \
+				arg2_name, arg3_type, arg3_name)
+
+#define __RTE_BIT_OVERLOAD_4R(family, fun, c, ret_type, arg1_type, arg1_name, \
+			      arg2_type, arg2_name, arg3_type, arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 32, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name) \
+	__RTE_BIT_OVERLOAD_SZ_4R(family, fun, c, 64, ret_type, \
+				 arg1_type, arg1_name, arg2_type, arg2_name, \
+				 arg3_type, arg3_name)
+
+__RTE_BIT_OVERLOAD_2R(, test, const, bool, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, set,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_2(, clear,, unsigned int, nr)
+__RTE_BIT_OVERLOAD_3(, assign,, unsigned int, nr, bool, value)
+__RTE_BIT_OVERLOAD_2(, flip,, unsigned int, nr)
+
+__RTE_BIT_OVERLOAD_3R(atomic_, test, const, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_set,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_clear,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_4(atomic_assign,, unsigned int, nr, bool, value,
+__RTE_BIT_OVERLOAD_3(atomic_, set,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3(atomic_, clear,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_4(atomic_, assign,, unsigned int, nr, bool, value,
 		     int, memory_order)
-__RTE_BIT_OVERLOAD_3(atomic_flip,, unsigned int, nr, int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_set,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3(atomic_, flip,, unsigned int, nr, int, memory_order)
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_set,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_3R(atomic_test_and_clear,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_3R(atomic_, test_and_clear,, bool, unsigned int, nr,
 		      int, memory_order)
-__RTE_BIT_OVERLOAD_4R(atomic_test_and_assign,, bool, unsigned int, nr,
+__RTE_BIT_OVERLOAD_4R(atomic_, test_and_assign,, bool, unsigned int, nr,
 		      bool, value, int, memory_order)
 
 #endif
-- 
2.34.1
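
Editor's note (an illustrative sketch, not part of the patch): the atomic
overloads gain the same volatile handling, so a volatile-qualified bitmap
shared between lcores can be passed directly. The bitmap name, slot
numbering and the choice of rte_memory_order_acquire below are assumptions
made for the example.

#include <stdbool.h>
#include <stdint.h>
#include <rte_bitops.h>

/* Atomically claim a slot in a shared, volatile-qualified bitmap.
 * rte_bit_atomic_test_and_set() returns the previous bit value, so a
 * 'false' result means this caller set the bit and owns the slot.
 */
static bool
claim_slot(volatile uint64_t *slots, unsigned int slot)
{
	return !rte_bit_atomic_test_and_set(slots, slot,
					    rte_memory_order_acquire);
}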