* [PATCH 0/6] RFC: optional rte stdatomics API
@ 2023-08-11 1:31 Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (10 more replies)
0 siblings, 11 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h, selected with the meson option
enable_stdatomic=true. For targets built with enable_stdatomic=false, no
functional change is intended.
Be aware this series does not contain all of the changes needed to use
stdatomic across the DPDK tree; it only introduces the minimum required to
allow the option to be used, which is a prerequisite for a clean CI run
(probably using clang) with enable_stdatomic=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomic use when enable_stdatomic=true.
Notes:
* additional libraries beyond EAL that make atomics use visible across
their API/ABI surface will be converted in the next series.
* the 'eal: add rte atomic qualifier with casts' patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. For now, some implementation-dependent
casts are used to prevent cascading changes / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate new series alongside the existing MSVC series.
Please keep in mind we would like to prioritize the review / acceptance of
this series, since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
config/rte_config.h | 1 +
devtools/checkpatches.sh | 8 ++
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++-----
lib/eal/arm/include/rte_atomic_64.h | 32 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 66 ++++++++-----
lib/eal/include/generic/rte_pause.h | 41 ++++----
lib/eal/include/generic/rte_rwlock.h | 47 ++++-----
lib/eal/include/generic/rte_spinlock.h | 19 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 50 +++++-----
lib/eal/include/rte_pflock.h | 24 ++---
lib/eal/include/rte_seqcount.h | 18 ++--
lib/eal/include/rte_stdatomic.h | 162 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 42 ++++----
lib/eal/include/rte_trace_point.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 50 +++++-----
lib/eal/x86/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 1 +
27 files changed, 445 insertions(+), 243 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 1:31 [PATCH 0/6] RFC: optional rte stdatomics API Tyler Retzlaff
@ 2023-08-11 1:31 ` Tyler Retzlaff
2023-08-11 8:56 ` Bruce Richardson
2023-08-11 9:42 ` Morten Brørup
2023-08-11 1:31 ` [PATCH 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (9 subsequent siblings)
10 siblings, 2 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide an API for atomic operations in the rte namespace that may
optionally be configured to use standard C11 atomics with the meson
option enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
config/meson.build | 1 +
config/rte_config.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_stdatomic.h | 162 ++++++++++++++++++++++++++++++++++++++++
meson_options.txt | 1 +
5 files changed, 166 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e..f17b6ae 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -13,6 +13,7 @@
#define _RTE_CONFIG_H_
#include <rte_build_config.h>
+#include <rte_stdatomic.h>
/* legacy defines */
#ifdef RTE_EXEC_ENV_LINUX
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index b0db9b3..f8a47b3 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -43,6 +43,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..832fd07
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,162 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+typedef int rte_memory_order;
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomic=true but atomics are not supported by the toolchain
+#endif
+
+#include <stdatomic.h>
+
+#define __rte_atomic _Atomic
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+_Static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+_Static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+_Static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+_Static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+_Static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+#else
+
+#define __rte_atomic
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..7d6784d 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,7 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description: 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-11 1:31 [PATCH 0/6] RFC: optional rte stdatomics API Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-11 1:31 ` Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (8 subsequent siblings)
10 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use the rte optional atomics API instead of
directly using and exposing toolchain-specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_64.h | 32 +++++++++++-----------
lib/eal/arm/include/rte_pause_64.h | 26 +++++++++---------
lib/eal/arm/rte_power_intrinsics.c | 8 +++---
lib/eal/common/eal_common_trace.c | 16 ++++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++++-----------------
lib/eal/include/generic/rte_pause.h | 38 +++++++++++++-------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++++---------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++-------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++++-----------------
lib/eal/include/rte_pflock.h | 24 ++++++++--------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_ticketlock.h | 42 ++++++++++++++--------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 50 +++++++++++++++++-----------------
lib/eal/x86/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
18 files changed, 225 insertions(+), 217 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..cc25970 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+rte_mcslock_t * __rte_atomic p_ml;
+rte_mcslock_t * __rte_atomic p_ml_try;
+rte_mcslock_t * __rte_atomic p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..ac3cec9 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index aef44e2..15a36f3 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -62,7 +62,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -79,7 +79,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -99,7 +99,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -153,7 +153,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -206,7 +206,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -273,7 +273,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -287,7 +287,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -340,7 +340,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -360,7 +360,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -379,7 +379,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -399,7 +399,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -485,7 +485,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -552,7 +552,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -566,7 +566,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -619,7 +619,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -639,7 +639,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -658,7 +658,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -678,7 +678,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -763,7 +763,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -884,7 +884,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -903,7 +903,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -961,7 +961,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -985,7 +985,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1116,8 +1116,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..3ea1553 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -35,13 +35,13 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -53,13 +53,13 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -71,42 +71,42 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -124,16 +124,16 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..fc0d5fd 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -57,7 +57,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ int32_t __rte_atomic cnt;
} rte_rwlock_t;
/**
@@ -92,21 +92,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -127,20 +127,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -158,7 +158,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -178,10 +178,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -201,22 +201,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -233,7 +236,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -247,7 +250,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e5ff348 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -28,7 +28,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -65,10 +65,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -89,7 +89,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -112,9 +112,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -128,7 +127,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
/**
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..982fd81 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -32,8 +32,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ struct rte_mcslock * __rte_atomic next;
+ int __rte_atomic locked; /* 1 if the queue locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -48,13 +48,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -62,7 +62,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -76,19 +76,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -100,34 +100,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t * __rte_atomic me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ uintptr_t __rte_atomic *next;
+ next = (uintptr_t __rte_atomic *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -141,10 +141,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -155,8 +155,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -168,9 +168,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(rte_mcslock_t * __rte_atomic msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..7d51fc9 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -40,8 +40,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ uint16_t __rte_atomic in;
+ uint16_t __rte_atomic out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -116,14 +116,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -139,7 +139,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -160,8 +160,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -172,10 +173,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -192,10 +194,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..f581908 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -31,7 +31,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ uint32_t __rte_atomic sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -105,11 +105,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -160,9 +160,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -204,12 +204,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -236,7 +236,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..31b5193 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -29,10 +29,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ uint32_t __rte_atomic tickets;
struct {
- uint16_t current;
- uint16_t next;
+ uint16_t __rte_atomic current;
+ uint16_t __rte_atomic next;
} s;
} rte_ticketlock_t;
@@ -50,7 +50,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -62,8 +62,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -75,8 +76,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -91,12 +92,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -115,7 +117,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -126,7 +128,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ int __rte_atomic user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -146,7 +148,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -161,9 +163,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -178,8 +180,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -197,10 +199,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..2bcf954 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -32,7 +32,7 @@
#include <rte_uuid.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef uint64_t __rte_atomic rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -358,7 +358,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..44822db 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..aedce9b 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,14 +82,14 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
rte_atomic_thread_fence(int memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
__atomic_thread_fence(memorder);
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..c76218a 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile int __rte_atomic *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
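[Editorial note: the conversions above replace GCC `__atomic` builtins with `rte_atomic_*_explicit` calls and `rte_memory_order_*` constants which, under enable_stdatomic=true, map onto C11 `<stdatomic.h>`. A minimal, self-contained sketch of the spinlock pattern being converted, written directly against C11 atomics — the `toy_*` names are hypothetical and this is not the DPDK implementation:]

```c
#include <stdatomic.h>
#include <assert.h>

/* Toy sketch of the rte_spinlock pattern in plain C11 stdatomic:
 * acquire on lock, release on unlock, relaxed on CAS failure. */
typedef struct {
	_Atomic int locked; /* 0 = unlocked, 1 = locked */
} toy_spinlock_t;

static void
toy_lock(toy_spinlock_t *sl)
{
	int exp = 0;

	/* Strong CAS: acquire semantics on success so the critical
	 * section cannot be reordered before the lock is taken. */
	while (!atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
			memory_order_acquire, memory_order_relaxed))
		exp = 0; /* CAS overwrote exp with the observed value */
}

static void
toy_unlock(toy_spinlock_t *sl)
{
	/* Release semantics: critical-section stores become visible
	 * before the lock is observed as free. */
	atomic_store_explicit(&sl->locked, 0, memory_order_release);
}
```

Under enable_stdatomic=false the `rte_atomic_*_explicit` macros instead expand to the `__atomic` builtins, so generated code is intended to be unchanged on existing targets.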
* [PATCH 3/6] eal: add rte atomic qualifier with casts
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-11 1:31 ` Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (7 subsequent siblings)
10 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the rte optional atomics inline
functions to prevent the need to pass __rte_atomic qualified arguments
from cascading to callers.
Warning: this is implementation dependent and is done temporarily to avoid
having to convert more of the DPDK libraries and tests in the initial
series that introduces the API. The casts assume the qualified and
unqualified types have the same ABI; if that assumption does not hold, the
risk is only realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 15a36f3..2c65304 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -273,7 +273,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -287,7 +288,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -340,7 +342,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -360,7 +363,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -379,7 +383,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -399,7 +404,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -552,7 +558,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -566,7 +573,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -619,7 +627,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -639,7 +648,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -658,7 +668,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -678,7 +689,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -884,7 +896,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -903,7 +916,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -961,7 +975,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -985,7 +1000,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 3ea1553..db8a1f8 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -86,7 +86,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint16_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -96,7 +97,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint32_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -106,7 +108,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..6c192f0 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t __rte_atomic *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
* [PATCH 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-11 1:31 ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-11 1:31 ` Tyler Retzlaff
2023-08-11 1:32 ` [PATCH 5/6] bpf: " Tyler Retzlaff
` (6 subsequent siblings)
10 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:31 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..ffbdae5 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile int64_t __rte_atomic bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
* [PATCH 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-11 1:31 ` [PATCH 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-11 1:32 ` Tyler Retzlaff
2023-08-11 1:32 ` [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
` (5 subsequent siblings)
10 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..b300447 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ uint32_t __rte_atomic use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((uint32_t __rte_atomic *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1
* [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-11 1:32 ` [PATCH 5/6] bpf: " Tyler Retzlaff
@ 2023-08-11 1:32 ` Tyler Retzlaff
2023-08-11 8:57 ` Bruce Richardson
2023-08-11 9:51 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 subsequent siblings)
10 siblings, 2 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 1:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins; DPDK now requires
the use of rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
devtools/checkpatches.sh | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..a32f02e 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -102,6 +102,14 @@ check_forbidden_additions() { # <patch>
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
+ # refrain from using compiler __atomic_xxx builtins
+ awk -v FOLDERS="lib drivers app examples" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
+ -v RET_ON_FAIL=1 \
+ -v MESSAGE='Using __atomic_xxx builtins' \
+ -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+ "$1" || res=1
+
# refrain from using compiler __atomic_thread_fence()
# It should be avoided on x86 for SMP case.
awk -v FOLDERS="lib drivers app examples" \
--
1.8.3.1
* Re: [PATCH 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-11 8:56 ` Bruce Richardson
2023-08-11 9:42 ` Morten Brørup
1 sibling, 0 replies; 82+ messages in thread
From: Bruce Richardson @ 2023-08-11 8:56 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
On Thu, Aug 10, 2023 at 06:31:56PM -0700, Tyler Retzlaff wrote:
> Provide API for atomic operations in the rte namespace that may
> optionally be configured to use C11 atomics with meson
> option enable_stdatomics=true
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> config/meson.build | 1 +
> config/rte_config.h | 1 +
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_stdatomic.h | 162 ++++++++++++++++++++++++++++++++++++++++
> meson_options.txt | 1 +
> 5 files changed, 166 insertions(+)
> create mode 100644 lib/eal/include/rte_stdatomic.h
>
<snip>
> diff --git a/meson_options.txt b/meson_options.txt
> index 621e1ca..7d6784d 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -46,6 +46,7 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
> 'Atomically access the mbuf refcnt.')
> option('platform', type: 'string', value: 'native', description:
> 'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
> +option('enable_stdatomic', type: 'boolean', value: false, description: 'enable use of C11 stdatomic')
Minor nit - all other options in this file put the description on its own
line. For consistency I think we should do the same here.
> option('enable_trace_fp', type: 'boolean', value: false, description:
> 'enable fast path trace points.')
> option('tests', type: 'boolean', value: true, description:
> --
> 1.8.3.1
>
* Re: [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 1:32 ` [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-11 8:57 ` Bruce Richardson
2023-08-11 9:51 ` Morten Brørup
1 sibling, 0 replies; 82+ messages in thread
From: Bruce Richardson @ 2023-08-11 8:57 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
On Thu, Aug 10, 2023 at 06:32:01PM -0700, Tyler Retzlaff wrote:
> Refrain from using compiler __atomic_xxx builtins; DPDK now requires
> the use of rte_atomic_<op>_explicit macros when operating on DPDK
> atomic variables.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
> ---
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
* RE: [PATCH 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-11 8:56 ` Bruce Richardson
@ 2023-08-11 9:42 ` Morten Brørup
2023-08-11 15:54 ` Tyler Retzlaff
1 sibling, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-11 9:42 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 03.32
>
> Provide API for atomic operations in the rte namespace that may
> optionally be configured to use C11 atomics with meson
> option enable_stdatomics=true
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> config/meson.build | 1 +
> config/rte_config.h | 1 +
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_stdatomic.h | 162
> ++++++++++++++++++++++++++++++++++++++++
> meson_options.txt | 1 +
> 5 files changed, 166 insertions(+)
> create mode 100644 lib/eal/include/rte_stdatomic.h
>
> diff --git a/config/meson.build b/config/meson.build
> index d822371..ec49964 100644
> --- a/config/meson.build
> +++ b/config/meson.build
> @@ -303,6 +303,7 @@ endforeach
> # set other values pulled from the build options
> dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
> dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
> +dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
> dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
> # values which have defaults which may be overridden
> dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
> diff --git a/config/rte_config.h b/config/rte_config.h
> index 400e44e..f17b6ae 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -13,6 +13,7 @@
> #define _RTE_CONFIG_H_
>
> #include <rte_build_config.h>
> +#include <rte_stdatomic.h>
>
> /* legacy defines */
> #ifdef RTE_EXEC_ENV_LINUX
> diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
> index b0db9b3..f8a47b3 100644
> --- a/lib/eal/include/meson.build
> +++ b/lib/eal/include/meson.build
> @@ -43,6 +43,7 @@ headers += files(
> 'rte_seqlock.h',
> 'rte_service.h',
> 'rte_service_component.h',
> + 'rte_stdatomic.h',
> 'rte_string_fns.h',
> 'rte_tailq.h',
> 'rte_thread.h',
> diff --git a/lib/eal/include/rte_stdatomic.h
> b/lib/eal/include/rte_stdatomic.h
> new file mode 100644
> index 0000000..832fd07
> --- /dev/null
> +++ b/lib/eal/include/rte_stdatomic.h
> @@ -0,0 +1,162 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_STDATOMIC_H_
> +#define _RTE_STDATOMIC_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +typedef int rte_memory_order;
In C11 memory_order is an enumerated type, and in GCC built-ins it is an int. If possible, rte_memory_order should match in each case; i.e. remove the typedef here, and make two variants of it instead.
> +
> +#ifdef RTE_ENABLE_STDATOMIC
> +#ifdef __STDC_NO_ATOMICS__
> +#error enable_stdatomics=true but atomics not supported by toolchain
> +#endif
> +
> +#include <stdatomic.h>
> +
> +#define __rte_atomic _Atomic
Move the (changed) C11 memory order type definition here:
/* The memory order is an enumerated type in C11. */
#define memory_order rte_memory_order
> +
> +#define rte_memory_order_relaxed memory_order_relaxed
> +#ifdef __ATOMIC_RELAXED
> +_Static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
> +#endif
> +
> +#define rte_memory_order_consume memory_order_consume
> +#ifdef __ATOMIC_CONSUME
> +_Static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
> + "rte_memory_order_consume == __ATOMIC_CONSUME");
> +#endif
> +
> +#define rte_memory_order_acquire memory_order_acquire
> +#ifdef __ATOMIC_ACQUIRE
> +_Static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
> + "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
> +#endif
> +
> +#define rte_memory_order_release memory_order_release
> +#ifdef __ATOMIC_RELEASE
> +_Static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
> + "rte_memory_order_release == __ATOMIC_RELEASE");
> +#endif
> +
> +#define rte_memory_order_acq_rel memory_order_acq_rel
> +#ifdef __ATOMIC_ACQ_REL
> +_Static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
> + "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
> +#endif
> +
> +#define rte_memory_order_seq_cst memory_order_seq_cst
> +#ifdef __ATOMIC_SEQ_CST
> +_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
> + "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
> +#endif
Excellent idea adding these _Static_asserts!
Have you tested (with the toolchain you are targeting with this _Static_assert) that e.g. __ATOMIC_RELAXED is actually #defined, so the preprocessor can see it? (I guess that being a built-in, it might not be a #define; it might be a magic value known only by the compiler.)
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> + atomic_load_explicit(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> + atomic_store_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> + atomic_exchange_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> + atomic_fetch_add_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> + atomic_fetch_sub_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> + atomic_fetch_and_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> + atomic_fetch_xor_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> + atomic_fetch_or_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> + atomic_fetch_nand_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> + atomic_flag_test_and_set_explicit(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> + atomic_flag_clear_explicit(ptr, memorder)
> +
> +#else
> +
> +#define __rte_atomic
Move the built-ins memory order type definition here:
/* The memory order is an integer type in GCC built-ins,
* not an enumerated type like in C11.
*/
typedef int rte_memory_order;
> +
> +#define rte_memory_order_relaxed __ATOMIC_RELAXED
> +#define rte_memory_order_consume __ATOMIC_CONSUME
> +#define rte_memory_order_acquire __ATOMIC_ACQUIRE
> +#define rte_memory_order_release __ATOMIC_RELEASE
> +#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
> +#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
Agree; the memorder type is int, so no enum here.
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> + __atomic_load_n(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> + __atomic_store_n(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> + __atomic_exchange_n(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + __atomic_compare_exchange_n( \
> + ptr, expected, desired, 0, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + __atomic_compare_exchange_n( \
> + ptr, expected, desired, 1, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> + __atomic_fetch_add(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> + __atomic_fetch_sub(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> + __atomic_fetch_and(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> + __atomic_fetch_xor(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> + __atomic_fetch_or(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> + __atomic_fetch_nand(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> + __atomic_test_and_set(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> + __atomic_clear(ptr, memorder)
> +
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_STDATOMIC_H_ */
> diff --git a/meson_options.txt b/meson_options.txt
> index 621e1ca..7d6784d 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -46,6 +46,7 @@ option('mbuf_refcnt_atomic', type: 'boolean', value:
> true, description:
> 'Atomically access the mbuf refcnt.')
> option('platform', type: 'string', value: 'native', description:
> 'Platform to build, either "native", "generic" or a SoC. Please
> refer to the Linux build guide for more information.')
> +option('enable_stdatomic', type: 'boolean', value: false, description:
> 'enable use of C11 stdatomic')
> option('enable_trace_fp', type: 'boolean', value: false, description:
> 'enable fast path trace points.')
> option('tests', type: 'boolean', value: true, description:
> --
> 1.8.3.1
* RE: [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 1:32 ` [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-11 8:57 ` Bruce Richardson
@ 2023-08-11 9:51 ` Morten Brørup
2023-08-11 15:56 ` Tyler Retzlaff
1 sibling, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-11 9:51 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 03.32
>
> Refrain from using compiler __atomic_xxx builtins; DPDK now requires
> the use of rte_atomic_<op>_explicit macros when operating on DPDK
> atomic variables.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Acked-by: Morten Brørup <mb@smartsharesystems.com>
The Acked-by should have been:
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> devtools/checkpatches.sh | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
> index 43f5e36..a32f02e 100755
> --- a/devtools/checkpatches.sh
> +++ b/devtools/checkpatches.sh
> @@ -102,6 +102,14 @@ check_forbidden_additions() { # <patch>
>         -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
>         "$1" || res=1
>
> + # refrain from using compiler __atomic_xxx builtins
> + awk -v FOLDERS="lib drivers app examples" \
> + -v EXPRESSIONS="__atomic_.*\\\(" \
This expression is a superset of other expressions already in checkpatches.sh (search for "__atomic" there, and you'll find them). Perhaps they can be removed?
> + -v RET_ON_FAIL=1 \
> + -v MESSAGE='Using __atomic_xxx builtins' \
> +       -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> +       "$1" || res=1
> +
> # refrain from using compiler __atomic_thread_fence()
> # It should be avoided on x86 for SMP case.
> awk -v FOLDERS="lib drivers app examples" \
> --
> 1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
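The awk-based check discussed above boils down to a regular-expression match on added patch lines. As a rough stand-alone approximation (plain grep instead of check-forbidden-tokens.awk; the sample patch line is hypothetical):

```shell
# Hypothetical approximation of the new checkpatches rule: flag added
# lines ("+") that call a GCC __atomic_* builtin directly. The real
# check drives check-forbidden-tokens.awk with FOLDERS/EXPRESSIONS.
patch_line='+	__atomic_fetch_add(&cnt, 1, __ATOMIC_RELAXED);'
if printf '%s\n' "$patch_line" | grep -qE '^\+.*__atomic_.*\('; then
    echo 'Using __atomic_xxx builtins'
fi
```

Call sites converted to the rte_atomic_<op>_explicit macros no longer match the expression, which is the point of the rule.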
* Re: [PATCH 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 9:42 ` Morten Brørup
@ 2023-08-11 15:54 ` Tyler Retzlaff
2023-08-14 9:04 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 15:54 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Fri, Aug 11, 2023 at 11:42:12AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Friday, 11 August 2023 03.32
> >
> > Provide API for atomic operations in the rte namespace that may
> > optionally be configured to use C11 atomics with meson
> > option enable_stdatomics=true
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
> > config/meson.build | 1 +
> > config/rte_config.h | 1 +
> > lib/eal/include/meson.build | 1 +
> > lib/eal/include/rte_stdatomic.h | 162 ++++++++++++++++++++++++++++++++++++++++
> > meson_options.txt | 1 +
> > 5 files changed, 166 insertions(+)
> > create mode 100644 lib/eal/include/rte_stdatomic.h
> >
> > diff --git a/config/meson.build b/config/meson.build
> > index d822371..ec49964 100644
> > --- a/config/meson.build
> > +++ b/config/meson.build
> > @@ -303,6 +303,7 @@ endforeach
> > # set other values pulled from the build options
> > dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
> > dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
> > +dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
> > dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
> > # values which have defaults which may be overridden
> > dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
> > diff --git a/config/rte_config.h b/config/rte_config.h
> > index 400e44e..f17b6ae 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -13,6 +13,7 @@
> > #define _RTE_CONFIG_H_
> >
> > #include <rte_build_config.h>
> > +#include <rte_stdatomic.h>
> >
> > /* legacy defines */
> > #ifdef RTE_EXEC_ENV_LINUX
> > diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
> > index b0db9b3..f8a47b3 100644
> > --- a/lib/eal/include/meson.build
> > +++ b/lib/eal/include/meson.build
> > @@ -43,6 +43,7 @@ headers += files(
> > 'rte_seqlock.h',
> > 'rte_service.h',
> > 'rte_service_component.h',
> > + 'rte_stdatomic.h',
> > 'rte_string_fns.h',
> > 'rte_tailq.h',
> > 'rte_thread.h',
> > diff --git a/lib/eal/include/rte_stdatomic.h
> > b/lib/eal/include/rte_stdatomic.h
> > new file mode 100644
> > index 0000000..832fd07
> > --- /dev/null
> > +++ b/lib/eal/include/rte_stdatomic.h
> > @@ -0,0 +1,162 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2023 Microsoft Corporation
> > + */
> > +
> > +#ifndef _RTE_STDATOMIC_H_
> > +#define _RTE_STDATOMIC_H_
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +typedef int rte_memory_order;
>
> In C11 memory_order is an enumerated type, and in GCC built-ins it is an int. If possible, rte_memory_order should be too; i.e. remove the typedef here, and make two variants of it instead.
will be in v2
>
> > +
> > +#ifdef RTE_ENABLE_STDATOMIC
> > +#ifdef __STDC_NO_ATOMICS__
> > +#error enable_stdatomic=true but atomics not supported by toolchain
> > +#endif
> > +
> > +#include <stdatomic.h>
> > +
> > +#define __rte_atomic _Atomic
>
> Move the (changed) C11 memory order type definition here:
>
> /* The memory order is an enumerated type in C11. */
> #define memory_order rte_memory_order
>
> > +
> > +#define rte_memory_order_relaxed memory_order_relaxed
> > +#ifdef __ATOMIC_RELAXED
> > +_Static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> > + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
> > +#endif
> > +
> > +#define rte_memory_order_consume memory_order_consume
> > +#ifdef __ATOMIC_CONSUME
> > +_Static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
> > + "rte_memory_order_consume == __ATOMIC_CONSUME");
> > +#endif
> > +
> > +#define rte_memory_order_acquire memory_order_acquire
> > +#ifdef __ATOMIC_ACQUIRE
> > +_Static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
> > + "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
> > +#endif
> > +
> > +#define rte_memory_order_release memory_order_release
> > +#ifdef __ATOMIC_RELEASE
> > +_Static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
> > + "rte_memory_order_release == __ATOMIC_RELEASE");
> > +#endif
> > +
> > +#define rte_memory_order_acq_rel memory_order_acq_rel
> > +#ifdef __ATOMIC_ACQ_REL
> > +_Static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
> > + "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
> > +#endif
> > +
> > +#define rte_memory_order_seq_cst memory_order_seq_cst
> > +#ifdef __ATOMIC_SEQ_CST
> > +_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
> > + "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
> > +#endif
>
> Excellent idea adding these _Static_asserts!
>
> Have you tested (with the toolchain you are targeting with this _Static_assert) that e.g. __ATOMIC_RELAXED is actually #defined, so the preprocessor can see it? (I guess that being a built-it, it might not be a #define, it might be a magic value known by the compiler only.)
* llvm and gcc both expose it as a built-in #define for test builds i
have run. worst case the assert is lost if it isn't.
* since i have to handle non-{clang,gcc} too i still guard with ifdef
* i do need to switch to using assert.h static_assert macro to
inter-operate with c++ in v2
>
> > +
> > +#define rte_atomic_load_explicit(ptr, memorder) \
> > + atomic_load_explicit(ptr, memorder)
> > +
> > +#define rte_atomic_store_explicit(ptr, val, memorder) \
> > + atomic_store_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> > + atomic_exchange_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_compare_exchange_strong_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > + atomic_compare_exchange_strong_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder)
> > +
> > +#define rte_atomic_compare_exchange_weak_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > + atomic_compare_exchange_weak_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder)
> > +
> > +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> > + atomic_fetch_add_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> > + atomic_fetch_sub_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> > + atomic_fetch_and_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> > + atomic_fetch_xor_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> > + atomic_fetch_or_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> > + atomic_fetch_nand_explicit(ptr, val, memorder)
> > +
> > +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> > + atomic_flag_test_and_set_explicit(ptr, memorder)
> > +
> > +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> > + atomic_flag_clear_explicit(ptr, memorder)
> > +
> > +#else
> > +
> > +#define __rte_atomic
>
> Move the built-ins memory order type definition here:
>
> /* The memory order is an integer type in GCC built-ins,
> * not an enumerated type like in C11.
> */
> typedef int rte_memory_order;
>
> > +
> > +#define rte_memory_order_relaxed __ATOMIC_RELAXED
> > +#define rte_memory_order_consume __ATOMIC_CONSUME
> > +#define rte_memory_order_acquire __ATOMIC_ACQUIRE
> > +#define rte_memory_order_release __ATOMIC_RELEASE
> > +#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
> > +#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
>
> Agree; the memorder type is int, so no enum here.
>
> > +
> > +#define rte_atomic_load_explicit(ptr, memorder) \
> > + __atomic_load_n(ptr, memorder)
> > +
> > +#define rte_atomic_store_explicit(ptr, val, memorder) \
> > + __atomic_store_n(ptr, val, memorder)
> > +
> > +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> > + __atomic_exchange_n(ptr, val, memorder)
> > +
> > +#define rte_atomic_compare_exchange_strong_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > + __atomic_compare_exchange_n( \
> > + ptr, expected, desired, 0, succ_memorder, fail_memorder)
> > +
> > +#define rte_atomic_compare_exchange_weak_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > + __atomic_compare_exchange_n( \
> > + ptr, expected, desired, 1, succ_memorder, fail_memorder)
> > +
> > +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> > + __atomic_fetch_add(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> > + __atomic_fetch_sub(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> > + __atomic_fetch_and(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> > + __atomic_fetch_xor(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> > + __atomic_fetch_or(ptr, val, memorder)
> > +
> > +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> > + __atomic_fetch_nand(ptr, val, memorder)
> > +
> > +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> > + __atomic_test_and_set(ptr, memorder)
> > +
> > +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> > + __atomic_clear(ptr, memorder)
> > +
> > +#endif
> > +
> > +#ifdef __cplusplus
> > +}
> > +#endif
> > +
> > +#endif /* _RTE_STDATOMIC_H_ */
> > diff --git a/meson_options.txt b/meson_options.txt
> > index 621e1ca..7d6784d 100644
> > --- a/meson_options.txt
> > +++ b/meson_options.txt
> > @@ -46,6 +46,7 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
> >         'Atomically access the mbuf refcnt.')
> > option('platform', type: 'string', value: 'native', description:
> >         'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
> > +option('enable_stdatomic', type: 'boolean', value: false, description: 'enable use of C11 stdatomic')
> > option('enable_trace_fp', type: 'boolean', value: false, description:
> >         'enable fast path trace points.')
> > option('tests', type: 'boolean', value: true, description:
> > --
> > 1.8.3.1
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 9:51 ` Morten Brørup
@ 2023-08-11 15:56 ` Tyler Retzlaff
2023-08-14 6:37 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 15:56 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Fri, Aug 11, 2023 at 11:51:17AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Friday, 11 August 2023 03.32
> >
> > Refrain from using compiler __atomic_xxx builtins; DPDK now requires
> > the use of rte_atomic_<op>_explicit macros when operating on DPDK
> > atomic variables.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Acked-by: Morten Brørup <mb@smartsharesystems.com>
>
> The Acked-by should have been:
> Suggested-by: Morten Brørup <mb@smartsharesystems.com>
ooh, did i make a mistake? i was carrying the ack from my abandoned
series (i thought you had acked this patch on that series, sorry).
i'll change it to suggested-by.
thanks!
>
> > ---
> > devtools/checkpatches.sh | 8 ++++++++
> > 1 file changed, 8 insertions(+)
> >
> > diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
> > index 43f5e36..a32f02e 100755
> > --- a/devtools/checkpatches.sh
> > +++ b/devtools/checkpatches.sh
> > @@ -102,6 +102,14 @@ check_forbidden_additions() { # <patch>
> >         -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> >         "$1" || res=1
> >
> > + # refrain from using compiler __atomic_xxx builtins
> > + awk -v FOLDERS="lib drivers app examples" \
> > + -v EXPRESSIONS="__atomic_.*\\\(" \
>
> This expression is a superset of other expressions already in checkpatches.sh (search for "__atomic" there, and you'll find them). Perhaps they can be removed?
yes, seems like a good idea.
v2
>
> > + -v RET_ON_FAIL=1 \
> > + -v MESSAGE='Using __atomic_xxx builtins' \
> > +       -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> > +       "$1" || res=1
> > +
> > # refrain from using compiler __atomic_thread_fence()
> > # It should be avoided on x86 for SMP case.
> > awk -v FOLDERS="lib drivers app examples" \
> > --
> > 1.8.3.1
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v2 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (5 preceding siblings ...)
2023-08-11 1:32 ` [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (5 more replies)
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 subsequent siblings)
10 siblings, 6 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h via enable_stdatomic=true; for targets
built with enable_stdatomic=false, no functional change is intended.
Be aware this does not contain all the changes needed to use stdatomic across
the DPDK tree; it only introduces the minimum to allow the option to be used,
which is a prerequisite for a clean CI (probably using clang) that can be run
with enable_stdatomic=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomic use when enable_stdatomic=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in subsequent series.
* The "eal: add rte atomic qualifier with casts" patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true; right now some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will
be introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this series since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedefs of rte_memory_order for enable_stdatomic=true
vs enable_stdatomic=false instead of a single typedef to int
note: as a slight tweak to reviewer feedback, i've chosen to use a typedef
for both enable_stdatomic={true,false} (just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h in the other places where it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++-----
lib/eal/include/generic/rte_pause.h | 42 +++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++-----
lib/eal/include/rte_pflock.h | 25 +++--
lib/eal/include/rte_seqcount.h | 19 ++--
lib/eal/include/rte_stdatomic.h | 182 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 ++++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 481 insertions(+), 258 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 7:06 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (4 subsequent siblings)
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with meson
option enable_stdatomics=true
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
config/meson.build | 1 +
lib/eal/include/generic/rte_atomic.h | 1 +
lib/eal/include/generic/rte_pause.h | 1 +
lib/eal/include/generic/rte_rwlock.h | 1 +
lib/eal/include/generic/rte_spinlock.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 1 +
lib/eal/include/rte_pflock.h | 1 +
lib/eal/include/rte_seqcount.h | 1 +
lib/eal/include/rte_stdatomic.h | 182 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 1 +
lib/eal/include/rte_trace_point.h | 1 +
meson_options.txt | 2 +
13 files changed, 195 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index aef44e2..efd29eb 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
+#include <rte_stdatomic.h>
#ifdef __DOXYGEN__
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..bebfa95 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -16,6 +16,7 @@
#include <assert.h>
#include <rte_common.h>
#include <rte_atomic.h>
+#include <rte_stdatomic.h>
/**
* Pause CPU execution for a short while
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..24ebec6 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -32,6 +32,7 @@
#include <rte_common.h>
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_rwlock_t type.
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e18f0cd 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -23,6 +23,7 @@
#endif
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_spinlock_t type.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index a0463ef..e94b056 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -42,6 +42,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..18e63eb 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -27,6 +27,7 @@
#include <rte_common.h>
#include <rte_pause.h>
#include <rte_branch_prediction.h>
+#include <rte_stdatomic.h>
/**
* The rte_mcslock_t type.
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..790be71 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -34,6 +34,7 @@
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_pflock_t type.
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..098af26 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -26,6 +26,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
+#include <rte_stdatomic.h>
/**
* The RTE seqcount type.
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..f03be9b
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomic=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+#define __rte_atomic _Atomic
+
+/* The memory order is an enumerated type in C11. */
+typedef memory_order rte_memory_order;
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ atomic_thread_fence(memorder)
+
+#else
+
+#define __rte_atomic
+
+/* The memory order is an integer type in GCC built-ins,
+ * not an enumerated type like in C11.
+ */
+typedef int rte_memory_order;
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ __atomic_thread_fence(memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..e22d119 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -24,6 +24,7 @@
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_ticketlock_t type.
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..d587591 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -30,6 +30,7 @@
#include <rte_per_lcore.h>
#include <rte_string_fns.h>
#include <rte_uuid.h>
+#include <rte_stdatomic.h>
/** The tracepoint object. */
typedef uint64_t rte_trace_point_t;
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..bb22bba 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,8 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+ 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
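With the option added above, opting in is a single meson flag; the build-directory name below is illustrative, only the option name comes from the patch:

```shell
# Configure a build with C11 stdatomic enabled (default remains false).
meson setup build-stdatomic -Denable_stdatomic=true
ninja -C build-stdatomic
```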
--
1.8.3.1

* [PATCH v2 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 8:00 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (3 subsequent siblings)
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use the rte optional atomics API instead of
directly using and exposing toolchain-specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_32.h | 4 +--
lib/eal/arm/include/rte_atomic_64.h | 36 +++++++++++------------
lib/eal/arm/include/rte_pause_64.h | 26 ++++++++--------
lib/eal/arm/rte_power_intrinsics.c | 8 ++---
lib/eal/common/eal_common_trace.c | 16 +++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++----------------
lib/eal/include/generic/rte_pause.h | 38 ++++++++++++------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++--------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++----------------
lib/eal/include/rte_pflock.h | 24 ++++++++-------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_ticketlock.h | 42 +++++++++++++-------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/loongarch/include/rte_atomic.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 54 +++++++++++++++++-----------------
lib/eal/riscv/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_atomic.h | 8 ++---
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
21 files changed, 237 insertions(+), 229 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..cc25970 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+rte_mcslock_t * __rte_atomic p_ml;
+rte_mcslock_t * __rte_atomic p_ml_try;
+rte_mcslock_t * __rte_atomic p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_32.h b/lib/eal/arm/include/rte_atomic_32.h
index c00ab78..62fc337 100644
--- a/lib/eal/arm/include/rte_atomic_32.h
+++ b/lib/eal/arm/include/rte_atomic_32.h
@@ -34,9 +34,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..75d8ba6 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------ 128 bit atomic operations -------------------------*/
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
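The 128-bit CAS above validates a success order and derives the load/store orders from it. A sketch of the same success/failure order pairing, shown on a plain 64-bit atomic with C11 `stdatomic.h` (the `enable_stdatomic=true` path) for portability; `try_swap` is a hypothetical helper:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Strong CAS with an explicit success/failure order pair. The failure
 * order may not be release or acq_rel, matching the constraint the
 * 128-bit compare-exchange above documents for its 'failure' argument. */
static int try_swap(_Atomic uint64_t *dst, uint64_t *expected, uint64_t desired)
{
	return atomic_compare_exchange_strong_explicit(dst, expected, desired,
	    memory_order_acquire, memory_order_relaxed);
}
```

On failure, C11 (like the LDXP/STXP fallback above) writes the observed value back into `*expected`, so the caller can retry without a separate load.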
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
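The trace code above toggles an enable bit with release-ordered fetch-or/fetch-and and inspects the previous value. A sketch of that pattern with C11 `stdatomic.h`; the mask value and function names here are illustrative, not DPDK's internal constants:

```c
#include <stdatomic.h>
#include <stdint.h>

#define ENABLE_MASK (UINT64_C(1) << 63)	/* illustrative bit, not DPDK's */

/* Set the enable bit; returns 1 only if this call flipped it on. */
static int trace_enable(_Atomic uint64_t *t)
{
	uint64_t prev = atomic_fetch_or_explicit(t, ENABLE_MASK,
	    memory_order_release);
	return (prev & ENABLE_MASK) == 0;
}

/* Clear the enable bit; returns 1 only if this call flipped it off. */
static int trace_disable(_Atomic uint64_t *t)
{
	uint64_t prev = atomic_fetch_and_explicit(t, ~ENABLE_MASK,
	    memory_order_release);
	return (prev & ENABLE_MASK) != 0;
}
```

Checking the returned previous value is what lets the real code increment or decrement the global `trace.status` counter exactly once per transition.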
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index efd29eb..f6c4b3e 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -63,7 +63,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -80,7 +80,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -100,7 +100,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -154,7 +154,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -207,7 +207,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -274,7 +274,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -288,7 +288,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -341,7 +341,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +361,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +380,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +400,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -486,7 +486,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -553,7 +553,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -567,7 +567,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -620,7 +620,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +640,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +659,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +679,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -764,7 +764,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -885,7 +885,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +904,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +962,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +986,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1117,8 +1117,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
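The add_return/sub_return conversions above all rely on fetch-add returning the *old* value, so the new value is reconstructed by re-applying the operand. A minimal sketch with C11 `stdatomic.h`; the helper names are illustrative:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Same shape as rte_atomic16_add_return above: fetch_add returns the
 * value before the addition, so adding inc again yields the new value. */
static int16_t add_return(_Atomic int16_t *cnt, int16_t inc)
{
	return atomic_fetch_add_explicit(cnt, inc, memory_order_seq_cst) + inc;
}

/* Same shape as rte_atomic16_dec_and_test: true if the decrement
 * brought the counter to zero. */
static int dec_and_test(_Atomic int16_t *cnt)
{
	return atomic_fetch_sub_explicit(cnt, 1, memory_order_seq_cst) - 1 == 0;
}
```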
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index bebfa95..c816e7d 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -36,13 +36,13 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -54,13 +54,13 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -72,42 +72,42 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -125,16 +125,16 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
* C++11 memory orders with the same names, see the C++11 standard or
* the GCC wiki on atomic synchronization for detailed definition.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
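The generic wait-until-equal helpers above reduce to a load of the requested order in a spin loop. A single-threaded sketch of the 32-bit variant with C11 `stdatomic.h`; `rte_pause()` is elided since it is arch-specific:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Spin until *addr equals expected, loading with the caller's order.
 * Only acquire and relaxed are permitted, as asserted in the patch. */
static void wait_until_equal_32(_Atomic uint32_t *addr, uint32_t expected,
    memory_order mo)
{
	assert(mo == memory_order_acquire || mo == memory_order_relaxed);
	while (atomic_load_explicit(addr, mo) != expected)
		;	/* rte_pause() would go here */
}
```

In real use another thread stores the expected value; here the demo only exercises the already-equal fast path.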
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 24ebec6..176775f 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -58,7 +58,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ int32_t __rte_atomic cnt;
} rte_rwlock_t;
/**
@@ -93,21 +93,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -128,20 +128,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -159,7 +159,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -179,10 +179,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -202,22 +202,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -234,7 +237,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -248,7 +251,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index e18f0cd..274616a 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -29,7 +29,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -66,10 +66,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -90,7 +90,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -113,9 +113,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -129,7 +128,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
/**
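The spinlock conversion above boils down to a strong CAS from 0 to 1 with acquire on success and a release store to unlock. A sketch with C11 `stdatomic.h`; the helper names are illustrative:

```c
#include <stdatomic.h>

/* Trylock: strong CAS 0 -> 1, acquire on success, relaxed on failure,
 * mirroring rte_spinlock_trylock after the conversion above. */
static int spin_trylock(_Atomic int *locked)
{
	int exp = 0;
	return atomic_compare_exchange_strong_explicit(locked, &exp, 1,
	    memory_order_acquire, memory_order_relaxed);
}

/* Unlock: release store of 0, mirroring rte_spinlock_unlock. */
static void spin_unlock(_Atomic int *locked)
{
	atomic_store_explicit(locked, 0, memory_order_release);
}
```

The strong CAS (no spurious failure) is what lets trylock report a definitive answer in one attempt, which is why the patch picks the `_strong_` variant there while the rwlock paths can use `_weak_` inside retry loops.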
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 18e63eb..229c8e2 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -33,8 +33,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ struct rte_mcslock * __rte_atomic next;
+ int __rte_atomic locked; /* 1 if the queue locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -49,13 +49,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -63,7 +63,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -77,19 +77,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -101,34 +101,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t * __rte_atomic me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ uintptr_t __rte_atomic *next;
+ next = (uintptr_t __rte_atomic *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -142,10 +142,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -156,8 +156,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -169,9 +169,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(rte_mcslock_t * __rte_atomic msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index 790be71..a2375b3 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -41,8 +41,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ uint16_t __rte_atomic in;
+ uint16_t __rte_atomic out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -117,14 +117,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -140,7 +140,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -161,8 +161,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -173,10 +174,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -193,10 +195,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index 098af26..a658178 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -32,7 +32,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ uint32_t __rte_atomic sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -106,11 +106,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -161,9 +161,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -205,12 +205,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -237,7 +237,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index e22d119..d816650 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -30,10 +30,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ uint32_t __rte_atomic tickets;
struct {
- uint16_t current;
- uint16_t next;
+ uint16_t __rte_atomic current;
+ uint16_t __rte_atomic next;
} s;
} rte_ticketlock_t;
@@ -51,7 +51,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -63,8 +63,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -76,8 +77,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -92,12 +93,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -116,7 +118,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -127,7 +129,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ int __rte_atomic user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -147,7 +149,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -162,9 +164,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -179,8 +181,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -198,10 +200,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index d587591..e682109 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -33,7 +33,7 @@
#include <rte_stdatomic.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef uint64_t __rte_atomic rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -359,7 +359,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
index 3c82845..0510b8f 100644
--- a/lib/eal/loongarch/include/rte_atomic.h
+++ b/lib/eal/loongarch/include/rte_atomic.h
@@ -35,9 +35,9 @@
#define rte_io_rmb() rte_mb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..7382412 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/riscv/include/rte_atomic.h b/lib/eal/riscv/include/rte_atomic.h
index 4b4633c..2603bc9 100644
--- a/lib/eal/riscv/include/rte_atomic.h
+++ b/lib/eal/riscv/include/rte_atomic.h
@@ -40,9 +40,9 @@
#define rte_io_rmb() asm volatile("fence ir, ir" : : : "memory")
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..3b3a9a4 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,17 +82,17 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __rte_atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..c76218a 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile int __rte_atomic *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v2 3/6] eal: add rte atomic qualifier with casts
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 8:05 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (2 subsequent siblings)
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the rte optional atomics inline
functions to avoid cascading the requirement to pass __rte_atomic qualified
arguments.
Warning: this is implementation dependent and is done temporarily to avoid
having to convert more of the libraries and tests in DPDK in the initial
series that introduces the API. The casts assume the ABI of the qualified
and unqualified types is ``the same''; the risk of that assumption being
wrong may only be realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index f6c4b3e..4f954e0 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int16_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile int32_t __rte_atomic *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile int64_t __rte_atomic *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile int64_t __rte_atomic *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index c816e7d..c261689 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -87,7 +87,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint16_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -97,7 +98,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint32_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -107,7 +109,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..6c192f0 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile uint64_t __rte_atomic *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t __rte_atomic *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
* [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-11 17:32 ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 8:07 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 5/6] bpf: " Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..ffbdae5 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile int64_t __rte_atomic bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
* [PATCH v2 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-11 17:32 ` [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 8:11 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..b300447 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ uint32_t __rte_atomic use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((uint32_t __rte_atomic *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1

* [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-11 17:32 ` [PATCH v2 5/6] bpf: " Tyler Retzlaff
@ 2023-08-11 17:32 ` Tyler Retzlaff
2023-08-14 8:12 ` Morten Brørup
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-11 17:32 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins. DPDK now requires
the use of rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
devtools/checkpatches.sh | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..b15c3f7 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -111,11 +111,11 @@ check_forbidden_additions() { # <patch>
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
+ # refrain from using compiler __atomic_xxx builtins
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
+ -v MESSAGE='Using __atomic_xxx builtins' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
--
1.8.3.1
* RE: [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 15:56 ` Tyler Retzlaff
@ 2023-08-14 6:37 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 6:37 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 17.57
>
> On Fri, Aug 11, 2023 at 11:51:17AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Friday, 11 August 2023 03.32
> > >
> > > Refrain from using compiler __atomic_xxx builtins DPDK now requires
> > > the use of rte_atomic_<op>_explicit macros when operating on DPDK
> > > atomic variables.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > Acked-by: Morten Brørup <mb@smartsharesystems.com>
> >
> > The Acked-by should have been:
> > Suggested-by: Morten Brørup <mb@smartsharesystems.com>
>
> ooh, did i make a mistake? i was carrying the ack from my abandoned
> series (or i thought you had acked this patch on that series sorry).
>
> i'll change it to suggested-by.
>
> thanks!
No problem. Both tags mean that I approve of the concept anyway.
Minor mistakes are bound to come with big sets like this. Better in the comments than in the code. :-)
>
> >
> > > ---
> > > devtools/checkpatches.sh | 8 ++++++++
> > > 1 file changed, 8 insertions(+)
> > >
> > > diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
> > > index 43f5e36..a32f02e 100755
> > > --- a/devtools/checkpatches.sh
> > > +++ b/devtools/checkpatches.sh
> > > @@ -102,6 +102,14 @@ check_forbidden_additions() { # <patch>
> > > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk
> > > \
> > > "$1" || res=1
> > >
> > > + # refrain from using compiler __atomic_xxx builtins
> > > + awk -v FOLDERS="lib drivers app examples" \
> > > + -v EXPRESSIONS="__atomic_.*\\\(" \
> >
> > This expression is a superset of other expressions in checkpatches (search
> for "__atomic" in the checkpatches, and you'll find them). Perhaps they can be
> removed?
>
> yes, seems like a good idea.
>
> v2
>
> >
> > > + -v RET_ON_FAIL=1 \
> > > + -v MESSAGE='Using __atomic_xxx builtins' \
> > > + -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk
> > > \
> > > + "$1" || res=1
> > > +
> > > # refrain from using compiler __atomic_thread_fence()
> > > # It should be avoided on x86 for SMP case.
> > > awk -v FOLDERS="lib drivers app examples" \
> > > --
> > > 1.8.3.1
> >
* RE: [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 17:32 ` [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-14 7:06 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 7:06 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Provide API for atomic operations in the rte namespace that may
> optionally be configured to use C11 atomics with meson
> option enable_stdatomics=true
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH v2 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-11 17:32 ` [PATCH v2 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-14 8:00 ` Morten Brørup
2023-08-14 17:47 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 8:00 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Adapt the EAL public headers to use rte optional atomics API instead of
> directly using and exposing toolchain specific atomic builtin intrinsics.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
[...]
> --- a/app/test/test_mcslock.c
> +++ b/app/test/test_mcslock.c
> @@ -36,9 +36,9 @@
> * lock multiple times.
> */
>
> -rte_mcslock_t *p_ml;
> -rte_mcslock_t *p_ml_try;
> -rte_mcslock_t *p_ml_perf;
> +rte_mcslock_t * __rte_atomic p_ml;
> +rte_mcslock_t * __rte_atomic p_ml_try;
> +rte_mcslock_t * __rte_atomic p_ml_perf;
Although this looks weird, it is the pointers themselves, not the structures, that are used atomically. So it is correct.
> diff --git a/lib/eal/include/generic/rte_pause.h
> b/lib/eal/include/generic/rte_pause.h
> index bebfa95..c816e7d 100644
> --- a/lib/eal/include/generic/rte_pause.h
> +++ b/lib/eal/include/generic/rte_pause.h
> @@ -36,13 +36,13 @@
> * A 16-bit expected value to be in the memory location.
> * @param memorder
> * Two different memory orders that can be specified:
> - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
> * C++11 memory orders with the same names, see the C++11 standard or
> * the GCC wiki on atomic synchronization for detailed definition.
Delete the last part of the description, starting at "These map to...".
> */
> static __rte_always_inline void
> rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> - int memorder);
> + rte_memory_order memorder);
>
> /**
> * Wait for *addr to be updated with a 32-bit expected value, with a relaxed
> @@ -54,13 +54,13 @@
> * A 32-bit expected value to be in the memory location.
> * @param memorder
> * Two different memory orders that can be specified:
> - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
> * C++11 memory orders with the same names, see the C++11 standard or
> * the GCC wiki on atomic synchronization for detailed definition.
Delete the last part of the description, starting at "These map to...".
> */
> static __rte_always_inline void
> rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> - int memorder);
> + rte_memory_order memorder);
>
> /**
> * Wait for *addr to be updated with a 64-bit expected value, with a relaxed
> @@ -72,42 +72,42 @@
> * A 64-bit expected value to be in the memory location.
> * @param memorder
> * Two different memory orders that can be specified:
> - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
> * C++11 memory orders with the same names, see the C++11 standard or
> * the GCC wiki on atomic synchronization for detailed definition.
Delete the last part of the description, starting at "These map to...".
> */
> static __rte_always_inline void
> rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> - int memorder);
> + rte_memory_order memorder);
[...]
> @@ -125,16 +125,16 @@
> * An expected value to be in the memory location.
> * @param memorder
> * Two different memory orders that can be specified:
> - * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + * rte_memory_order_acquire and rte_memory_order_relaxed. These map to
> * C++11 memory orders with the same names, see the C++11 standard or
> * the GCC wiki on atomic synchronization for detailed definition.
Delete the last part of the description, starting at "These map to...".
There might be more similar comments that need removal; I haven't tried searching.
> */
> #define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
[...]
> --- a/lib/eal/include/generic/rte_spinlock.h
> +++ b/lib/eal/include/generic/rte_spinlock.h
> @@ -29,7 +29,7 @@
> * The rte_spinlock_t type.
> */
> typedef struct __rte_lockable {
> - volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
> + volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1 =
> locked */
I think __rte_atomic should be before the type:
volatile __rte_atomic int locked; /**< lock status [...]
Alternatively (just mentioning it, I know we don't use this form):
volatile __rte_atomic(int) locked; /**< lock status [...]
Thinking of where you would put "const" might help.
Maybe your order is also correct, so it is a matter of preference.
The DPDK coding style guidelines don't mention where to place "const", but looking at the code, it seems to use "const unsigned int" and "const char *".
> } rte_spinlock_t;
>
> /**
[...]
> --- a/lib/eal/include/rte_mcslock.h
> +++ b/lib/eal/include/rte_mcslock.h
> @@ -33,8 +33,8 @@
> * The rte_mcslock_t type.
> */
> typedef struct rte_mcslock {
> - struct rte_mcslock *next;
> - int locked; /* 1 if the queue locked, 0 otherwise */
> + struct rte_mcslock * __rte_atomic next;
Correct, the pointer is atomic, not the struct.
> + int __rte_atomic locked; /* 1 if the queue locked, 0 otherwise */
Again, I think __rte_atomic should be before the type:
__rte_atomic int locked; /* 1 if the queue locked, 0 otherwise */
> } rte_mcslock_t;
>
[...]
> @@ -101,34 +101,34 @@
> * A pointer to the node of MCS lock passed in rte_mcslock_lock.
> */
> static inline void
> -rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
> +rte_mcslock_unlock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *
> __rte_atomic me)
> {
> /* Check if there are more nodes in the queue. */
> - if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
> + if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed)
> == NULL)) {
> /* No, last member in the queue. */
> - rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
> + rte_mcslock_t *save_me = rte_atomic_load_explicit(&me,
> rte_memory_order_relaxed);
>
> /* Release the lock by setting it to NULL */
> - if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
> - __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
> + if (likely(rte_atomic_compare_exchange_strong_explicit(msl,
> &save_me, NULL,
> + rte_memory_order_release,
> rte_memory_order_relaxed)))
> return;
>
> /* Speculative execution would be allowed to read in the
> * while-loop first. This has the potential to cause a
> * deadlock. Need a load barrier.
> */
> - __atomic_thread_fence(__ATOMIC_ACQUIRE);
> + __rte_atomic_thread_fence(rte_memory_order_acquire);
> /* More nodes added to the queue by other CPUs.
> * Wait until the next pointer is set.
> */
> - uintptr_t *next;
> - next = (uintptr_t *)&me->next;
> + uintptr_t __rte_atomic *next;
> + next = (uintptr_t __rte_atomic *)&me->next;
This way around, I think:
__rte_atomic uintptr_t *next;
next = (__rte_atomic uintptr_t *)&me->next;
[...]
> --- a/lib/eal/include/rte_pflock.h
> +++ b/lib/eal/include/rte_pflock.h
> @@ -41,8 +41,8 @@
> */
> struct rte_pflock {
> struct {
> - uint16_t in;
> - uint16_t out;
> + uint16_t __rte_atomic in;
> + uint16_t __rte_atomic out;
Again, I think __rte_atomic should be before the type:
__rte_atomic uint16_t in;
__rte_atomic uint16_t out;
> } rd, wr;
> };
[...]
> --- a/lib/eal/include/rte_seqcount.h
> +++ b/lib/eal/include/rte_seqcount.h
> @@ -32,7 +32,7 @@
> * The RTE seqcount type.
> */
> typedef struct {
> - uint32_t sn; /**< A sequence number for the protected data. */
> + uint32_t __rte_atomic sn; /**< A sequence number for the protected data.
> */
Again, I think __rte_atomic should be before the type:
__rte_atomic uint32_t sn; /**< A sequence [...]
> } rte_seqcount_t;
[...]
> --- a/lib/eal/include/rte_ticketlock.h
> +++ b/lib/eal/include/rte_ticketlock.h
> @@ -30,10 +30,10 @@
> * The rte_ticketlock_t type.
> */
> typedef union {
> - uint32_t tickets;
> + uint32_t __rte_atomic tickets;
> struct {
> - uint16_t current;
> - uint16_t next;
> + uint16_t __rte_atomic current;
> + uint16_t __rte_atomic next;
Again, I think __rte_atomic should be before the type:
__rte_atomic uint16_t current;
__rte_atomic uint16_t next;
> } s;
> } rte_ticketlock_t;
> @@ -127,7 +129,7 @@
>
> typedef struct {
> rte_ticketlock_t tl; /**< the actual ticketlock */
> - int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
> + int __rte_atomic user; /**< core id using lock, TICKET_LOCK_INVALID_ID
> for unused */
Again, I think __rte_atomic should be before the type:
__rte_atomic int user; /**< core id [...]
> unsigned int count; /**< count of time this lock has been called */
> } rte_ticketlock_recursive_t;
[...]
> --- a/lib/eal/include/rte_trace_point.h
> +++ b/lib/eal/include/rte_trace_point.h
> @@ -33,7 +33,7 @@
> #include <rte_stdatomic.h>
>
> /** The tracepoint object. */
> -typedef uint64_t rte_trace_point_t;
> +typedef uint64_t __rte_atomic rte_trace_point_t;
Again, I think __rte_atomic should be before the type:
typedef __rte_atomic uint64_t rte_trace_point_t;
[...]
At the risk of having gone "speed blind" by all the search-replaces along the way...
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH v2 3/6] eal: add rte atomic qualifier with casts
2023-08-11 17:32 ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-14 8:05 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 8:05 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Introduce __rte_atomic qualifying casts in rte_optional atomics inline
> functions to prevent cascading the need to pass __rte_atomic qualified
> arguments.
>
> Warning, this is really implementation dependent and being done
> temporarily to avoid having to convert more of the libraries and tests in
> DPDK in the initial series that introduces the API. The consequence of the
> assumption of the ABI of the types in question not being ``the same'' is
> only a risk that may be realized when enable_stdatomic=true.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++-----------
> -
> lib/eal/include/generic/rte_pause.h | 9 ++++---
> lib/eal/x86/rte_power_intrinsics.c | 7 +++---
> 3 files changed, 42 insertions(+), 22 deletions(-)
>
> diff --git a/lib/eal/include/generic/rte_atomic.h
> b/lib/eal/include/generic/rte_atomic.h
> index f6c4b3e..4f954e0 100644
> --- a/lib/eal/include/generic/rte_atomic.h
> +++ b/lib/eal/include/generic/rte_atomic.h
> @@ -274,7 +274,8 @@
> static inline void
> rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
> {
> - rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
> + rte_atomic_fetch_add_explicit((volatile int16_t __rte_atomic *)&v->cnt,
> inc,
> + rte_memory_order_seq_cst);
As mentioned in my review to the 2/6 patch, I think __rte_atomic should come before the type, like this:
(volatile __rte_atomic int16_t *)
Same with all the changes.
Otherwise good.
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-11 17:32 ` [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-14 8:07 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 8:07 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Adapt distributor for EAL optional atomics API changes
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> lib/distributor/distributor_private.h | 2 +-
> lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++---------------
> -
> 2 files changed, 23 insertions(+), 23 deletions(-)
>
> diff --git a/lib/distributor/distributor_private.h
> b/lib/distributor/distributor_private.h
> index 7101f63..ffbdae5 100644
> --- a/lib/distributor/distributor_private.h
> +++ b/lib/distributor/distributor_private.h
> @@ -52,7 +52,7 @@
> * Only 64-bits of the memory is actually used though.
> */
> union rte_distributor_buffer_single {
> - volatile int64_t bufptr64;
> + volatile int64_t __rte_atomic bufptr64;
As mentioned in my review to the 2/6 patch, I think __rte_atomic should come before the type, like this:
> + volatile __rte_atomic int64_t bufptr64;
> char pad[RTE_CACHE_LINE_SIZE*3];
> } __rte_cache_aligned;
The rest is simple search-replace; easy to review, so...
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH v2 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-11 17:32 ` [PATCH v2 5/6] bpf: " Tyler Retzlaff
@ 2023-08-14 8:11 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 8:11 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Adapt bpf for EAL optional atomics API changes
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> lib/bpf/bpf_pkt.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
> index ffd2db7..b300447 100644
> --- a/lib/bpf/bpf_pkt.c
> +++ b/lib/bpf/bpf_pkt.c
> @@ -25,7 +25,7 @@
>
> struct bpf_eth_cbi {
> /* used by both data & control path */
> - uint32_t use; /*usage counter */
> + uint32_t __rte_atomic use; /*usage counter */
As mentioned in my review to the 2/6 patch, I think __rte_atomic should come before the type, like this:
__rte_atomic uint32_t use; /*usage counter */
> const struct rte_eth_rxtx_callback *cb; /* callback handle */
> struct rte_bpf *bpf;
> struct rte_bpf_jit jit;
> @@ -110,8 +110,8 @@ struct bpf_eth_cbh {
>
> /* in use, busy wait till current RX/TX iteration is finished */
> if ((puse & BPF_ETH_CBI_INUSE) != 0) {
> - RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
> - UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
> + RTE_WAIT_UNTIL_MASKED((uint32_t __rte_atomic *)(uintptr_t)&cbi->use,
And here:
RTE_WAIT_UNTIL_MASKED((__rte_atomic uint32_t *) [...]
> + UINT32_MAX, !=, puse, rte_memory_order_relaxed);
> }
> }
>
> --
> 1.8.3.1
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-11 17:32 ` [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-14 8:12 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 8:12 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 19.32
>
> Refrain from using compiler __atomic_xxx builtins DPDK now requires
> the use of rte_atomic_<op>_explicit macros when operating on DPDK
> atomic variables.
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Suggested-by: Morten Brørup <mb@smartsharesystems.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
Acked-by: Morten Brørup <mb@smartsharesystems.com>
* RE: [PATCH 1/6] eal: provide rte stdatomics optional atomics API
2023-08-11 15:54 ` Tyler Retzlaff
@ 2023-08-14 9:04 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-14 9:04 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Friday, 11 August 2023 17.55
>
> On Fri, Aug 11, 2023 at 11:42:12AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Friday, 11 August 2023 03.32
> > >
> > > Provide API for atomic operations in the rte namespace that may
> > > optionally be configured to use C11 atomics with meson
> > > option enable_stdatomics=true
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > ---
[...]
> >
> > Move the (changed) C11 memory order type definition here:
> >
> > /* The memory order is an enumerated type in C11. */
> > #define memory_order rte_memory_order
No objections to your typedef in v2.
[...]
> > > +#ifdef __ATOMIC_SEQ_CST
> > > +_Static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
> > > + "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
> > > +#endif
> >
> > Excellent idea adding these _Static_asserts!
> >
> > Have you tested (with the toolchain you are targeting with this
> _Static_assert) that e.g. __ATOMIC_RELAXED is actually #defined, so the
> preprocessor can see it? (I guess that being a built-it, it might not be a
> #define, it might be a magic value known by the compiler only.)
>
> * llvm and gcc both expose it as a built-in #define for test builds i
> have run. worst case the assert is lost if it isn't.
I only wanted to check that we didn't always hit the "worst case" where the static_assert is lost, so thank you for the confirmation regarding GCC/clang.
> * since i have to handle non-{clang,gcc} too i still guard with ifdef
Agree.
> * i do need to switch to using assert.h static_assert macro to
> inter-operate with c++ in v2
OK. I reviewed that in v2 too.
-Morten
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v2 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-14 8:00 ` Morten Brørup
@ 2023-08-14 17:47 ` Tyler Retzlaff
2023-08-16 20:13 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-14 17:47 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Mon, Aug 14, 2023 at 10:00:49AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Friday, 11 August 2023 19.32
> >
> > Adapt the EAL public headers to use rte optional atomics API instead of
> > directly using and exposing toolchain specific atomic builtin intrinsics.
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > ---
>
> [...]
>
will fix the comments identified.
>
> [...]
>
> > --- a/lib/eal/include/generic/rte_spinlock.h
> > +++ b/lib/eal/include/generic/rte_spinlock.h
> > @@ -29,7 +29,7 @@
> > * The rte_spinlock_t type.
> > */
> > typedef struct __rte_lockable {
> > - volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
> > + volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1 =
> > locked */
>
> I think __rte_atomic should be before the type:
> volatile __rte_atomic int locked; /**< lock status [...]
> Alternatively (just mentioning it, I know we don't use this form):
> volatile __rte_atomic(int) locked; /**< lock status [...]
>
> Thinking of where you would put "const" might help.
>
> Maybe your order is also correct, so it is a matter of preference.
so for me what you suggest is the canonical convention for c, and i did
initially try to make the change with this convention, but i ran into
trouble when using the keyword in a context where it is used as a type
specifier and the type is incomplete.
the rte_mcslock is a good example for illustration.
// original struct
typedef struct rte_mcslock {
	struct rte_mcslock *next;
	...
} rte_mcslock_t;
it simply doesn't work / won't compile (at least with clang), which is
what drove me to use the less-often-used syntax.
typedef struct rte_mcslock {
	_Atomic struct rte_mcslock *next;
	...
} rte_mcslock_t;
In file included from ../app/test/test_mcslock.c:19:
..\lib\eal\include\rte_mcslock.h:36:2: error: _Atomic cannot be applied
to incomplete type 'struct rte_mcslock'
_Atomic struct rte_mcslock *next;
^
..\lib\eal\include\rte_mcslock.h:35:16: note: definition of 'struct
rte_mcslock' is not complete until the closing '}'
typedef struct rte_mcslock {
^
1 error generated.
so i ended up choosing to use a single syntax by convention consistently
rather than using one for the exceptional case and one everywhere else.
i think (based on our other thread of discussion) i would recommend we
adopt and require the use of the _Atomic(T) macro to disambiguate; it
also has the advantage of not being churned later when we can do c++23.
// using macro
typedef struct rte_mcslock {
	_Atomic(struct rte_mcslock *) next;
	...
} rte_mcslock_t;
this makes it much easier to know at a glance whether the specified type
is the T or the T *; similarly, in parameter lists it becomes clearer too.
e.g. with
void foo(int *v)
whether it should be void foo(_Atomic(int) *v) or void foo(_Atomic(int *) v)
becomes much clearer without having to do mental gymnastics.
so i propose we retain
#define __rte_atomic _Atomic
and allow it to be used in contexts where we need a type-qualifier.
note:
most of the cases where _Atomic is used as a type-qualifier are a
red flag that we are sensitive to an implementation detail
of the compiler. in time i hope most of these will go away as we
remove the deprecated rte_atomic_xx apis.
but also introduce the following macro
#define RTE_ATOMIC(type) _Atomic(type)
and require it to be used in the contexts where we use it as a type-specifier.
if folks agree with this please reply back positively and i'll update
the series. feel free to propose alternate names or whatever, but sooner
than later so i don't have to churn things too much :)
thanks!
>
> The DPDK coding style guidelines don't mention where to place "const", but looking at the code, it seems to use "const unsigned int" and "const char *".
we probably should document it as a convention and most likely we should
adopt what is already in use more commonly.
>
> > } rte_spinlock_t;
> >
> > /**
>
> [...]
>
> > --- a/lib/eal/include/rte_mcslock.h
> > +++ b/lib/eal/include/rte_mcslock.h
> > @@ -33,8 +33,8 @@
> > * The rte_mcslock_t type.
> > */
> > typedef struct rte_mcslock {
> > - struct rte_mcslock *next;
> > - int locked; /* 1 if the queue locked, 0 otherwise */
> > + struct rte_mcslock * __rte_atomic next;
>
> Correct, the pointer is atomic, not the struct.
>
> > + int __rte_atomic locked; /* 1 if the queue locked, 0 otherwise */
>
> Again, I think __rte_atomic should be before the type:
> __rte_atomic int locked; /* 1 if the queue locked, 0 otherwise */
>
> > } rte_mcslock_t;
> >
>
> [...]
>
> > @@ -101,34 +101,34 @@
> > * A pointer to the node of MCS lock passed in rte_mcslock_lock.
> > */
> > static inline void
> > -rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
> > +rte_mcslock_unlock(rte_mcslock_t * __rte_atomic *msl, rte_mcslock_t *
> > __rte_atomic me)
> > {
> > /* Check if there are more nodes in the queue. */
> > - if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
> > + if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed)
> > == NULL)) {
> > /* No, last member in the queue. */
> > - rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
> > + rte_mcslock_t *save_me = rte_atomic_load_explicit(&me,
> > rte_memory_order_relaxed);
> >
> > /* Release the lock by setting it to NULL */
> > - if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
> > - __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
> > + if (likely(rte_atomic_compare_exchange_strong_explicit(msl,
> > &save_me, NULL,
> > + rte_memory_order_release,
> > rte_memory_order_relaxed)))
> > return;
> >
> > /* Speculative execution would be allowed to read in the
> > * while-loop first. This has the potential to cause a
> > * deadlock. Need a load barrier.
> > */
> > - __atomic_thread_fence(__ATOMIC_ACQUIRE);
> > + __rte_atomic_thread_fence(rte_memory_order_acquire);
> > /* More nodes added to the queue by other CPUs.
> > * Wait until the next pointer is set.
> > */
> > - uintptr_t *next;
> > - next = (uintptr_t *)&me->next;
> > + uintptr_t __rte_atomic *next;
> > + next = (uintptr_t __rte_atomic *)&me->next;
>
> This way around, I think:
> __rte_atomic uintptr_t *next;
> next = (__rte_atomic uintptr_t *)&me->next;
>
> [...]
>
> > --- a/lib/eal/include/rte_pflock.h
> > +++ b/lib/eal/include/rte_pflock.h
> > @@ -41,8 +41,8 @@
> > */
> > struct rte_pflock {
> > struct {
> > - uint16_t in;
> > - uint16_t out;
> > + uint16_t __rte_atomic in;
> > + uint16_t __rte_atomic out;
>
> Again, I think __rte_atomic should be before the type:
> __rte_atomic uint16_t in;
> __rte_atomic uint16_t out;
>
> > } rd, wr;
> > };
>
> [...]
>
> > --- a/lib/eal/include/rte_seqcount.h
> > +++ b/lib/eal/include/rte_seqcount.h
> > @@ -32,7 +32,7 @@
> > * The RTE seqcount type.
> > */
> > typedef struct {
> > - uint32_t sn; /**< A sequence number for the protected data. */
> > + uint32_t __rte_atomic sn; /**< A sequence number for the protected data.
> > */
>
> Again, I think __rte_atomic should be before the type:
> __rte_atomic uint32_t sn; /**< A sequence [...]
>
> > } rte_seqcount_t;
>
> [...]
>
> > --- a/lib/eal/include/rte_ticketlock.h
> > +++ b/lib/eal/include/rte_ticketlock.h
> > @@ -30,10 +30,10 @@
> > * The rte_ticketlock_t type.
> > */
> > typedef union {
> > - uint32_t tickets;
> > + uint32_t __rte_atomic tickets;
> > struct {
> > - uint16_t current;
> > - uint16_t next;
> > + uint16_t __rte_atomic current;
> > + uint16_t __rte_atomic next;
>
> Again, I think __rte_atomic should be before the type:
> __rte_atomic uint16_t current;
> __rte_atomic uint16_t next;
>
> > } s;
> > } rte_ticketlock_t;
>
>
>
> > @@ -127,7 +129,7 @@
> >
> > typedef struct {
> > rte_ticketlock_t tl; /**< the actual ticketlock */
> > - int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
> > + int __rte_atomic user; /**< core id using lock, TICKET_LOCK_INVALID_ID
> > for unused */
>
> Again, I think __rte_atomic should be before the type:
> __rte_atomic int user; /**< core id [...]
>
> > unsigned int count; /**< count of time this lock has been called */
> > } rte_ticketlock_recursive_t;
>
> [...]
>
> > --- a/lib/eal/include/rte_trace_point.h
> > +++ b/lib/eal/include/rte_trace_point.h
> > @@ -33,7 +33,7 @@
> > #include <rte_stdatomic.h>
> >
> > /** The tracepoint object. */
> > -typedef uint64_t rte_trace_point_t;
> > +typedef uint64_t __rte_atomic rte_trace_point_t;
>
> Again, I think __rte_atomic should be before the type:
> typedef __rte_atomic uint64_t rte_trace_point_t;
>
> [...]
>
> At the risk of having gone "speed blind" by all the search-replaces along the way...
>
> Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
>
* [PATCH v3 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (6 preceding siblings ...)
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (5 more replies)
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 subsequent siblings)
10 siblings, 6 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions, prefixed in the rte namespace, that allow
the optional use of C11 stdatomic.h via the meson option enable_stdatomics=true.
For targets where enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all the changes needed to use stdatomics across
the DPDK tree; it only introduces the minimum to allow the option to be used,
which is a prerequisite for a clean CI run (probably using clang) with
enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in the subsequent series.
* The 'eal: add rte atomic qualifier with casts' patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading changes / having to convert too much
in the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will
be introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this series since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use a new macro, RTE_ATOMIC(type), in contexts
where _Atomic is used as a type specifier to declare variables. The
macro makes it clearer what atomic type is being specified:
with _Atomic(T *) vs _Atomic(T) it is easier to see that
the former is an atomic pointer type and the latter is an atomic
type. It also has the benefit of (in the future) being syntactically
interoperable with C++23.
note: Morten, i have retained your 'reviewed-by' tags; if you disagree
given the changes in the above version please indicate as such, but
i believe the changes are in the spirit of the feedback you provided
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
VS enable_stdatomic=false instead of a single typedef to int
note: as a slight tweak to the reviewer's feedback, i've chosen to use a
typedef for both enable_stdatomic={true,false} (it just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h in the other places it is consumed
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of the existing per-arch rte_atomic_thread_fence inline
functions, to maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++---
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 ++++++-----
lib/eal/include/generic/rte_pause.h | 50 ++++-----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++-----
lib/eal/include/rte_pflock.h | 25 +++--
lib/eal/include/rte_seqcount.h | 19 ++--
lib/eal/include/rte_stdatomic.h | 184 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 ++++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 483 insertions(+), 266 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
* [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 20:55 ` Morten Brørup
2023-08-16 19:19 ` [PATCH v3 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (4 subsequent siblings)
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with meson
option enable_stdatomics=true
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
config/meson.build | 1 +
lib/eal/include/generic/rte_atomic.h | 1 +
lib/eal/include/generic/rte_pause.h | 1 +
lib/eal/include/generic/rte_rwlock.h | 1 +
lib/eal/include/generic/rte_spinlock.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 1 +
lib/eal/include/rte_pflock.h | 1 +
lib/eal/include/rte_seqcount.h | 1 +
lib/eal/include/rte_stdatomic.h | 182 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 1 +
lib/eal/include/rte_trace_point.h | 1 +
meson_options.txt | 2 +
13 files changed, 195 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 82b9bfc..4a235ba 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
+#include <rte_stdatomic.h>
#ifdef __DOXYGEN__
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..bebfa95 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -16,6 +16,7 @@
#include <assert.h>
#include <rte_common.h>
#include <rte_atomic.h>
+#include <rte_stdatomic.h>
/**
* Pause CPU execution for a short while
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..24ebec6 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -32,6 +32,7 @@
#include <rte_common.h>
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_rwlock_t type.
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e18f0cd 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -23,6 +23,7 @@
#endif
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_spinlock_t type.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index a0463ef..e94b056 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -42,6 +42,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..18e63eb 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -27,6 +27,7 @@
#include <rte_common.h>
#include <rte_pause.h>
#include <rte_branch_prediction.h>
+#include <rte_stdatomic.h>
/**
* The rte_mcslock_t type.
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..790be71 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -34,6 +34,7 @@
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_pflock_t type.
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..098af26 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -26,6 +26,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
+#include <rte_stdatomic.h>
/**
* The RTE seqcount type.
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..f03be9b
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomics=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+#define __rte_atomic _Atomic
+
+/* The memory order is an enumerated type in C11. */
+typedef memory_order rte_memory_order;
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ atomic_thread_fence(memorder)
+
+#else
+
+#define __rte_atomic
+
+/* The memory order is an integer type in GCC built-ins,
+ * not an enumerated type like in C11.
+ */
+typedef int rte_memory_order;
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ __atomic_thread_fence(memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..e22d119 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -24,6 +24,7 @@
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_ticketlock_t type.
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..d587591 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -30,6 +30,7 @@
#include <rte_per_lcore.h>
#include <rte_string_fns.h>
#include <rte_uuid.h>
+#include <rte_stdatomic.h>
/** The tracepoint object. */
typedef uint64_t rte_trace_point_t;
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..bb22bba 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,8 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+ 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
--
1.8.3.1
* [PATCH v3 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (3 subsequent siblings)
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use rte optional atomics API instead of
directly using and exposing toolchain specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_32.h | 4 +--
lib/eal/arm/include/rte_atomic_64.h | 36 +++++++++++------------
lib/eal/arm/include/rte_pause_64.h | 26 ++++++++--------
lib/eal/arm/rte_power_intrinsics.c | 8 ++---
lib/eal/common/eal_common_trace.c | 16 +++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++----------------
lib/eal/include/generic/rte_pause.h | 46 ++++++++++++-----------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++--------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++----------------
lib/eal/include/rte_pflock.h | 24 ++++++++-------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_stdatomic.h | 2 ++
lib/eal/include/rte_ticketlock.h | 42 +++++++++++++-------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/loongarch/include/rte_atomic.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 54 +++++++++++++++++-----------------
lib/eal/riscv/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_atomic.h | 8 ++---
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
22 files changed, 239 insertions(+), 237 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..242c242 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+RTE_ATOMIC(rte_mcslock_t *) p_ml;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_try;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_32.h b/lib/eal/arm/include/rte_atomic_32.h
index c00ab78..62fc337 100644
--- a/lib/eal/arm/include/rte_atomic_32.h
+++ b/lib/eal/arm/include/rte_atomic_32.h
@@ -34,9 +34,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..75d8ba6 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------ 128 bit atomic operations -------------------------*/
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
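[Editor's note: the success/failure order pair that the RTE_ASSERT above validates has a direct C11 analogue. A hedged sketch (demo_* names are illustrative): the failure order may not be release or acq_rel, nor stronger than the success order, which is why the 128-bit CAS only derives its ldx/stx orders from `success`:]

```c
#include <stdatomic.h>
#include <stdint.h>

/* C11 form of a compare-exchange with distinct success/failure orders.
 * On failure the current value is written back into *exp, mirroring the
 * "old" store-back in the LL/SC path above. */
static inline int
demo_cas64(_Atomic uint64_t *dst, uint64_t *exp, uint64_t desired)
{
	return atomic_compare_exchange_strong_explicit(dst, exp, desired,
			memory_order_acquire, memory_order_relaxed);
}
```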
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
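[Editor's note: stripped of the arm64 WFE machinery, the wait-until-equal primitives above have this generic shape under C11 atomics. A sketch with illustrative demo_* names — only acquire or relaxed orders are accepted, matching the RTE_BUILD_BUG_ON checks:]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Generic (non-WFE) shape of rte_wait_until_equal_32 under C11 atomics. */
static inline void
demo_wait_until_equal_32(_Atomic uint32_t *addr, uint32_t expected,
		memory_order memorder)
{
	assert(memorder == memory_order_acquire ||
			memorder == memory_order_relaxed);
	while (atomic_load_explicit(addr, memorder) != expected)
		; /* rte_pause() / WFE would go here */
}
```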
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
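[Editor's note: the trace conversions above all follow one pattern — toggling a flag bit with fetch_and/fetch_or under release ordering. A standalone C11 sketch of that pattern (DEMO_* and demo_* names are illustrative, not the trace implementation):]

```c
#include <stdatomic.h>
#include <stdint.h>

#define DEMO_FIELD_ENABLE_DISCARD (UINT64_C(1) << 62)

static _Atomic uint64_t demo_trace_point;

/* Shape of trace_mode_set() after conversion: clear the discard bit for
 * overwrite mode, set it otherwise, both with release ordering. */
static void demo_mode_set(int overwrite)
{
	if (overwrite)
		atomic_fetch_and_explicit(&demo_trace_point,
				~DEMO_FIELD_ENABLE_DISCARD,
				memory_order_release);
	else
		atomic_fetch_or_explicit(&demo_trace_point,
				DEMO_FIELD_ENABLE_DISCARD,
				memory_order_release);
}
```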
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 4a235ba..5940e7e 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -63,7 +63,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -80,7 +80,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -100,7 +100,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -154,7 +154,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -207,7 +207,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -274,7 +274,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -288,7 +288,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -341,7 +341,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +361,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +380,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +400,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -486,7 +486,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -553,7 +553,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -567,7 +567,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -620,7 +620,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +640,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +659,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +679,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -764,7 +764,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -885,7 +885,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +904,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +962,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +986,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1115,8 +1115,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
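[Editor's note: the recurring `fetch_add(...) + inc` / `fetch_sub(...) - dec` idiom in the conversions above exists because C11 fetch-ops return the value *before* the operation, while the legacy rte_atomicNN_add_return API returns the value after. A minimal C11 sketch of the idiom (demo_* name is illustrative):]

```c
#include <stdatomic.h>
#include <stdint.h>

/* The converted rte_atomic16_add_return(): atomic_fetch_add returns the
 * prior value, so the new value is reconstructed by re-applying inc. */
static inline int16_t
demo_atomic16_add_return(_Atomic int16_t *cnt, int16_t inc)
{
	return atomic_fetch_add_explicit(cnt, inc,
			memory_order_seq_cst) + inc;
}
```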
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index bebfa95..256309e 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -36,13 +36,11 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -54,13 +52,11 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -72,42 +68,40 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -125,16 +119,14 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
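[Editor's note: the converted RTE_WAIT_UNTIL_MASKED expands to a load-mask-compare spin. A self-contained sketch of that expansion in C11 (DEMO_* name is illustrative; rte_pause() is elided):]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Expression shape of RTE_WAIT_UNTIL_MASKED after conversion: load with
 * the caller's (acquire or relaxed) order, mask, then compare with the
 * caller-supplied condition token. */
#define DEMO_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
	while (!((atomic_load_explicit((addr), (memorder)) & (mask)) cond \
			(expected)))                                      \
		;                                                         \
} while (0)
```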
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 24ebec6..c788705 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -58,7 +58,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ RTE_ATOMIC(int32_t) cnt;
} rte_rwlock_t;
/**
@@ -93,21 +93,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -128,20 +128,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -159,7 +159,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -179,10 +179,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -202,22 +202,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -234,7 +237,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -248,7 +251,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
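[Editor's note: the read_trylock fast path converted above is an optimistic acquire-ordered fetch-add followed by a release-ordered backout if a writer raced in. A standalone C11 sketch of the same protocol (DEMO_*/demo_* names are illustrative, not the rwlock implementation):]

```c
#include <errno.h>
#include <stdatomic.h>
#include <stdint.h>

#define DEMO_RWLOCK_WAIT  0x1 /* writer pending */
#define DEMO_RWLOCK_WRITE 0x2 /* writer present */
#define DEMO_RWLOCK_MASK  (DEMO_RWLOCK_WAIT | DEMO_RWLOCK_WRITE)
#define DEMO_RWLOCK_READ  0x4 /* reader increment */

static inline int
demo_rwlock_read_trylock(_Atomic int32_t *cnt)
{
	int32_t x;

	x = atomic_load_explicit(cnt, memory_order_relaxed);
	/* Fail fast if a writer holds or is waiting for the lock. */
	if (x & DEMO_RWLOCK_MASK)
		return -EBUSY;
	/* Optimistically take a reader slot with acquire semantics. */
	x = atomic_fetch_add_explicit(cnt, DEMO_RWLOCK_READ,
			memory_order_acquire) + DEMO_RWLOCK_READ;
	/* Back out if a writer raced in. */
	if (x & DEMO_RWLOCK_MASK) {
		atomic_fetch_sub_explicit(cnt, DEMO_RWLOCK_READ,
				memory_order_release);
		return -EBUSY;
	}
	return 0;
}
```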
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index e18f0cd..23fb048 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -29,7 +29,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile RTE_ATOMIC(int) locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -66,10 +66,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -90,7 +90,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -113,9 +113,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -129,7 +128,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
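[Editor's note: the converted spinlock primitives above reduce to a strong compare-exchange with acquire-on-success/relaxed-on-failure, and a release store to unlock. A minimal C11 sketch (demo_* names are illustrative):]

```c
#include <stdatomic.h>

/* trylock: CAS 0 -> 1 with acquire on success, relaxed on failure. */
static inline int demo_spin_trylock(_Atomic int *locked)
{
	int exp = 0;
	return atomic_compare_exchange_strong_explicit(locked, &exp, 1,
			memory_order_acquire, memory_order_relaxed);
}

/* unlock: plain release store, matching the converted
 * rte_spinlock_unlock(). */
static inline void demo_spin_unlock(_Atomic int *locked)
{
	atomic_store_explicit(locked, 0, memory_order_release);
}
```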
/**
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 18e63eb..8c75377 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -33,8 +33,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ RTE_ATOMIC(struct rte_mcslock *) next;
+	RTE_ATOMIC(int) locked; /* 1 if the queue is locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -49,13 +49,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -63,7 +63,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -77,19 +77,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -101,34 +101,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(RTE_ATOMIC(rte_mcslock_t *) *msl, RTE_ATOMIC(rte_mcslock_t *) me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ RTE_ATOMIC(uintptr_t) *next;
+ next = (__rte_atomic uintptr_t *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -142,10 +142,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -156,8 +156,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -169,9 +169,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(RTE_ATOMIC(rte_mcslock_t *) msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
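[Editor's note: the heart of the MCS lock conversion above is the single acq_rel exchange on the tail pointer — it both acquires the lock when the queue was empty and publishes the node's initialization otherwise. A reduced C11 sketch of just that step (demo_* names are illustrative, not the rte_mcslock implementation):]

```c
#include <stdatomic.h>
#include <stddef.h>

struct demo_node {
	struct demo_node *next;
};

/* MCS entry step: swap ourselves in as the new tail with acq_rel order.
 * A NULL return means the queue was empty and the lock is ours;
 * otherwise the return value is the predecessor to chain behind. */
static inline struct demo_node *
demo_mcs_enqueue(struct demo_node *_Atomic *tail, struct demo_node *me)
{
	me->next = NULL;
	return atomic_exchange_explicit(tail, me, memory_order_acq_rel);
}
```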
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index 790be71..79feeea 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -41,8 +41,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ RTE_ATOMIC(uint16_t) in;
+ RTE_ATOMIC(uint16_t) out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -117,14 +117,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -140,7 +140,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -161,8 +161,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -173,10 +174,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -193,10 +195,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index 098af26..4f9cefb 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -32,7 +32,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ RTE_ATOMIC(uint32_t) sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -106,11 +106,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -161,9 +161,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -205,12 +205,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -237,7 +237,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
index f03be9b..3934190 100644
--- a/lib/eal/include/rte_stdatomic.h
+++ b/lib/eal/include/rte_stdatomic.h
@@ -18,6 +18,7 @@
#include <stdatomic.h>
+#define RTE_ATOMIC(type) _Atomic(type)
#define __rte_atomic _Atomic
/* The memory order is an enumerated type in C11. */
@@ -110,6 +111,7 @@
#else
+#define RTE_ATOMIC(type) type
#define __rte_atomic
/* The memory order is an integer type in GCC built-ins,
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index e22d119..7d39bca 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -30,10 +30,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ RTE_ATOMIC(uint32_t) tickets;
struct {
- uint16_t current;
- uint16_t next;
+ RTE_ATOMIC(uint16_t) current;
+ RTE_ATOMIC(uint16_t) next;
} s;
} rte_ticketlock_t;
@@ -51,7 +51,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -63,8 +63,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -76,8 +77,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -92,12 +93,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -116,7 +118,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -127,7 +129,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ RTE_ATOMIC(int) user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -147,7 +149,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -162,9 +164,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -179,8 +181,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -198,10 +200,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index d587591..b403edd 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -33,7 +33,7 @@
#include <rte_stdatomic.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef RTE_ATOMIC(uint64_t) rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -359,7 +359,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
index 3c82845..0510b8f 100644
--- a/lib/eal/loongarch/include/rte_atomic.h
+++ b/lib/eal/loongarch/include/rte_atomic.h
@@ -35,9 +35,9 @@
#define rte_io_rmb() rte_mb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..7382412 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/riscv/include/rte_atomic.h b/lib/eal/riscv/include/rte_atomic.h
index 4b4633c..2603bc9 100644
--- a/lib/eal/riscv/include/rte_atomic.h
+++ b/lib/eal/riscv/include/rte_atomic.h
@@ -40,9 +40,9 @@
#define rte_io_rmb() asm volatile("fence ir, ir" : : : "memory")
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..3b3a9a4 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,17 +82,17 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __rte_atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..a6c23ea 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile RTE_ATOMIC(int) *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v3 3/6] eal: add rte atomic qualifier with casts
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (2 subsequent siblings)
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the optional rte atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is implementation dependent and is done temporarily to avoid
converting more of the libraries and tests in DPDK in the initial series
that introduces the API. The casts assume that the atomic and non-atomic
forms of the types in question share the same representation; the
consequence of that assumption not holding is a risk that may only be
realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
* [PATCH v3 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-16 19:19 ` [PATCH v3 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 5/6] bpf: " Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..2f29343 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile RTE_ATOMIC(int64_t) bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v3 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-16 19:19 ` [PATCH v3 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..7a8e4a6 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ RTE_ATOMIC(uint32_t) use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((__rte_atomic uint32_t *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1
* [PATCH v3 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-16 19:19 ` [PATCH v3 5/6] bpf: " Tyler Retzlaff
@ 2023-08-16 19:19 ` Tyler Retzlaff
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 19:19 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins. DPDK now requires
the use of rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
devtools/checkpatches.sh | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..b15c3f7 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -111,11 +111,11 @@ check_forbidden_additions() { # <patch>
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
+ # refrain from using compiler __atomic_xxx builtins
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
+ -v MESSAGE='Using __atomic_xxx builtins' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
--
1.8.3.1
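As a quick illustration (using grep -E rather than the check-forbidden-tokens.awk machinery), the broadened expression now flags any __atomic_* builtin call, not only the *_fetch variants the old pattern caught:

```shell
# Both lines below match '__atomic_.*\(' although only the first
# would have matched the old '__atomic_(add|and|nand|or|sub|xor)_fetch'
# pattern; grep -c prints the match count.
printf '%s\n' \
    '+ __atomic_fetch_add(&v, 1, __ATOMIC_RELAXED);' \
    '+ __atomic_store_n(&v, 0, __ATOMIC_RELEASE);' |
    grep -cE '__atomic_.*\('
```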
* RE: [PATCH v2 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-14 17:47 ` Tyler Retzlaff
@ 2023-08-16 20:13 ` Morten Brørup
2023-08-16 20:32 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-16 20:13 UTC (permalink / raw)
To: Tyler Retzlaff, thomas, bruce.richardson
Cc: dev, techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Monday, 14 August 2023 19.47
>
> On Mon, Aug 14, 2023 at 10:00:49AM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Friday, 11 August 2023 19.32
> > >
> > > Adapt the EAL public headers to use rte optional atomics API instead
> of
> > > directly using and exposing toolchain specific atomic builtin
> intrinsics.
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > ---
> >
> > [...]
> >
>
> will fix the comments identified.
>
> >
> > [...]
> >
> > > --- a/lib/eal/include/generic/rte_spinlock.h
> > > +++ b/lib/eal/include/generic/rte_spinlock.h
> > > @@ -29,7 +29,7 @@
> > > * The rte_spinlock_t type.
> > > */
> > > typedef struct __rte_lockable {
> > > - volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
> > > + volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1
> =
> > > locked */
> >
> > I think __rte_atomic should be before the type:
> > volatile __rte_atomic int locked; /**< lock status [...]
> > Alternatively (just mentioning it, I know we don't use this form):
> > volatile __rte_atomic(int) locked; /**< lock status [...]
> >
> > Thinking of where you would put "const" might help.
Regarding "const", I use the mental trick of reading from right-to-left when pointers are involved, e.g.:
const int * * const x;
----5---- 4 3 --2-- 1
x(1) is a const(2) pointer(3) to a pointer(4) to a const int(5).
And yes, treating "const int" as one word is cheating... formally it should be "int" "const", i.e. the reverse order; but that is not the convention, so I have learned to accept it.
> >
> > Maybe your order is also correct, so it is a matter of preference.
>
> so for me what you suggest is the canonical convention for c and i did
> initially try to make the change with this convention but ran into
> trouble when using the keyword in a context used as a type specifier
> and the type was incomplete.
>
> the rte_mcslock is a good example for illustration.
>
> // original struct
> typedef struct rte_mcslock {
> struct rte_mcslock *next;
> ...
> };
>
> it simply doesn't work / won't compile (at least with clang) which is
> what drove me to use the less-often used syntax.
>
> typedef struct rte_mcslock {
> _Atomic struct rte_mcslock *next;
> ...
> };
>
> In file included from ../app/test/test_mcslock.c:19:
> ..\lib\eal\include\rte_mcslock.h:36:2: error: _Atomic cannot be
> applied
> to incomplete type 'struct rte_mcslock'
> _Atomic struct rte_mcslock *next;
> ^
> ..\lib\eal\include\rte_mcslock.h:35:16: note: definition of 'struct
> rte_mcslock' is not complete until the closing '}'
> typedef struct rte_mcslock {
> ^
> 1 error generated.
>
> so i ended up choosing to use a single syntax by convention consistently
> rather than using one for the exceptional case and one everywhere else.
>
> i think (based on our other thread of discussion) i would recommend we
> > adopt and require the use of the _Atomic(T) macro to disambiguate; it
> also has the advantage of not being churned later when we can do c++23.
>
> // using macro
> typedef struct rte_mcslock {
> _Atomic(struct rte_mcslock *) next;
This makes it an atomic pointer. Your example above tried making the struct rte_mcslock atomic. Probably what you wanted was:
typedef struct rte_mcslock {
struct rte_mcslock * _Atomic next;
...
};
Like "const", the convention should be putting it before any type, but after the "*" for pointers.
I suppose clang doesn't accept applying _Atomic to incomplete types, regardless where you put it... I.e. this should also fail, I guess:
typedef struct rte_mcslock {
struct rte_mcslock _Atomic * next;
...
};
> ...
> };
>
> this is much easier at a glance to know when the specified type is the T
> or the T * similarly in parameter lists it becomes more clear too.
>
> e.g.
> void foo(int *v)
>
> that it is either void foo(_Atomic(int) *v) or void foo(_Atomic(int *)
> v) becomes
> much clearer without having to do mental gymnastics.
The same could be said about making "const" clearer:
void foo(const(int) * v) instead of void foo(const int * v), and
void foo(const(int *) v) instead of void foo(int * const v).
Luckily, we don't need toolchain specific handling of "const", so let's just leave that the way it is. :-)
>
> so i propose we retain
>
> #define __rte_atomic _Atomic
>
> allow it to be used in contexts where we need a type-qualifier.
> note:
> most of the cases where _Atomic is used as a type-qualifier it
> is a red flag that we are sensitive to an implementation detail
> of the compiler. in time i hope most of these will go away as we
> remove deprecated rte_atomic_xx apis.
>
> but also introduce the following macro
>
> #define RTE_ATOMIC(type) _Atomic(type)
> require it be used in the contexts that we are using it as a type-
> specifier.
>
> if folks agree with this please reply back positively and i'll update
> the series. feel free to propose alternate names or whatever, but sooner
> than later so i don't have to churn things too much :)
+1 to Tyler's updated proposal, with macro names as suggested.
If anyone disagrees, please speak up soon!
If in doubt, please read https://en.cppreference.com/w/c/language/atomic carefully. It says:
(1) _Atomic(type-name) (since C11): Use as a type specifier; this designates a new atomic type.
(2) _Atomic type-name (since C11): Use as a type qualifier; this designates the atomic version of type-name. In this role, it may be mixed with const, volatile, and restrict, although unlike other qualifiers, the atomic version of type-name may have a different size, alignment, and object representation.
NB: I hadn't noticed this before, otherwise I had probably suggested using _Atomic(T) earlier on. We learn something new every day. :-)
>
> thanks!
Sorry about the late response, Tyler. Other work prevented me from setting aside coherent time to review your updated proposal.
>
> >
> > The DPDK coding style guidelines doesn't mention where to place
> "const", but looking at the code, it seems to use "const unsigned int"
> and "const char *".
>
> we probably should document it as a convention and most likely we should
> adopt what is already in use more commonly.
+1, but not as part of this series. :-)
* Re: [PATCH v2 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-16 20:13 ` Morten Brørup
@ 2023-08-16 20:32 ` Tyler Retzlaff
0 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 20:32 UTC (permalink / raw)
To: Morten Brørup
Cc: thomas, bruce.richardson, dev, techboard, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, David Marchand
On Wed, Aug 16, 2023 at 10:13:22PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Monday, 14 August 2023 19.47
> >
> > On Mon, Aug 14, 2023 at 10:00:49AM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Friday, 11 August 2023 19.32
> > > >
> > > > Adapt the EAL public headers to use rte optional atomics API instead
> > of
> > > > directly using and exposing toolchain specific atomic builtin
> > intrinsics.
> > > >
> > > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > > ---
> > >
> > > [...]
> > >
> >
> > will fix the comments identified.
> >
> > >
> > > [...]
> > >
> > > > --- a/lib/eal/include/generic/rte_spinlock.h
> > > > +++ b/lib/eal/include/generic/rte_spinlock.h
> > > > @@ -29,7 +29,7 @@
> > > > * The rte_spinlock_t type.
> > > > */
> > > > typedef struct __rte_lockable {
> > > > - volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
> > > > + volatile int __rte_atomic locked; /**< lock status 0 = unlocked, 1
> > =
> > > > locked */
> > >
> > > I think __rte_atomic should be before the type:
> > > volatile __rte_atomic int locked; /**< lock status [...]
> > > Alternatively (just mentioning it, I know we don't use this form):
> > > volatile __rte_atomic(int) locked; /**< lock status [...]
> > >
> > > Thinking of where you would put "const" might help.
>
> Regarding "const", I use the mental trick of reading from right-to-left when pointers are involved, e.g.:
>
> const int * * const x;
> ----5---- 4 3 --2-- 1
yes, i'm very familiar with where it can appear in the syntax and
applied. but it's always good to have someone summarize it like this for
the discussion.
>
> x(1) is a const(2) pointer(3) to a pointer(4) to a const int(5).
>
> And yes, treating "const int" as one word is cheating... formally it should be "int" "const", i.e. the reverse order; but that is not the convention, so I have learned to accept it.
it more often is the convention in c++, but i agree in c conventionally
people put the const first.
>
> > >
> > > Maybe your order is also correct, so it is a matter of preference.
> >
> > so for me what you suggest is the canonical convention for c and i did
> > initially try to make the change with this convention but ran into
> > trouble when using the keyword in a context used as a type specifier
> > and the type was incomplete.
> >
> > the rte_mcslock is a good example for illustration.
> >
> > // original struct
> > typedef struct rte_mcslock {
> > struct rte_mcslock *next;
> > ...
> > };
> >
> > it simply doesn't work / won't compile (at least with clang) which is
> > what drove me to use the less-often used syntax.
> >
> > typedef struct rte_mcslock {
> > _Atomic struct rte_mcslock *next;
> > ...
> > };
> >
> > In file included from ../app/test/test_mcslock.c:19:
> > ..\lib\eal\include\rte_mcslock.h:36:2: error: _Atomic cannot be
> > applied
> > to incomplete type 'struct rte_mcslock'
> > _Atomic struct rte_mcslock *next;
> > ^
> > ..\lib\eal\include\rte_mcslock.h:35:16: note: definition of 'struct
> > rte_mcslock' is not complete until the closing '}'
> > typedef struct rte_mcslock {
> > ^
> > 1 error generated.
> >
> > so i ended up choosing to use a single syntax by convention consistently
> > rather than using one for the exceptional case and one everywhere else.
> >
> > i think (based on our other thread of discussion) i would recommend we
> > adopt and require the use of the _Atomic(T) macro to disambiguate; it
> > also has the advantage of not being churned later when we can do c++23.
> >
> > // using macro
> > typedef struct rte_mcslock {
> > _Atomic(struct rte_mcslock *) next;
>
> This makes it an atomic pointer. Your example above tried making the struct rte_mcslock atomic. Probably what you wanted was:
> typedef struct rte_mcslock {
> struct rte_mcslock * _Atomic next;
> ...
> };
this is what my v2 of the patch had. but following your const example
you indicated you preferred the equivalent of `const T' over `T const'.
i was trying to illustrate that if you replace T = struct foo *, the
compiler can't disambiguate between type and pointer to type and
produces an error.
>
> Like "const", the convention should be putting it before any type, but after the "*" for pointers.
i see, thank you for this clarification. I had not understood that you
were suggesting that for pointer types specifically i should use one
placement and for non-pointer types i should use another.
>
> I suppose clang doesn't accept applying _Atomic to incomplete types, regardless where you put it... I.e. this should also fail, I guess:
> typedef struct rte_mcslock {
> struct rte_mcslock _Atomic * next;
> ...
> };
actually for C11 atomics i think you can do this,
because you can declare an entire struct object to be atomic. However,
since we need to intersect with what non-C11 gcc builtin atomics do, we
would not be able to make struct objects atomic, as gcc only lets you do
atomic things with integer and pointer types.
>
> > ...
> > };
> >
> > this is much easier at a glance to know when the specified type is the T
> > or the T * similarly in parameter lists it becomes more clear too.
> >
> > e.g.
> > void foo(int *v)
> >
> > that it is either void foo(_Atomic(int) *v) or void foo(_Atomic(int *)
> > v) becomes
> > much clearer without having to do mental gymnastics.
>
> The same could be said about making "const" clearer:
> void foo(const(int) * v) instead of void foo(const int * v), and
> void foo(const(int *) v) instead of void foo(int * const v).
>
> Luckily, we don't need toolchain specific handling of "const", so let's just leave that the way it is. :-)
>
> >
> > so i propose we retain
> >
> > #define __rte_atomic _Atomic
> >
> > allow it to be used in contexts where we need a type-qualifier.
> > note:
> > most of the cases where _Atomic is used as a type-qualifier it
> > is a red flag that we are sensitive to an implementation detail
> > of the compiler. in time i hope most of these will go away as we
> > remove deprecated rte_atomic_xx apis.
> >
> > but also introduce the following macro
> >
> > #define RTE_ATOMIC(type) _Atomic(type)
> > require it be used in the contexts that we are using it as a type-
> > specifier.
> >
> > if folks agree with this please reply back positively and i'll update
> > the series. feel free to propose alternate names or whatever, but sooner
> > than later so i don't have to churn things too much :)
>
> +1 to Tyler's updated proposal, with macro names as suggested.
yeah, I think it really helps clarify the pointer vs regular type
specification by whacking the ( ) around what we are talking about
instead of using positioning of _Atomic in two different places.
>
> If anyone disagrees, please speak up soon!
>
> If in doubt, please read https://en.cppreference.com/w/c/language/atomic carefully. It says:
> (1) _Atomic(type-name) (since C11): Use as a type specifier; this designates a new atomic type.
> (2) _Atomic type-name (since C11): Use as a type qualifier; this designates the atomic version of type-name. In this role, it may be mixed with const, volatile, and restrict, although unlike other qualifiers, the atomic version of type-name may have a different size, alignment, and object representation.
>
> NB: I hadn't noticed this before, otherwise I had probably suggested using _Atomic(T) earlier on. We learn something new every day. :-)
yeah, i knew about this which is why i was being really careful about
'qualification' vs 'specification' in my mails.
>
> >
> > thanks!
>
> Sorry about the late response, Tyler. Other work prevented me from setting aside coherent time to review your updated proposal.
meh it's okay, based on the other thread i kind of guessed you might
agree with using _Atomic(T) so i just submitted a new version an hour
ago with the changes. i hope it meets your approval. one thing i'm kind
of edgy about is the actual macro name itself, RTE_ATOMIC(type); it seems
kinda ugly, so if someone has an opinion there i'm open to it.
>
> >
> > >
> > > The DPDK coding style guidelines doesn't mention where to place
> > "const", but looking at the code, it seems to use "const unsigned int"
> > and "const char *".
> >
> > we probably should document it as a convention and most likely we should
> > adopt what is already in use more commonly.
>
> +1, but not as part of this series. :-)
i'll look into doing it once we get this series merged.
thanks!
* RE: [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 19:19 ` [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-16 20:55 ` Morten Brørup
2023-08-16 21:04 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-16 20:55 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Wednesday, 16 August 2023 21.19
>
> Provide API for atomic operations in the rte namespace that may
> optionally be configured to use C11 atomics with meson
> option enable_stdatomics=true
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> ---
"#define RTE_ATOMIC(type) [...]" is missing in lib/eal/include/rte_stdatomic.h, both with and without RTE_ENABLE_STDATOMIC.
I suggest you keep it together with "#define __rte_atomic [...]".
Please also add descriptions as comments to both (type-qualifier and -specifier) in both locations. Your descriptions from the mailing list discussion are fine.
* Re: [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 20:55 ` Morten Brørup
@ 2023-08-16 21:04 ` Tyler Retzlaff
2023-08-16 21:08 ` Morten Brørup
2023-08-16 21:10 ` Tyler Retzlaff
0 siblings, 2 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:04 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Wed, Aug 16, 2023 at 10:55:51PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Wednesday, 16 August 2023 21.19
> >
> > Provide API for atomic operations in the rte namespace that may
> > optionally be configured to use C11 atomics with meson
> > option enable_stdatomics=true
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
>
> "#define RTE_ATOMIC(type) [...]" is missing in lib/eal/include/rte_stdatomic.h, both with and without RTE_ENABLE_STDATOMIC.
>
> I suggest you keep it together with "#define __rte_atomic [...]".
this seems to be an error i made in patch submission, somehow i managed
to drop the changes i made from v2-v3 from the series i submitted i'm
confused how i managed to fat finger it.
i'll submit a new series with the correct changes to rte_stdatomic.h
>
> Please also add descriptions as comments to both (type-qualifier and -specifier) in both locations. Your descriptions from the mailing list discussion are fine.
>
i'll do this in the v4 commit message
thanks
* RE: [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 21:04 ` Tyler Retzlaff
@ 2023-08-16 21:08 ` Morten Brørup
2023-08-16 21:10 ` Tyler Retzlaff
1 sibling, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-16 21:08 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Wednesday, 16 August 2023 23.04
>
> On Wed, Aug 16, 2023 at 10:55:51PM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Wednesday, 16 August 2023 21.19
> > >
> > > Provide API for atomic operations in the rte namespace that may
> > > optionally be configured to use C11 atomics with meson
> > > option enable_stdatomics=true
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> >
> > "#define RTE_ATOMIC(type) [...]" is missing in
> lib/eal/include/rte_stdatomic.h, both with and without
> RTE_ENABLE_STDATOMIC.
> >
> > I suggest you keep it together with "#define __rte_atomic [...]".
>
> this seems to be an error i made in patch submission, somehow i managed
> to drop the changes i made from v2-v3 from the series i submitted i'm
> confused how i managed to fat finger it.
Happens to me all the time - too often, if you look at my patch submission track record. ;-)
>
> i'll submit a new series with the correct changes to rte_stdatomic.h
> >
> > Please also add descriptions as comments to both (type-qualifier and -
> specifier) in both locations. Your descriptions from the mailing list
> discussion are fine.
> >
>
> i'll do this in the v4 commit message
Please put the descriptions about their usage in the code as documentation comments, not in the commit message.
>
> thanks
* Re: [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 21:04 ` Tyler Retzlaff
2023-08-16 21:08 ` Morten Brørup
@ 2023-08-16 21:10 ` Tyler Retzlaff
1 sibling, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:10 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Wed, Aug 16, 2023 at 02:04:06PM -0700, Tyler Retzlaff wrote:
> On Wed, Aug 16, 2023 at 10:55:51PM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Wednesday, 16 August 2023 21.19
> > >
> > > Provide API for atomic operations in the rte namespace that may
> > > optionally be configured to use C11 atomics with meson
> > > option enable_stdatomics=true
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> >
> > "#define RTE_ATOMIC(type) [...]" is missing in lib/eal/include/rte_stdatomic.h, both with and without RTE_ENABLE_STDATOMIC.
> >
> > I suggest you keep it together with "#define __rte_atomic [...]".
>
> this seems to be an error i made in patch submission, somehow i managed
> to drop the changes i made from v2-v3 from the series i submitted i'm
> confused how i managed to fat finger it.
okay! i know what i did. somehow the additions of the macros were
placed into patch 2 instead of patch 1 where they belong. i'm glad
there's a simple explanation :)
v4 coming up with it moved into the patch it belongs in.
>
> i'll submit a new series with the correct changes to rte_stdatomic.h
> >
> > Please also add descriptions as comments to both (type-qualifier and -specifier) in both locations. Your descriptions from the mailing list discussion are fine.
> >
>
> i'll do this in the v4 commit message
>
> thanks
* [PATCH v4 0/6] RFC optional rte optional stdatomics API
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (7 preceding siblings ...)
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (5 more replies)
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
10 siblings, 6 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h via the meson option enable_stdatomics=true.
For targets built with enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes needed to use stdatomics across the
DPDK tree; it only introduces the minimum to allow the option to be used, which
is a pre-requisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in the subsequent series.
* The eal: add rte atomic qualifier with casts patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading changes / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing MSVC series.
Please keep in mind we would like to prioritize the review / acceptance of
this series since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
their use as specified or qualified contexts.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro allows more clarity about what the atomic type being specified
is, e.g. _Atomic(T *) vs _Atomic(T); it is easier to understand that
the former is an atomic pointer type and the latter is an atomic
type. It also has the benefit of (in the future) being syntactically
interoperable with c++23.
note: Morten, i have retained your 'reviewed-by' tags. if you disagree
given the changes in the above version please indicate as such, but
i believe the changes are in the spirit of the feedback you provided
v2:
* Wrap the meson_options.txt option description to a new line and indent it
to be consistent with the other options.
* Provide separate typedefs of rte_memory_order for enable_stdatomic=true
vs enable_stdatomic=false instead of a single typedef to int.
Note: a slight tweak to reviewer feedback; I've chosen to use a typedef
for both enable_stdatomic={true,false} (it just seemed more consistent).
* Bring in assert.h and use the static_assert macro instead of the
_Static_assert keyword to better interoperate with C/C++.
* Directly include rte_stdatomic.h into the other places where it is
consumed instead of hacking it globally into rte_config.h.
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of the existing rte_atomic_thread_fence inline functions to
maintain per-arch optimizations when enable_stdatomic=false.
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 6 +-
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
29 files changed, 497 insertions(+), 266 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
* [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-17 11:45 ` Morten Brørup
2023-08-16 21:38 ` [PATCH v4 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (4 subsequent siblings)
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide an API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with the meson
option enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
config/meson.build | 1 +
lib/eal/include/generic/rte_atomic.h | 1 +
lib/eal/include/generic/rte_pause.h | 1 +
lib/eal/include/generic/rte_rwlock.h | 1 +
lib/eal/include/generic/rte_spinlock.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 1 +
lib/eal/include/rte_pflock.h | 1 +
lib/eal/include/rte_seqcount.h | 1 +
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 1 +
lib/eal/include/rte_trace_point.h | 1 +
meson_options.txt | 2 +
13 files changed, 211 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 82b9bfc..4a235ba 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
+#include <rte_stdatomic.h>
#ifdef __DOXYGEN__
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..bebfa95 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -16,6 +16,7 @@
#include <assert.h>
#include <rte_common.h>
#include <rte_atomic.h>
+#include <rte_stdatomic.h>
/**
* Pause CPU execution for a short while
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..24ebec6 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -32,6 +32,7 @@
#include <rte_common.h>
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_rwlock_t type.
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e18f0cd 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -23,6 +23,7 @@
#endif
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_spinlock_t type.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index a0463ef..e94b056 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -42,6 +42,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..18e63eb 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -27,6 +27,7 @@
#include <rte_common.h>
#include <rte_pause.h>
#include <rte_branch_prediction.h>
+#include <rte_stdatomic.h>
/**
* The rte_mcslock_t type.
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..790be71 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -34,6 +34,7 @@
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_pflock_t type.
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..098af26 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -26,6 +26,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
+#include <rte_stdatomic.h>
/**
* The RTE seqcount type.
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..8f152ea
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomic=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) _Atomic(type)
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic _Atomic
+
+/* The memory order is an enumerated type in C11. */
+typedef memory_order rte_memory_order;
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+/* We provide an internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ atomic_thread_fence(memorder)
+
+#else
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) type
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic
+
+/* The memory order is an integer type in GCC built-ins,
+ * not an enumerated type like in C11.
+ */
+typedef int rte_memory_order;
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+/* We provide an internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ __atomic_thread_fence(memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..e22d119 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -24,6 +24,7 @@
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_ticketlock_t type.
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..d587591 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -30,6 +30,7 @@
#include <rte_per_lcore.h>
#include <rte_string_fns.h>
#include <rte_uuid.h>
+#include <rte_stdatomic.h>
/** The tracepoint object. */
typedef uint64_t rte_trace_point_t;
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..bb22bba 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,8 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+ 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
--
1.8.3.1
* [PATCH v4 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (3 subsequent siblings)
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use the rte optional atomics API instead of
directly using and exposing toolchain-specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_32.h | 4 +--
lib/eal/arm/include/rte_atomic_64.h | 36 +++++++++++------------
lib/eal/arm/include/rte_pause_64.h | 26 ++++++++--------
lib/eal/arm/rte_power_intrinsics.c | 8 ++---
lib/eal/common/eal_common_trace.c | 16 +++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++----------------
lib/eal/include/generic/rte_pause.h | 46 ++++++++++++-----------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++--------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++----------------
lib/eal/include/rte_pflock.h | 24 ++++++++-------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_ticketlock.h | 42 +++++++++++++-------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/loongarch/include/rte_atomic.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 54 +++++++++++++++++-----------------
lib/eal/riscv/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_atomic.h | 8 ++---
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
21 files changed, 237 insertions(+), 237 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..242c242 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+RTE_ATOMIC(rte_mcslock_t *) p_ml;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_try;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_32.h b/lib/eal/arm/include/rte_atomic_32.h
index c00ab78..62fc337 100644
--- a/lib/eal/arm/include/rte_atomic_32.h
+++ b/lib/eal/arm/include/rte_atomic_32.h
@@ -34,9 +34,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..75d8ba6 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------ 128 bit atomic operations -------------------------*/
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 4a235ba..5940e7e 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -63,7 +63,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -80,7 +80,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -100,7 +100,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -154,7 +154,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -207,7 +207,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -274,7 +274,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -288,7 +288,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -341,7 +341,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +361,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +380,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +400,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -486,7 +486,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -553,7 +553,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -567,7 +567,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -620,7 +620,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +640,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +659,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +679,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -764,7 +764,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -885,7 +885,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +904,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +962,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +986,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1115,8 +1115,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
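The add/sub-return conversions above all follow one pattern. As a minimal C11 sketch (our names, not DPDK's), assuming enable_stdatomics=true maps rte_atomic_fetch_add_explicit() onto atomic_fetch_add_explicit() and rte_memory_order_seq_cst onto memory_order_seq_cst:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical sketch of what the converted rte_atomic16_add_return()
 * reduces to when enable_stdatomics=true. Names here are illustrative,
 * not the DPDK ones. */
static inline int16_t
add_return_16(_Atomic int16_t *cnt, int16_t inc)
{
    /* fetch_add returns the value *before* the add, so add inc back
     * to report the new value, exactly as the patch does. */
    return atomic_fetch_add_explicit(cnt, inc, memory_order_seq_cst) + inc;
}
```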
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index bebfa95..256309e 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -36,13 +36,11 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -54,13 +52,11 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -72,42 +68,40 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -125,16 +119,14 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
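The rte_pause.h changes above keep the same contract: only acquire or relaxed orders are accepted, and the loop polls until the word matches. A single-threaded C11 model (our names; the store that terminates the loop would normally come from another thread, and rte_pause() is elided):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Minimal model of rte_wait_until_equal_32() after the conversion. */
static void
wait_until_equal_32(_Atomic uint32_t *addr, uint32_t expected,
    memory_order order)
{
    assert(order == memory_order_acquire || order == memory_order_relaxed);
    while (atomic_load_explicit(addr, order) != expected)
        ;  /* rte_pause() would go here */
}

static uint32_t
demo_wait(void)
{
    _Atomic uint32_t flag = 0;

    /* Store first so this sketch terminates without a second thread. */
    atomic_store_explicit(&flag, 7u, memory_order_release);
    wait_until_equal_32(&flag, 7u, memory_order_acquire);
    return atomic_load_explicit(&flag, memory_order_relaxed);
}
```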
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 24ebec6..c788705 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -58,7 +58,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ RTE_ATOMIC(int32_t) cnt;
} rte_rwlock_t;
/**
@@ -93,21 +93,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -128,20 +128,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -159,7 +159,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -179,10 +179,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -202,22 +202,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -234,7 +237,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -248,7 +251,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
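The read-trylock fast path converted above is: acquire fetch-add to claim a reader slot, release fetch-sub to back out if a writer raced in. A self-contained C11 sketch with constants mirroring the RTE_RWLOCK_* bits (layout assumed from the patch context):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define RWLOCK_WAIT  0x1
#define RWLOCK_WRITE 0x2
#define RWLOCK_MASK  (RWLOCK_WAIT | RWLOCK_WRITE)
#define RWLOCK_READ  0x4  /* reader increment */

/* Sketch of rte_rwlock_read_trylock() after conversion. */
static int
read_trylock(_Atomic int32_t *cnt)
{
    int32_t x;

    x = atomic_load_explicit(cnt, memory_order_relaxed);
    if (x & RWLOCK_MASK)
        return -1;  /* writer held or pending */
    x = atomic_fetch_add_explicit(cnt, RWLOCK_READ,
        memory_order_acquire) + RWLOCK_READ;
    if (x & RWLOCK_MASK) {
        /* Writer raced in: back out our reader increment. */
        atomic_fetch_sub_explicit(cnt, RWLOCK_READ,
            memory_order_release);
        return -1;
    }
    return 0;
}
```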
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index e18f0cd..23fb048 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -29,7 +29,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile RTE_ATOMIC(int) locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -66,10 +66,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -90,7 +90,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -113,9 +113,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -129,7 +128,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
/**
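The spinlock conversion above picks the strong CAS variant (the old code passed 0 for "weak", i.e. it already disallowed spurious failure) with acquire on success, and a release store to unlock. As a C11 sketch with our names:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch of the converted rte_spinlock trylock/unlock pair. */
static int
spin_trylock(_Atomic int *locked)
{
    int exp = 0;

    return atomic_compare_exchange_strong_explicit(locked, &exp, 1,
        memory_order_acquire, memory_order_relaxed);
}

static void
spin_unlock(_Atomic int *locked)
{
    atomic_store_explicit(locked, 0, memory_order_release);
}
```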
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 18e63eb..8c75377 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -33,8 +33,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ RTE_ATOMIC(struct rte_mcslock *) next;
+ RTE_ATOMIC(int) locked; /* 1 if the queue locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -49,13 +49,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -63,7 +63,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -77,19 +77,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -101,34 +101,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(RTE_ATOMIC(rte_mcslock_t *) *msl, RTE_ATOMIC(rte_mcslock_t *) me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ RTE_ATOMIC(uintptr_t) *next;
+ next = (__rte_atomic uintptr_t *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -142,10 +142,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -156,8 +156,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -169,9 +169,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(RTE_ATOMIC(rte_mcslock_t *) msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
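The MCS lock's key step, converted above, is the single acq_rel exchange on the tail pointer: acquire pairs with the previous holder's release, release publishes our already-initialized node. A reduced C11 sketch of just that enqueue step (our names):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Sketch of the MCS enqueue step after conversion. A NULL old tail
 * means the queue was empty and the lock is taken immediately. */
static int
mcs_enqueue_was_empty(_Atomic(void *) *tail, void *me)
{
    return atomic_exchange_explicit(tail, me,
        memory_order_acq_rel) == NULL;
}
```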
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index 790be71..79feeea 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -41,8 +41,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ RTE_ATOMIC(uint16_t) in;
+ RTE_ATOMIC(uint16_t) out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -117,14 +117,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -140,7 +140,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -161,8 +161,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -173,10 +174,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -193,10 +195,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
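The phase-fair reader entry converted above is an acquire fetch-add on rd.in whose low bits report a pending writer. A C11 sketch; the constant values below are our assumptions chosen to mirror the RTE_PFLOCK_* names, not taken from the patch:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define PFLOCK_WBITS 0x3    /* assumed: PRES | PHID */
#define PFLOCK_RINC  0x100  /* assumed reader increment */

/* Sketch of the converted rte_pflock_read_lock() fast path: the
 * acquire fetch-add both registers the reader and samples the
 * writer bits in one step. */
static int
pflock_read_enter(_Atomic uint16_t *rd_in)
{
    uint16_t w = atomic_fetch_add_explicit(rd_in, PFLOCK_RINC,
        memory_order_acquire) & PFLOCK_WBITS;
    return w == 0;  /* nonzero: must wait for the write phase */
}
```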
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index 098af26..4f9cefb 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -32,7 +32,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ RTE_ATOMIC(uint32_t) sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -106,11 +106,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -161,9 +161,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -205,12 +205,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -237,7 +237,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
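The seqcount read protocol converted above is: acquire load to begin, then an acquire fence plus a relaxed reload to decide whether the read critical section raced with a writer. A C11 sketch with our names:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Sketch of the converted seqcount read-side. */
static uint32_t
seq_read_begin(_Atomic uint32_t *sn)
{
    /* Acquire keeps the protected-data loads after this sn load. */
    return atomic_load_explicit(sn, memory_order_acquire);
}

static int
seq_read_retry(_Atomic uint32_t *sn, uint32_t begin_sn)
{
    /* Keep the protected-data loads before the sn reload. */
    atomic_thread_fence(memory_order_acquire);
    return atomic_load_explicit(sn, memory_order_relaxed) != begin_sn;
}
```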
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index e22d119..7d39bca 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -30,10 +30,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ RTE_ATOMIC(uint32_t) tickets;
struct {
- uint16_t current;
- uint16_t next;
+ RTE_ATOMIC(uint16_t) current;
+ RTE_ATOMIC(uint16_t) next;
} s;
} rte_ticketlock_t;
@@ -51,7 +51,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -63,8 +63,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -76,8 +77,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -92,12 +93,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -116,7 +118,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -127,7 +129,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ RTE_ATOMIC(int) user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -147,7 +149,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -162,9 +164,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -179,8 +181,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -198,10 +200,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
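The basic ticketlock conversion above is a relaxed fetch-add to take a ticket, an acquire-ordered wait on the current holder, and a release store to pass the lock on. A self-contained C11 sketch (our names, rte_pause() elided):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

typedef struct {
    _Atomic uint16_t current;
    _Atomic uint16_t next;
} ticketlock_t;

/* Sketch of rte_ticketlock_lock()/unlock() after conversion. */
static void
ticket_lock(ticketlock_t *tl)
{
    uint16_t me = atomic_fetch_add_explicit(&tl->next, 1,
        memory_order_relaxed);
    while (atomic_load_explicit(&tl->current, memory_order_acquire) != me)
        ;  /* rte_pause() in the real code */
}

static void
ticket_unlock(ticketlock_t *tl)
{
    uint16_t i = atomic_load_explicit(&tl->current, memory_order_relaxed);
    atomic_store_explicit(&tl->current, (uint16_t)(i + 1),
        memory_order_release);
}
```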
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index d587591..b403edd 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -33,7 +33,7 @@
#include <rte_stdatomic.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef RTE_ATOMIC(uint64_t) rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -359,7 +359,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
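The tracepoint fast path converted above is one acquire load of the tracepoint word followed by a mask test. A C11 sketch; the enable-bit position below (bit 63) is our assumption for illustration, not taken from the patch:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Assumed enable bit; the real mask is __RTE_TRACE_FIELD_ENABLE_MASK. */
#define TRACE_ENABLE_MASK (UINT64_C(1) << 63)

/* Sketch of the converted check in
 * __rte_trace_point_emit_header_generic(). */
static int
trace_enabled(_Atomic uint64_t *tp)
{
    uint64_t val = atomic_load_explicit(tp, memory_order_acquire);
    return (val & TRACE_ENABLE_MASK) != 0;
}
```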
diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
index 3c82845..0510b8f 100644
--- a/lib/eal/loongarch/include/rte_atomic.h
+++ b/lib/eal/loongarch/include/rte_atomic.h
@@ -35,9 +35,9 @@
#define rte_io_rmb() rte_mb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
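With enable_stdatomics=true the per-arch rte_atomic_thread_fence() wrappers above reduce to the standard fence. A C11 sketch of the usual release-fence publication idiom it supports (our names; single-threaded here, so the test only checks the values):

```c
#include <assert.h>
#include <stdatomic.h>

/* The release fence orders the data store before the flag store as
 * observed by an acquire-side reader on another thread. */
static int
publish(_Atomic int *data, _Atomic int *flag)
{
    atomic_store_explicit(data, 42, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(flag, 1, memory_order_relaxed);
    return atomic_load_explicit(data, memory_order_relaxed);
}
```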
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..7382412 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/riscv/include/rte_atomic.h b/lib/eal/riscv/include/rte_atomic.h
index 4b4633c..2603bc9 100644
--- a/lib/eal/riscv/include/rte_atomic.h
+++ b/lib/eal/riscv/include/rte_atomic.h
@@ -40,9 +40,9 @@
#define rte_io_rmb() asm volatile("fence ir, ir" : : : "memory")
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..3b3a9a4 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,17 +82,17 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __rte_atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..a6c23ea 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile RTE_ATOMIC(int) *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v4 3/6] eal: add rte atomic qualifier with casts
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (2 subsequent siblings)
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the rte optional atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the libraries and tests in
DPDK in the initial series that introduces the API. The casts assume the
ABI of the qualified and unqualified types is ``the same''; that assumption
is only a risk that may be realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v4 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-16 21:38 ` [PATCH v4 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 5/6] bpf: " Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..2f29343 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile RTE_ATOMIC(int64_t) bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v4 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-16 21:38 ` [PATCH v4 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
5 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..7a8e4a6 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ RTE_ATOMIC(uint32_t) use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((__rte_atomic uint32_t *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-16 21:38 ` [PATCH v4 5/6] bpf: " Tyler Retzlaff
@ 2023-08-16 21:38 ` Tyler Retzlaff
2023-08-17 11:57 ` Morten Brørup
5 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-16 21:38 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins; DPDK now requires
the use of rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
devtools/checkpatches.sh | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..b15c3f7 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -111,11 +111,11 @@ check_forbidden_additions() { # <patch>
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
+ # refrain from using compiler __atomic_xxx builtins
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
+ -v MESSAGE='Using __atomic_xxx builtins' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
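As a toy illustration of the broadened pattern (using plain grep -E here
rather than the real check-forbidden-tokens.awk machinery; the patch
fragment is invented), the new expression flags any __atomic_* builtin
call, not just the __atomic_op_fetch forms the old expression caught:

```shell
# Hypothetical two-line patch fragment: the first added line uses a
# forbidden GCC builtin, the second uses the new rte_atomic API and
# does not match, so exactly one line is counted.
printf '%s\n' \
	'+	__atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);' \
	'+	rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);' |
	grep -cE '__atomic_.*\('
```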
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* RE: [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API
2023-08-16 21:38 ` [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-17 11:45 ` Morten Brørup
2023-08-17 19:09 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-17 11:45 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Wednesday, 16 August 2023 23.39
>
> Provide API for atomic operations in the rte namespace that may
> optionally be configured to use C11 atomics with meson
> option enable_stdatomics=true
>
> Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> ---
Speed blindness during my review... I have now spotted a couple of copy-paste typos:
> +#define rte_atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + atomic_compare_exchange_strong_explicit( \
atomic_compare_exchange_weak_explicit, not strong.
> + ptr, expected, desired, succ_memorder, fail_memorder)
> +
[...]
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> + atomic_flag_clear(ptr, memorder)
atomic_flag_clear_explicit(ptr, memorder), missing _explicit.
^ permalink raw reply [flat|nested] 82+ messages in thread
* RE: [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-16 21:38 ` [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-17 11:57 ` Morten Brørup
2023-08-17 19:14 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-17 11:57 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Wednesday, 16 August 2023 23.39
>
> Refrain from using compiler __atomic_xxx builtins; DPDK now requires
> the use of rte_atomic_<op>_explicit macros when operating on DPDK
> atomic variables.
There is probably no end to how much can be added to checkpatches.
You got the important stuff, so below are only further suggestions!
[...]
> - # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
> + # refrain from using compiler __atomic_xxx builtins
> awk -v FOLDERS="lib drivers app examples" \
> - -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
> + -v EXPRESSIONS="__atomic_.*\\\(" \
> -v RET_ON_FAIL=1 \
> - -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
> + -v MESSAGE='Using __atomic_xxx builtins' \
Alternatively:
-v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx' \
> -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> "$1" || res=1
>
> --
> 1.8.3.1
This could be updated too:
# refrain from using compiler __atomic_thread_fence()
# It should be avoided on x86 for SMP case.
awk -v FOLDERS="lib drivers app examples" \
-v EXPRESSIONS="__atomic_thread_fence\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_thread_fence' \
+ -v MESSAGE='Using __atomic_thread_fence built-in, prefer __rte_atomic_thread_fence' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
You could also add C11 variants of these tests...
atomic_(load|store|exchange|compare_exchange_(strong|weak)|fetch_(add|sub|and|xor|or|nand)|flag_(test_and_set|clear))[_explicit], and
atomic_thread_fence.
And a test for using "_Atomic".
-Morten
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API
2023-08-17 11:45 ` Morten Brørup
@ 2023-08-17 19:09 ` Tyler Retzlaff
2023-08-18 6:55 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 19:09 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Thu, Aug 17, 2023 at 01:45:21PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Wednesday, 16 August 2023 23.39
> >
> > Provide API for atomic operations in the rte namespace that may
> > optionally be configured to use C11 atomics with meson
> > option enable_stdatomics=true
> >
> > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
>
> Speed blindness during my review... I have now spotted a couple of copy-paste typos:
>
> > +#define rte_atomic_compare_exchange_weak_explicit( \
> > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > + atomic_compare_exchange_strong_explicit( \
>
> atomic_compare_exchange_weak_explicit, not strong.
yikes, thanks for catching that cut & paste error
>
> > + ptr, expected, desired, succ_memorder, fail_memorder)
> > +
>
> [...]
>
> > +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> > + atomic_flag_clear(ptr, memorder)
>
> atomic_flag_clear_explicit(ptr, memorder), missing _explicit.
yes, it is currently unused, otherwise it would have failed to compile.
i'll correct this too.
thank you for the careful review. i look at the diffs over and over and
still it's hard to spot subtle swaps/exchanges of things.
>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-17 11:57 ` Morten Brørup
@ 2023-08-17 19:14 ` Tyler Retzlaff
2023-08-18 7:13 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 19:14 UTC (permalink / raw)
To: Morten Brørup
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Thu, Aug 17, 2023 at 01:57:01PM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Wednesday, 16 August 2023 23.39
> >
> > Refrain from using compiler __atomic_xxx builtins DPDK now requires
> > the use of rte_atomic_<op>_explicit macros when operating on DPDK
> > atomic variables.
>
> There is probably no end to how much can be added to checkpatches.
>
> You got the important stuff, so below are only further suggestions!
>
> [...]
>
> > - # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
> > + # refrain from using compiler __atomic_xxx builtins
> > awk -v FOLDERS="lib drivers app examples" \
> > - -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
> > + -v EXPRESSIONS="__atomic_.*\\\(" \
> > -v RET_ON_FAIL=1 \
> > - -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
> > + -v MESSAGE='Using __atomic_xxx builtins' \
>
> Alternatively:
> -v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx' \
i can adjust the wording as you suggest, no problem
>
> > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> > "$1" || res=1
> >
> > --
> > 1.8.3.1
>
> This could be updated too:
>
> # refrain from using compiler __atomic_thread_fence()
> # It should be avoided on x86 for SMP case.
> awk -v FOLDERS="lib drivers app examples" \
> -v EXPRESSIONS="__atomic_thread_fence\\\(" \
> -v RET_ON_FAIL=1 \
> - -v MESSAGE='Using __atomic_thread_fence' \
> + -v MESSAGE='Using __atomic_thread_fence built-in, prefer __rte_atomic_thread_fence' \
> -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
yeah, i left this one separate; i think the advice is actually to use
rte_atomic_thread_fence, which may be an inline function that uses
__rte_atomic_thread_fence
> "$1" || res=1
>
> You could also add C11 variants of these tests...
> atomic_(load|store|exchange|compare_exchange_(strong|weak)|fetch_(add|sub|and|xor|or|nand)|flag_(test_and_set|clear))[_explicit], and
> atomic_thread_fence.
>
> And a test for using "_Atomic".
direct use would cause a compilation failure in the CI, so it would be
caught fairly early. i'm not sure i want to get into the business of
adding redundant (albeit cheaper) earlier checks.
though if there is a general call for this from the reviewers, i'll add
them.
>
> -Morten
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 0/6] optional rte optional stdatomics API
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (8 preceding siblings ...)
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (6 more replies)
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
10 siblings, 7 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of C11 stdatomic.h, selected with enable_stdatomic=true.
For targets where enable_stdatomic=false, no functional change is intended.
Be aware this does not contain all of the changes needed to use stdatomic
across the DPDK tree; it only introduces the minimum required to allow the
option to be used, which is a prerequisite for a clean CI (probably using
clang) that can be run with enable_stdatomic=true.
It is planned that subsequent series will be introduced per lib/driver, as
appropriate, to further enable stdatomic use when enable_stdatomic=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in subsequent series.
* The 'eal: add rte atomic qualifier with casts' patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now, some implementation-dependent
casts are used to prevent cascading changes / having to convert too much
in the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that do not cross API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this series, since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
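As an illustration (not part of the series itself), the option described above would be selected at configure time roughly as follows, assuming the standard meson/ninja DPDK build workflow and a build directory named `build`:

```shell
# Configure a DPDK build tree with C11 stdatomic enabled
# (the option defaults to false on most platforms).
meson setup build -Denable_stdatomic=true
ninja -C build
```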
v5:
* Add RTE_ATOMIC to the doxygen configuration PREDEFINED macros list to
fix a documentation generation failure
* Fix two typos in the expansion of the C11 atomics macros: strong -> weak,
and a missing _explicit suffix
* Adjust devtools/checkpatches messages based on feedback. I have chosen
not to try to catch use of C11 atomics or _Atomic, since using those
directly will be picked up as a compilation error by the existing CI
passes where enable_stdatomic=false (the default for most platforms)
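For context on why the strong -> weak typo above mattered: a weak compare-exchange may fail spuriously even when the value matches, so it only belongs inside a retry loop, while a strong one fails only on a real mismatch. A minimal sketch using plain C11 atomics (which the rte_ macros expand to when enable_stdatomic=true; the counter and function names are illustrative only):

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t counter;

/* Atomically increment the counter with a weak CAS retry loop;
 * returns the new value. */
uint32_t counter_inc(void)
{
	uint32_t old = atomic_load_explicit(&counter, memory_order_relaxed);
	/* On failure the CAS writes the observed value back into old,
	 * so old + 1 is recomputed on each retry. */
	while (!atomic_compare_exchange_weak_explicit(&counter, &old, old + 1,
			memory_order_acq_rel, memory_order_relaxed))
		;
	return old + 1;
}
```

A strong CAS would also be correct here, but on LL/SC architectures the weak form can compile to cheaper code since the loop already handles spurious failure.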
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both the RTE_ATOMIC and __rte_atomic macros
indicating their use in specifier and qualifier contexts, respectively.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use the new macro RTE_ATOMIC(type) in contexts
where _Atomic is used as a type specifier to declare variables. The
macro allows more clarity about what atomic type is being specified:
e.g. with _Atomic(T *) vs _Atomic(T) it is easier to understand that
the former is an atomic pointer type and the latter is an atomic
type. It also has the benefit of (in the future) being syntactically
interoperable with C++23
note: Morten, I have retained your 'Reviewed-by' tags; if you disagree
given the changes in the above version please indicate as such, but
I believe the changes are in the spirit of the feedback you provided
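A small sketch of the distinction the specifier macro makes readable. The RTE_ATOMIC definition below is copied from the patch (enable_stdatomic=true path); the variable and function names are illustrative only:

```c
#include <stdatomic.h>
#include <stdint.h>

/* From the patch: specifier form when enable_stdatomic=true. */
#define RTE_ATOMIC(type) _Atomic(type)

static RTE_ATOMIC(uint64_t *) tail; /* an atomic pointer to uint64_t */
static RTE_ATOMIC(uint64_t) ticks;  /* an atomic uint64_t */

/* Exchange the atomic pointer; returns the previous tail. */
uint64_t *swap_tail(uint64_t *next)
{
	return atomic_exchange_explicit(&tail, next, memory_order_acq_rel);
}

/* Fetch-and-increment the atomic counter; returns the prior value. */
uint64_t tick(void)
{
	return atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
}
```

Compare with the qualifier syntax `_Atomic uint64_t *p`, which declares a plain pointer to an atomic uint64_t — the parenthesized specifier form removes that ambiguity.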
v2:
* Wrap the meson_options.txt option description to a new line and indent it
to be consistent with the other options
* Provide separate typedefs of rte_memory_order for enable_stdatomic=true
vs enable_stdatomic=false, instead of a single typedef to int
note: a slight tweak to the reviewer feedback; I've chosen to use a typedef
for both enable_stdatomic={true,false} (it just seemed more consistent)
* Bring in assert.h and use the static_assert macro instead of the
_Static_assert keyword to better interoperate with C/C++
* Directly include rte_stdatomic.h in the other places where it is consumed,
instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the bodies of the existing rte_atomic_thread_fence inline functions,
maintaining per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 8 +-
doc/api/doxy-api.conf.in | 1 +
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
30 files changed, 499 insertions(+), 267 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 1/6] eal: provide rte stdatomics optional atomics API
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (5 subsequent siblings)
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide an API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with the meson
option enable_stdatomic=true
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
config/meson.build | 1 +
doc/api/doxy-api.conf.in | 1 +
lib/eal/include/generic/rte_atomic.h | 1 +
lib/eal/include/generic/rte_pause.h | 1 +
lib/eal/include/generic/rte_rwlock.h | 1 +
lib/eal/include/generic/rte_spinlock.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 1 +
lib/eal/include/rte_pflock.h | 1 +
lib/eal/include/rte_seqcount.h | 1 +
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 1 +
lib/eal/include/rte_trace_point.h | 1 +
meson_options.txt | 2 +
14 files changed, 212 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a88accd..51e8586 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -84,6 +84,7 @@ INPUT += @API_EXAMPLES@
FILE_PATTERNS = rte_*.h \
cmdline.h
PREDEFINED = __DOXYGEN__ \
+ RTE_ATOMIC \
RTE_HAS_CPUSET \
VFIO_PRESENT \
__rte_lockable= \
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 82b9bfc..4a235ba 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
+#include <rte_stdatomic.h>
#ifdef __DOXYGEN__
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..bebfa95 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -16,6 +16,7 @@
#include <assert.h>
#include <rte_common.h>
#include <rte_atomic.h>
+#include <rte_stdatomic.h>
/**
* Pause CPU execution for a short while
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..24ebec6 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -32,6 +32,7 @@
#include <rte_common.h>
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_rwlock_t type.
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e18f0cd 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -23,6 +23,7 @@
#endif
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_spinlock_t type.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index a0463ef..e94b056 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -42,6 +42,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..18e63eb 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -27,6 +27,7 @@
#include <rte_common.h>
#include <rte_pause.h>
#include <rte_branch_prediction.h>
+#include <rte_stdatomic.h>
/**
* The rte_mcslock_t type.
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..790be71 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -34,6 +34,7 @@
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_pflock_t type.
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..098af26 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -26,6 +26,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
+#include <rte_stdatomic.h>
/**
* The RTE seqcount type.
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..41f90b4
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomics=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) _Atomic(type)
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic _Atomic
+
+/* The memory order is an enumerated type in C11. */
+typedef memory_order rte_memory_order;
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ atomic_thread_fence(memorder)
+
+#else
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) type
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic
+
+/* The memory order is an integer type in GCC built-ins,
+ * not an enumerated type like in C11.
+ */
+typedef int rte_memory_order;
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ __atomic_thread_fence(memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..e22d119 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -24,6 +24,7 @@
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_ticketlock_t type.
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..d587591 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -30,6 +30,7 @@
#include <rte_per_lcore.h>
#include <rte_string_fns.h>
#include <rte_uuid.h>
+#include <rte_stdatomic.h>
/** The tracepoint object. */
typedef uint64_t rte_trace_point_t;
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..bb22bba 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,8 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+ 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
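A hedged, self-contained sketch of how the new macros from this patch are used. The definitions below are copied from the patch's enable_stdatomic=false (GCC builtin) path so the snippet compiles without the DPDK tree; it requires a GCC/Clang toolchain, and the counter and function names are illustrative only:

```c
#include <stdint.h>

/* Copied from rte_stdatomic.h, enable_stdatomic=false path. */
#define RTE_ATOMIC(type) type
typedef int rte_memory_order;
#define rte_memory_order_relaxed __ATOMIC_RELAXED
#define rte_memory_order_acquire __ATOMIC_ACQUIRE
#define rte_atomic_load_explicit(ptr, memorder) \
	__atomic_load_n(ptr, memorder)
#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
	__atomic_fetch_add(ptr, val, memorder)

static RTE_ATOMIC(uint32_t) packets;

/* Returns the value observed before the increment, matching
 * __atomic_fetch_add / atomic_fetch_add_explicit semantics. */
uint32_t count_packet(void)
{
	return rte_atomic_fetch_add_explicit(&packets, 1,
	    rte_memory_order_relaxed);
}
```

With enable_stdatomic=true the same caller code compiles unchanged, but the macros expand to the C11 atomic_* generic functions and RTE_ATOMIC(uint32_t) becomes _Atomic(uint32_t) — which is the point of the series.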
* [PATCH v5 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (4 subsequent siblings)
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use rte optional atomics API instead of
directly using and exposing toolchain specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_32.h | 4 +--
lib/eal/arm/include/rte_atomic_64.h | 36 +++++++++++------------
lib/eal/arm/include/rte_pause_64.h | 26 ++++++++--------
lib/eal/arm/rte_power_intrinsics.c | 8 ++---
lib/eal/common/eal_common_trace.c | 16 +++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++----------------
lib/eal/include/generic/rte_pause.h | 46 ++++++++++++-----------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++--------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++----------------
lib/eal/include/rte_pflock.h | 24 ++++++++-------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_ticketlock.h | 42 +++++++++++++-------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/loongarch/include/rte_atomic.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 54 +++++++++++++++++-----------------
lib/eal/riscv/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_atomic.h | 8 ++---
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
21 files changed, 237 insertions(+), 237 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..242c242 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+RTE_ATOMIC(rte_mcslock_t *) p_ml;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_try;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_32.h b/lib/eal/arm/include/rte_atomic_32.h
index c00ab78..62fc337 100644
--- a/lib/eal/arm/include/rte_atomic_32.h
+++ b/lib/eal/arm/include/rte_atomic_32.h
@@ -34,9 +34,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..75d8ba6 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------ 128 bit atomic operations -------------------------*/
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
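[Reviewer note: for anyone unfamiliar with the new spellings, the converted enable/disable path above relies on fetch-or/fetch-and returning the *previous* value so trace.status is adjusted exactly once per transition. A standalone C11 sketch of that pattern (the mask value and names here are illustrative, not the DPDK definitions):]

```c
#include <stdatomic.h>
#include <stdint.h>

#define ENABLE_MASK (UINT64_C(1) << 63)  /* illustrative enable bit */

static atomic_uint_least64_t status;     /* stands in for trace.status */

/* Returns 1 only on the 0 -> 1 transition: the previous value returned
 * by the fetch-or tells us whether this call actually enabled anything,
 * so the status counter is bumped at most once per tracepoint. */
static int enable(atomic_uint_least64_t *t)
{
	uint64_t prev = atomic_fetch_or_explicit(t, ENABLE_MASK,
	    memory_order_release);
	if ((prev & ENABLE_MASK) == 0) {
		atomic_fetch_add_explicit(&status, 1, memory_order_release);
		return 1;
	}
	return 0;
}
```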
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 4a235ba..5940e7e 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -63,7 +63,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -80,7 +80,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -100,7 +100,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -154,7 +154,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -207,7 +207,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -274,7 +274,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -288,7 +288,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -341,7 +341,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +361,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +380,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +400,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -486,7 +486,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -553,7 +553,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -567,7 +567,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -620,7 +620,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +640,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +659,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +679,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -764,7 +764,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -885,7 +885,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +904,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +962,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +986,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1115,8 +1115,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
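[Reviewer note: the failure-order constraint documented above is the same one stdatomic imposes. A minimal standalone C11 sketch, assuming nothing DPDK-specific, showing that on failure the failure order applies and `*expected` is reloaded with the observed value:]

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Compare-exchange with explicit success/failure orders. The failure
 * order must not be release or acq_rel (nor stronger than success);
 * relaxed, as used here, is always legal. On failure *expected is
 * updated to the value actually seen. */
static bool try_bump(atomic_uint *v, unsigned int *expected)
{
	return atomic_compare_exchange_strong_explicit(v, expected,
	    *expected + 1,
	    memory_order_acquire,   /* success */
	    memory_order_relaxed);  /* failure */
}
```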
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index bebfa95..256309e 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -36,13 +36,11 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -54,13 +52,11 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -72,42 +68,40 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -125,16 +119,14 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
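[Reviewer note: under enable_stdatomic=true the converted wait-until-equal helpers reduce to a polling load with the caller-chosen order. A standalone C11 sketch of the 32-bit variant; rte_pause() is replaced by an empty spin here for illustration:]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Poll *addr with the given load order (acquire or relaxed) until it
 * holds the expected value. The real helper issues a CPU-relax hint
 * (rte_pause) between iterations. */
static void wait_until_equal_32(_Atomic uint32_t *addr, uint32_t expected,
    memory_order order)
{
	while (atomic_load_explicit(addr, order) != expected)
		;  /* rte_pause() would go here */
}
```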
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 24ebec6..c788705 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -58,7 +58,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ RTE_ATOMIC(int32_t) cnt;
} rte_rwlock_t;
/**
@@ -93,21 +93,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -128,20 +128,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -159,7 +159,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -179,10 +179,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -202,22 +202,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -234,7 +237,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -248,7 +251,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
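[Reviewer note: the rwlock reader path above is an optimistic acquire fetch-add that is backed out with a release fetch-sub when a writer raced in. A standalone C11 sketch of the trylock using the same bit encoding (constants copied from the header for illustration):]

```c
#include <stdatomic.h>
#include <stdint.h>

#define RWL_WAIT  0x1                    /* writer pending */
#define RWL_WRITE 0x2                    /* writer active */
#define RWL_MASK  (RWL_WAIT | RWL_WRITE)
#define RWL_READ  0x4                    /* one reader = +4 */

/* Register a reader optimistically; if any writer bit was set, undo
 * the registration and report busy (-EBUSY in the real API). */
static int read_trylock(_Atomic int32_t *cnt)
{
	int32_t x = atomic_fetch_add_explicit(cnt, RWL_READ,
	    memory_order_acquire) + RWL_READ;
	if (x & RWL_MASK) {
		atomic_fetch_sub_explicit(cnt, RWL_READ,
		    memory_order_release);
		return -1;
	}
	return 0;
}
```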
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index e18f0cd..23fb048 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -29,7 +29,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile RTE_ATOMIC(int) locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -66,10 +66,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -90,7 +90,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -113,9 +113,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -129,7 +128,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
/**
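[Reviewer note: with the conversion, the spinlock's weak-vs-strong CAS choice is carried by the function name rather than a boolean argument. A standalone C11 sketch of the trylock/unlock pair, assuming nothing DPDK-specific:]

```c
#include <stdatomic.h>

/* Trylock: strong compare-exchange from 0 to 1 (spurious failure
 * disallowed), acquire on success, relaxed on failure. */
static int spin_trylock(_Atomic int *locked)
{
	int exp = 0;
	return atomic_compare_exchange_strong_explicit(locked, &exp, 1,
	    memory_order_acquire, memory_order_relaxed);
}

/* Unlock: plain release store of 0. */
static void spin_unlock(_Atomic int *locked)
{
	atomic_store_explicit(locked, 0, memory_order_release);
}
```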
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 18e63eb..8c75377 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -33,8 +33,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ RTE_ATOMIC(struct rte_mcslock *) next;
+ RTE_ATOMIC(int) locked; /* 1 if the queue locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -49,13 +49,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -63,7 +63,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -77,19 +77,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -101,34 +101,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(RTE_ATOMIC(rte_mcslock_t *) *msl, RTE_ATOMIC(rte_mcslock_t *) me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ RTE_ATOMIC(uintptr_t) *next;
+ next = (__rte_atomic uintptr_t *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -142,10 +142,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -156,8 +156,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -169,9 +169,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(RTE_ATOMIC(rte_mcslock_t *) msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
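[Reviewer note: the interesting conversion in the MCS lock is the acq_rel exchange on the queue tail, which both publishes the node and tells the caller whether the queue was empty. A standalone C11 sketch of just that step (node layout is illustrative):]

```c
#include <stdatomic.h>
#include <stddef.h>

struct node {
	_Atomic(struct node *) next;
	_Atomic int locked;
};

/* Swap ourselves in as the new tail. A NULL previous tail means the
 * queue was empty and the lock is acquired immediately; otherwise the
 * caller must queue behind prev and spin on me->locked. */
static int mcs_tail_swap(_Atomic(struct node *) *tail, struct node *me)
{
	atomic_store_explicit(&me->locked, 1, memory_order_relaxed);
	atomic_store_explicit(&me->next, NULL, memory_order_relaxed);
	struct node *prev = atomic_exchange_explicit(tail, me,
	    memory_order_acq_rel);
	return prev == NULL;
}
```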
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index 790be71..79feeea 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -41,8 +41,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ RTE_ATOMIC(uint16_t) in;
+ RTE_ATOMIC(uint16_t) out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -117,14 +117,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -140,7 +140,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -161,8 +161,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -173,10 +174,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -193,10 +195,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
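[Reviewer note: in the phase-fair read path above, a single acquire fetch-add both registers the reader and samples the writer phase bits. A standalone C11 sketch of that entry step; the constants mirror the header's encoding but are reproduced here only for illustration:]

```c
#include <stdatomic.h>
#include <stdint.h>

#define PF_RINC  0x100   /* reader increment (upper byte) */
#define PF_WBITS 0x3     /* writer present | phase id bits */

/* Register a reader and test for a writer in one atomic op. Returns 1
 * when no writer is present (lock held); otherwise the real code spins
 * until the sampled phase bits change. */
static int pf_read_enter(_Atomic uint16_t *rd_in)
{
	uint16_t w = atomic_fetch_add_explicit(rd_in, PF_RINC,
	    memory_order_acquire) & PF_WBITS;
	return w == 0;
}
```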
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index 098af26..4f9cefb 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -32,7 +32,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ RTE_ATOMIC(uint32_t) sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -106,11 +106,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -161,9 +161,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -205,12 +205,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -237,7 +237,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
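[Reviewer note: the seqcount conversion keeps the classic protocol: writers make sn odd for the duration of an update, readers retry when sn was odd or changed. A standalone C11 sketch of the whole protocol, assuming nothing DPDK-specific:]

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

static _Atomic uint32_t sn;  /* even: idle, odd: write in progress */

static void write_begin(void)
{
	uint32_t v = atomic_load_explicit(&sn, memory_order_relaxed) + 1;
	atomic_store_explicit(&sn, v, memory_order_relaxed);
	/* keep data stores after the sn store */
	atomic_thread_fence(memory_order_release);
}

static void write_end(void)
{
	uint32_t v = atomic_load_explicit(&sn, memory_order_relaxed) + 1;
	/* synchronizes-with the acquire load in read_begin() */
	atomic_store_explicit(&sn, v, memory_order_release);
}

static uint32_t read_begin(void)
{
	return atomic_load_explicit(&sn, memory_order_acquire);
}

/* True if the snapshot taken since read_begin() must be discarded. */
static bool read_retry(uint32_t begin)
{
	/* keep data loads before the sn reload */
	atomic_thread_fence(memory_order_acquire);
	return (begin & 1) ||
	    begin != atomic_load_explicit(&sn, memory_order_relaxed);
}
```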
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index e22d119..7d39bca 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -30,10 +30,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ RTE_ATOMIC(uint32_t) tickets;
struct {
- uint16_t current;
- uint16_t next;
+ RTE_ATOMIC(uint16_t) current;
+ RTE_ATOMIC(uint16_t) next;
} s;
} rte_ticketlock_t;
@@ -51,7 +51,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -63,8 +63,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -76,8 +77,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -92,12 +93,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -116,7 +118,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -127,7 +129,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ RTE_ATOMIC(int) user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -147,7 +149,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -162,9 +164,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -179,8 +181,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -198,10 +200,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index d587591..b403edd 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -33,7 +33,7 @@
#include <rte_stdatomic.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef RTE_ATOMIC(uint64_t) rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -359,7 +359,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
index 3c82845..0510b8f 100644
--- a/lib/eal/loongarch/include/rte_atomic.h
+++ b/lib/eal/loongarch/include/rte_atomic.h
@@ -35,9 +35,9 @@
#define rte_io_rmb() rte_mb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..7382412 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/riscv/include/rte_atomic.h b/lib/eal/riscv/include/rte_atomic.h
index 4b4633c..2603bc9 100644
--- a/lib/eal/riscv/include/rte_atomic.h
+++ b/lib/eal/riscv/include/rte_atomic.h
@@ -40,9 +40,9 @@
#define rte_io_rmb() asm volatile("fence ir, ir" : : : "memory")
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..3b3a9a4 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,17 +82,17 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __rte_atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..a6c23ea 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile RTE_ATOMIC(int) *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 3/6] eal: add rte atomic qualifier with casts
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (3 subsequent siblings)
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the optional rte atomics inline
functions to prevent cascading the need to pass __rte_atomic qualified
arguments.
Warning: this is really implementation dependent and is being done
temporarily to avoid having to convert more of the DPDK libraries and tests
in the initial series that introduces the API. The casts assume the
__rte_atomic qualified and unqualified types share the same ABI; that
assumption is only a risk that may be realized when enable_stdatomic=true.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-17 21:42 ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 5/6] bpf: " Tyler Retzlaff
` (2 subsequent siblings)
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the distributor library for the EAL optional atomics API changes.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..2f29343 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile RTE_ATOMIC(int64_t) bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-17 21:42 ` [PATCH v5 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-21 22:27 ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the bpf library for the EAL optional atomics API changes.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..7a8e4a6 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ RTE_ATOMIC(uint32_t) use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((__rte_atomic uint32_t *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v5 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-17 21:42 ` [PATCH v5 5/6] bpf: " Tyler Retzlaff
@ 2023-08-17 21:42 ` Tyler Retzlaff
2023-08-21 22:27 ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
6 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-17 21:42 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using the compiler __atomic_xxx builtins. DPDK now requires
the use of the rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
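The awk rules in checkpatches.sh can be approximated with a plain grep over the added lines of a patch; a hedged sketch (the sample patch content below is made up for illustration):

```shell
# Flag added lines that call a compiler __atomic_xxx builtin, mirroring
# the checkpatches.sh rule; sample patch text is illustrative only.
patch='+++ b/lib/foo/foo.c
+	__atomic_fetch_add(&x, 1, __ATOMIC_RELAXED);'

if printf '%s\n' "$patch" | grep -Eq '^\+.*__atomic_[a-z0-9_]+\('; then
	echo 'Using __atomic_xxx built-ins, prefer rte_atomic_xxx'
fi
# prints: Using __atomic_xxx built-ins, prefer rte_atomic_xxx
```

The real check additionally restricts matches to the lib, drivers, app and examples folders via check-forbidden-tokens.awk, which this sketch omits.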
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
devtools/checkpatches.sh | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..84c096d 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -107,15 +107,15 @@ check_forbidden_additions() { # <patch>
awk -v FOLDERS="lib drivers app examples" \
-v EXPRESSIONS="__atomic_thread_fence\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_thread_fence' \
+ -v MESSAGE='Using __atomic_thread_fence built-in, prefer rte_atomic_thread_fence' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
+ # refrain from using compiler __atomic_xxx builtins
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
+ -v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* RE: [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API
2023-08-17 19:09 ` Tyler Retzlaff
@ 2023-08-18 6:55 ` Morten Brørup
0 siblings, 0 replies; 82+ messages in thread
From: Morten Brørup @ 2023-08-18 6:55 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Thursday, 17 August 2023 21.10
>
> On Thu, Aug 17, 2023 at 01:45:21PM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Wednesday, 16 August 2023 23.39
> > >
> > > Provide API for atomic operations in the rte namespace that may
> > > optionally be configured to use C11 atomics with meson
> > > option enable_stdatomics=true
> > >
> > > Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> > > Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
> > > ---
> >
> > Speed blindness during my review... I have now spotted a couple of copy-
> paste typos:
> >
> > > +#define rte_atomic_compare_exchange_weak_explicit( \
> > > + ptr, expected, desired, succ_memorder, fail_memorder) \
> > > + atomic_compare_exchange_strong_explicit( \
> >
> > atomic_compare_exchange_weak_explicit, not strong.
>
> yikes, thanks for catching that cut & paste error
>
> >
> > > + ptr, expected, desired, succ_memorder, fail_memorder)
> > > +
> >
> > [...]
> >
> > > +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> > > + atomic_flag_clear(ptr, memorder)
> >
> > atomic_flag_clear_explicit(ptr, memorder), missing _explicit.
>
> yes, currently unused, otherwise it would have failed to compile
Yes, I guessed something similar, when I spotted this... two parameters being passed to a single-parameter function.
>
> i'll correct this too.
>
> thank you for the careful review i look at the diffs over and over and
> still it's hard to spot subtle swaps/exchanges of things.
Yes, when reviewing many similar lines of code, the probability of overlooking something increases rapidly, even for external reviewers.
About 20 years ago, a Danish consulting company realized this and turned it into something positive. They set up a highly specialized organization, Specialisterne (https://specialisterne.com/), to offer autistic consultants for tasks like this. These consultants thrive on repetitive tasks and are excellent at spotting subtle differences that most people would likely overlook.
^ permalink raw reply [flat|nested] 82+ messages in thread
* RE: [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-17 19:14 ` Tyler Retzlaff
@ 2023-08-18 7:13 ` Morten Brørup
2023-08-22 18:14 ` Tyler Retzlaff
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-08-18 7:13 UTC (permalink / raw)
To: Tyler Retzlaff, Bruce Richardson
Cc: dev, techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Thursday, 17 August 2023 21.14
>
> On Thu, Aug 17, 2023 at 01:57:01PM +0200, Morten Brørup wrote:
> > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > Sent: Wednesday, 16 August 2023 23.39
> > >
> > > Refrain from using compiler __atomic_xxx builtins. DPDK now requires
> > > the use of rte_atomic_<op>_explicit macros when operating on DPDK
> > > atomic variables.
> >
> > There is probably no end to how much can be added to checkpatches.
> >
> > You got the important stuff, so below are only further suggestions!
> >
> > [...]
> >
> > > - # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
> > > + # refrain from using compiler __atomic_xxx builtins
> > > awk -v FOLDERS="lib drivers app examples" \
> > > - -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
> > > + -v EXPRESSIONS="__atomic_.*\\\(" \
> > > -v RET_ON_FAIL=1 \
> > > - -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
> > > + -v MESSAGE='Using __atomic_xxx builtins' \
> >
> > Alternatively:
> > -v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx'
> \
>
> i can adjust the wording as you suggest, no problem
>
> >
> > > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> > > "$1" || res=1
> > >
> > > --
> > > 1.8.3.1
> >
> > This could be updated too:
> >
> > # refrain from using compiler __atomic_thread_fence()
> > # It should be avoided on x86 for SMP case.
> > awk -v FOLDERS="lib drivers app examples" \
> > -v EXPRESSIONS="__atomic_thread_fence\\\(" \
> > -v RET_ON_FAIL=1 \
> > - -v MESSAGE='Using __atomic_thread_fence' \
> > + -v MESSAGE='Using __atomic_thread_fence built-in, prefer
> __rte_atomic_thread_fence' \
> > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
>
> yeah, i left this one separate. i think the advice is actually to use
> rte_atomic_thread_fence, which may be an inline function that uses
> __rte_atomic_thread_fence
I now noticed that the comment to this says "# [...] should be avoided on x86 for SMP case." Wouldn't that apply to __rte_atomic_thread_fence too? So would we want a similar warning for __rte_atomic_thread_fence in checkpatches; i.e. warnings for all variants of [__[rte_]]atomic_thread_fence?
If the use of [__[rte_]]atomic_thread_fence is only conditionally prohibited, I think that we should move the warning to the definition of __rte_atomic_thread_fence itself, gated by x64 and SMP being defined. The CI would catch its use in DPDK, but still allow application developers to use it (for targets not being x86 SMP). What do others think?
>
> > "$1" || res=1
> >
> > You could also add C11 variants of these tests...
> >
> atomic_(load|store|exchange|compare_exchange_(strong|weak)|fetch_(add|sub|and|
> xor|or|nand)|flag_(test_and_set|clear))[_explicit], and
> > atomic_thread_fence.
> >
> > And a test for using "_Atomic".
>
> direct use would fail compilation in the CI so it would be caught
> fairly early. i'm not sure i want to get into the business of trying to
> add redundant (albeit cheaper) earlier checks.
>
> though if there is a general call for this from the reviewers i'll add
> them.
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v5 0/6] optional rte optional stdatomics API
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
` (5 preceding siblings ...)
2023-08-17 21:42 ` [PATCH v5 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-21 22:27 ` Konstantin Ananyev
6 siblings, 0 replies; 82+ messages in thread
From: Konstantin Ananyev @ 2023-08-21 22:27 UTC (permalink / raw)
To: Tyler Retzlaff, dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, David Hunt, Thomas Monjalon, David Marchand
> This series introduces API additions prefixed in the rte namespace that allow
> the optional use of stdatomics.h from C11 using enable_stdatomics=true for
> targets where enable_stdatomics=false no functional change is intended.
>
> Be aware this does not contain all changes to use stdatomics across the DPDK
> tree it only introduces the minimum to allow the option to be used which is
> a pre-requisite for a clean CI (probably using clang) that can be run
> with enable_stdatomics=true enabled.
>
> It is planned that subsequent series will be introduced per lib/driver as
> appropriate to further enable stdatomics use when enable_stdatomics=true.
>
> Notes:
>
> * Additional libraries beyond EAL make visible atomics use across the
> API/ABI surface they will be converted in the subsequent series.
>
> * The eal: add rte atomic qualifier with casts patch needs some discussion
> as to whether or not the legacy rte_atomic APIs should be converted to
> work with enable_stdatomic=true right now some implementation dependent
> casts are used to prevent cascading / having to convert too much in
> the initial series.
>
> * Windows will obviously need complete conversion of libraries including
> atomics that are not crossing API/ABI boundaries. Those conversions will
> be introduced in separate series alongside the existing msvc series.
>
> Please keep in mind we would like to prioritize the review / acceptance of
> this patch since it needs to be completed in the 23.11 merge window.
>
> Thank you all for the discussion that led to the formation of this series.
>
> v5:
> * Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
> fix documentation generation failure
> * Fix two typos in expansion of C11 atomics macros strong -> weak and
> add missing _explicit
> * Adjust devtools/checkpatches messages based on feedback. i have chosen
> not to try and catch use of C11 atomics or _Atomic since using those
> directly will be picked up by existing CI passes where by compilation
> error where enable_stdatomic=false (the default for most platforms)
>
> v4:
> * Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
> belongs (a mistake in v3)
> * Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
> their use as specified or qualified contexts.
>
> v3:
> * Remove comments from APIs mentioning the mapping to C++ memory model
> memory orders
> * Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
> where _Atomic is used as a type specifier to declare variables. The
> macro allows more clarity about what the atomic type being specified
> is. e.g. _Atomic(T *) vs _Atomic(T) it is easier to understand that
> the former is an atomic pointer type and the latter is an atomic
> type. it also has the benefit of (in the future) being interoperable
> with c++23 syntactically
> note: Morten i have retained your 'reviewed-by' tags if you disagree
> given the changes in the above version please indicate as such but
> i believe the changes are in the spirit of the feedback you provided
>
> v2:
> * Wrap meson_options.txt option description to newline and indent to
> be consistent with other options.
> * Provide separate typedef of rte_memory_order for enable_stdatomic=true
> VS enable_stdatomic=false instead of a single typedef to int
> note: slight tweak to reviewers feedback i've chosen to use a typedef
> for both enable_stdatomic={true,false} (just seemed more consistent)
> * Bring in assert.h and use static_assert macro instead of _Static_assert
> keyword to better interoperate with c/c++
> * Directly include rte_stdatomic.h where into other places it is consumed
> instead of hacking it globally into rte_config.h
> * Provide and use __rte_atomic_thread_fence to allow conditional expansion
> within the body of existing rte_atomic_thread_fence inline function to
> maintain per-arch optimizations when enable_stdatomic=false
>
> Tyler Retzlaff (6):
> eal: provide rte stdatomics optional atomics API
> eal: adapt EAL to present rte optional atomics API
> eal: add rte atomic qualifier with casts
> distributor: adapt for EAL optional atomics API changes
> bpf: adapt for EAL optional atomics API changes
> devtools: forbid new direct use of GCC atomic builtins
>
> app/test/test_mcslock.c | 6 +-
> config/meson.build | 1 +
> devtools/checkpatches.sh | 8 +-
> doc/api/doxy-api.conf.in | 1 +
> lib/bpf/bpf_pkt.c | 6 +-
> lib/distributor/distributor_private.h | 2 +-
> lib/distributor/rte_distributor_single.c | 44 +++----
> lib/eal/arm/include/rte_atomic_32.h | 4 +-
> lib/eal/arm/include/rte_atomic_64.h | 36 +++---
> lib/eal/arm/include/rte_pause_64.h | 26 ++--
> lib/eal/arm/rte_power_intrinsics.c | 8 +-
> lib/eal/common/eal_common_trace.c | 16 +--
> lib/eal/include/generic/rte_atomic.h | 67 +++++++----
> lib/eal/include/generic/rte_pause.h | 50 ++++----
> lib/eal/include/generic/rte_rwlock.h | 48 ++++----
> lib/eal/include/generic/rte_spinlock.h | 20 ++--
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_mcslock.h | 51 ++++----
> lib/eal/include/rte_pflock.h | 25 ++--
> lib/eal/include/rte_seqcount.h | 19 +--
> lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
> lib/eal/include/rte_ticketlock.h | 43 +++----
> lib/eal/include/rte_trace_point.h | 5 +-
> lib/eal/loongarch/include/rte_atomic.h | 4 +-
> lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
> lib/eal/riscv/include/rte_atomic.h | 4 +-
> lib/eal/x86/include/rte_atomic.h | 8 +-
> lib/eal/x86/include/rte_spinlock.h | 2 +-
> lib/eal/x86/rte_power_intrinsics.c | 7 +-
> meson_options.txt | 2 +
> 30 files changed, 499 insertions(+), 267 deletions(-)
> create mode 100644 lib/eal/include/rte_stdatomic.h
>
Series-acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-18 7:13 ` Morten Brørup
@ 2023-08-22 18:14 ` Tyler Retzlaff
0 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 18:14 UTC (permalink / raw)
To: Morten Brørup
Cc: Bruce Richardson, dev, techboard, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon, David Marchand
On Fri, Aug 18, 2023 at 09:13:29AM +0200, Morten Brørup wrote:
> > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > Sent: Thursday, 17 August 2023 21.14
> >
> > On Thu, Aug 17, 2023 at 01:57:01PM +0200, Morten Brørup wrote:
> > > > From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> > > > Sent: Wednesday, 16 August 2023 23.39
> > > >
> > > > Refrain from using compiler __atomic_xxx builtins. DPDK now requires
> > > > the use of rte_atomic_<op>_explicit macros when operating on DPDK
> > > > atomic variables.
> > >
> > > There is probably no end to how much can be added to checkpatches.
> > >
> > > You got the important stuff, so below are only further suggestions!
> > >
> > > [...]
> > >
> > > > - # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
> > > > + # refrain from using compiler __atomic_xxx builtins
> > > > awk -v FOLDERS="lib drivers app examples" \
> > > > - -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
> > > > + -v EXPRESSIONS="__atomic_.*\\\(" \
> > > > -v RET_ON_FAIL=1 \
> > > > - -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
> > > > + -v MESSAGE='Using __atomic_xxx builtins' \
> > >
> > > Alternatively:
> > > -v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx'
> > \
> >
> > i can adjust the wording as you suggest, no problem
> >
> > >
> > > > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> > > > "$1" || res=1
> > > >
> > > > --
> > > > 1.8.3.1
> > >
> > > This could be updated too:
> > >
> > > # refrain from using compiler __atomic_thread_fence()
> > > # It should be avoided on x86 for SMP case.
> > > awk -v FOLDERS="lib drivers app examples" \
> > > -v EXPRESSIONS="__atomic_thread_fence\\\(" \
> > > -v RET_ON_FAIL=1 \
> > > - -v MESSAGE='Using __atomic_thread_fence' \
> > > + -v MESSAGE='Using __atomic_thread_fence built-in, prefer
> > __rte_atomic_thread_fence' \
> > > -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
> >
> > yeah, i left this one separate. i think the advice is actually to use
> > rte_atomic_thread_fence, which may be an inline function that uses
> > __rte_atomic_thread_fence
>
> I now noticed that the comment to this says "# [...] should be avoided on x86 for SMP case." Wouldn't that apply to __rte_atomic_thread_fence too? So would we want a similar warning for __rte_atomic_thread_fence in checkpatches; i.e. warnings for all variants of [__[rte_]]atomic_thread_fence?
to understand how this applies we need to look at
x86/include/rte_atomic.h
static __rte_always_inline void
rte_atomic_thread_fence(rte_memory_order memorder)
{
if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
__rte_atomic_thread_fence(memorder);
}
So what i've done is this:
You *should* always use rte_atomic_thread_fence() because it does the dance to
give you what you are supposed to on "x86 for the SMP case" right?
>
> If the use of [__[rte_]]atomic_thread_fence is only conditionally prohibited, I think that we should move the warning to the definition of __rte_atomic_thread_fence itself, gated by x64 and SMP being defined. The CI would catch its use in DPDK, but still allow application developers to use it (for targets not being x86 SMP). What do others think?
I'll tweak the patch to warn about __rte_atomic and __atomic since the
correct usage is always using rte_atomic_xxx
>
> >
> > > "$1" || res=1
> > >
> > > You could also add C11 variants of these tests...
> > >
> > atomic_(load|store|exchange|compare_exchange_(strong|weak)|fetch_(add|sub|and|
> > xor|or|nand)|flag_(test_and_set|clear))[_explicit], and
> > > atomic_thread_fence.
> > >
> > > And a test for using "_Atomic".
> >
> > direct use would fail compilation in the CI so it would be caught
> > fairly early. i'm not sure i want to get into the business of trying to
> > add redundant (albeit cheaper) earlier checks.
> >
> > though if there is a general call for this from the reviewers i'll add
> > them.
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v6 0/6] rte atomics API for optional stdatomic
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
` (9 preceding siblings ...)
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
` (7 more replies)
10 siblings, 8 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
This series introduces API additions prefixed in the rte namespace that allow
the optional use of stdatomic.h from C11 using enable_stdatomics=true; for
targets where enable_stdatomics=false, no functional change is intended.
Be aware this does not contain all changes to use stdatomics across the DPDK
tree; it only introduces the minimum to allow the option to be used, which is
a pre-requisite for a clean CI (probably using clang) that can be run
with enable_stdatomics=true.
It is planned that subsequent series will be introduced per lib/driver as
appropriate to further enable stdatomics use when enable_stdatomics=true.
Notes:
* Additional libraries beyond EAL make atomics use visible across the
API/ABI surface; they will be converted in the subsequent series.
* The "eal: add rte atomic qualifier with casts" patch needs some discussion
as to whether or not the legacy rte_atomic APIs should be converted to
work with enable_stdatomic=true. Right now some implementation-dependent
casts are used to prevent cascading / having to convert too much in
the initial series.
* Windows will obviously need complete conversion of libraries, including
atomics that are not crossing API/ABI boundaries. Those conversions will be
introduced in separate series alongside the existing msvc series.
Please keep in mind we would like to prioritize the review / acceptance of
this series since it needs to be completed in the 23.11 merge window.
Thank you all for the discussion that led to the formation of this series.
v6:
* Adjust checkpatches to warn about use of __rte_atomic_thread_fence
and suggest use of rte_atomic_thread_fence. Use the existing, more
generic check for __atomic_xxx to catch use of __atomic_thread_fence
and recommend rte_atomic_xxx.
v5:
* Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
fix documentation generation failure
* Fix two typos in expansion of C11 atomics macros strong -> weak and
add missing _explicit
* Adjust devtools/checkpatches messages based on feedback. i have chosen
not to try and catch use of C11 atomics or _Atomic since using those
directly will be picked up, via compilation error, by existing CI passes
where enable_stdatomic=false (the default for most platforms)
v4:
* Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
belongs (a mistake in v3)
* Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
their use as specified or qualified contexts.
v3:
* Remove comments from APIs mentioning the mapping to C++ memory model
memory orders
* Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
where _Atomic is used as a type specifier to declare variables. The
macro allows more clarity about what the atomic type being specified
is: e.g. with _Atomic(T *) vs _Atomic(T) it is easier to see that
the former is an atomic pointer type and the latter is an atomic
type. It also has the benefit of (in the future) being syntactically
interoperable with c++23.
note: Morten, i have retained your 'reviewed-by' tags; if you disagree
given the changes in the above version please indicate as such, but
i believe the changes are in the spirit of the feedback you provided
v2:
* Wrap meson_options.txt option description to newline and indent to
be consistent with other options.
* Provide separate typedef of rte_memory_order for enable_stdatomic=true
vs enable_stdatomic=false instead of a single typedef to int
note: slight tweak to reviewers' feedback, i've chosen to use a typedef
for both enable_stdatomic={true,false} (just seemed more consistent)
* Bring in assert.h and use static_assert macro instead of _Static_assert
keyword to better interoperate with c/c++
* Directly include rte_stdatomic.h into the other places where it is
consumed instead of hacking it globally into rte_config.h
* Provide and use __rte_atomic_thread_fence to allow conditional expansion
within the body of existing rte_atomic_thread_fence inline function to
maintain per-arch optimizations when enable_stdatomic=false
Tyler Retzlaff (6):
eal: provide rte stdatomics optional atomics API
eal: adapt EAL to present rte optional atomics API
eal: add rte atomic qualifier with casts
distributor: adapt for EAL optional atomics API changes
bpf: adapt for EAL optional atomics API changes
devtools: forbid new direct use of GCC atomic builtins
app/test/test_mcslock.c | 6 +-
config/meson.build | 1 +
devtools/checkpatches.sh | 12 +-
doc/api/doxy-api.conf.in | 1 +
lib/bpf/bpf_pkt.c | 6 +-
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 +++----
lib/eal/arm/include/rte_atomic_32.h | 4 +-
lib/eal/arm/include/rte_atomic_64.h | 36 +++---
lib/eal/arm/include/rte_pause_64.h | 26 ++--
lib/eal/arm/rte_power_intrinsics.c | 8 +-
lib/eal/common/eal_common_trace.c | 16 +--
lib/eal/include/generic/rte_atomic.h | 67 +++++++----
lib/eal/include/generic/rte_pause.h | 50 ++++----
lib/eal/include/generic/rte_rwlock.h | 48 ++++----
lib/eal/include/generic/rte_spinlock.h | 20 ++--
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 51 ++++----
lib/eal/include/rte_pflock.h | 25 ++--
lib/eal/include/rte_seqcount.h | 19 +--
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 43 +++----
lib/eal/include/rte_trace_point.h | 5 +-
lib/eal/loongarch/include/rte_atomic.h | 4 +-
lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
lib/eal/riscv/include/rte_atomic.h | 4 +-
lib/eal/x86/include/rte_atomic.h | 8 +-
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 7 +-
meson_options.txt | 2 +
30 files changed, 501 insertions(+), 269 deletions(-)
create mode 100644 lib/eal/include/rte_stdatomic.h
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
* [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-09-28 8:06 ` Thomas Monjalon
2023-08-22 21:00 ` [PATCH v6 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
` (6 subsequent siblings)
7 siblings, 1 reply; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Provide API for atomic operations in the rte namespace that may
optionally be configured to use C11 atomics with meson
option enable_stdatomics=true
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
config/meson.build | 1 +
doc/api/doxy-api.conf.in | 1 +
lib/eal/include/generic/rte_atomic.h | 1 +
lib/eal/include/generic/rte_pause.h | 1 +
lib/eal/include/generic/rte_rwlock.h | 1 +
lib/eal/include/generic/rte_spinlock.h | 1 +
lib/eal/include/meson.build | 1 +
lib/eal/include/rte_mcslock.h | 1 +
lib/eal/include/rte_pflock.h | 1 +
lib/eal/include/rte_seqcount.h | 1 +
lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++++
lib/eal/include/rte_ticketlock.h | 1 +
lib/eal/include/rte_trace_point.h | 1 +
meson_options.txt | 2 +
14 files changed, 212 insertions(+)
create mode 100644 lib/eal/include/rte_stdatomic.h
diff --git a/config/meson.build b/config/meson.build
index d822371..ec49964 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -303,6 +303,7 @@ endforeach
# set other values pulled from the build options
dpdk_conf.set('RTE_MAX_ETHPORTS', get_option('max_ethports'))
dpdk_conf.set('RTE_LIBEAL_USE_HPET', get_option('use_hpet'))
+dpdk_conf.set('RTE_ENABLE_STDATOMIC', get_option('enable_stdatomic'))
dpdk_conf.set('RTE_ENABLE_TRACE_FP', get_option('enable_trace_fp'))
# values which have defaults which may be overridden
dpdk_conf.set('RTE_MAX_VFIO_GROUPS', 64)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a88accd..51e8586 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -84,6 +84,7 @@ INPUT += @API_EXAMPLES@
FILE_PATTERNS = rte_*.h \
cmdline.h
PREDEFINED = __DOXYGEN__ \
+ RTE_ATOMIC \
RTE_HAS_CPUSET \
VFIO_PRESENT \
__rte_lockable= \
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 82b9bfc..4a235ba 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -15,6 +15,7 @@
#include <stdint.h>
#include <rte_compat.h>
#include <rte_common.h>
+#include <rte_stdatomic.h>
#ifdef __DOXYGEN__
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index ec1f418..bebfa95 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -16,6 +16,7 @@
#include <assert.h>
#include <rte_common.h>
#include <rte_atomic.h>
+#include <rte_stdatomic.h>
/**
* Pause CPU execution for a short while
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 9e083bb..24ebec6 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -32,6 +32,7 @@
#include <rte_common.h>
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_rwlock_t type.
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index c50ebaa..e18f0cd 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -23,6 +23,7 @@
#endif
#include <rte_lock_annotations.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_spinlock_t type.
diff --git a/lib/eal/include/meson.build b/lib/eal/include/meson.build
index a0463ef..e94b056 100644
--- a/lib/eal/include/meson.build
+++ b/lib/eal/include/meson.build
@@ -42,6 +42,7 @@ headers += files(
'rte_seqlock.h',
'rte_service.h',
'rte_service_component.h',
+ 'rte_stdatomic.h',
'rte_string_fns.h',
'rte_tailq.h',
'rte_thread.h',
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index a805cb2..18e63eb 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -27,6 +27,7 @@
#include <rte_common.h>
#include <rte_pause.h>
#include <rte_branch_prediction.h>
+#include <rte_stdatomic.h>
/**
* The rte_mcslock_t type.
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index a3f7291..790be71 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -34,6 +34,7 @@
#include <rte_compat.h>
#include <rte_common.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_pflock_t type.
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index ff62708..098af26 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -26,6 +26,7 @@
#include <rte_atomic.h>
#include <rte_branch_prediction.h>
#include <rte_compat.h>
+#include <rte_stdatomic.h>
/**
* The RTE seqcount type.
diff --git a/lib/eal/include/rte_stdatomic.h b/lib/eal/include/rte_stdatomic.h
new file mode 100644
index 0000000..41f90b4
--- /dev/null
+++ b/lib/eal/include/rte_stdatomic.h
@@ -0,0 +1,198 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2023 Microsoft Corporation
+ */
+
+#ifndef _RTE_STDATOMIC_H_
+#define _RTE_STDATOMIC_H_
+
+#include <assert.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef RTE_ENABLE_STDATOMIC
+#ifdef __STDC_NO_ATOMICS__
+#error enable_stdatomics=true but atomics not supported by toolchain
+#endif
+
+#include <stdatomic.h>
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) _Atomic(type)
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic _Atomic
+
+/* The memory order is an enumerated type in C11. */
+typedef memory_order rte_memory_order;
+
+#define rte_memory_order_relaxed memory_order_relaxed
+#ifdef __ATOMIC_RELAXED
+static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
+ "rte_memory_order_relaxed == __ATOMIC_RELAXED");
+#endif
+
+#define rte_memory_order_consume memory_order_consume
+#ifdef __ATOMIC_CONSUME
+static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
+ "rte_memory_order_consume == __ATOMIC_CONSUME");
+#endif
+
+#define rte_memory_order_acquire memory_order_acquire
+#ifdef __ATOMIC_ACQUIRE
+static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
+ "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
+#endif
+
+#define rte_memory_order_release memory_order_release
+#ifdef __ATOMIC_RELEASE
+static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
+ "rte_memory_order_release == __ATOMIC_RELEASE");
+#endif
+
+#define rte_memory_order_acq_rel memory_order_acq_rel
+#ifdef __ATOMIC_ACQ_REL
+static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
+ "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
+#endif
+
+#define rte_memory_order_seq_cst memory_order_seq_cst
+#ifdef __ATOMIC_SEQ_CST
+static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
+ "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
+#endif
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ atomic_load_explicit(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ atomic_store_explicit(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ atomic_exchange_explicit(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ atomic_fetch_add_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ atomic_fetch_sub_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ atomic_fetch_and_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ atomic_fetch_xor_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ atomic_fetch_or_explicit(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ atomic_fetch_nand_explicit(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ atomic_flag_test_and_set_explicit(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ atomic_flag_clear_explicit(ptr, memorder)
+
+/* We provide internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ atomic_thread_fence(memorder)
+
+#else
+
+/* RTE_ATOMIC(type) is provided for use as a type specifier
+ * permitting designation of an rte atomic type.
+ */
+#define RTE_ATOMIC(type) type
+
+/* __rte_atomic is provided for type qualification permitting
+ * designation of an rte atomic qualified type-name.
+ */
+#define __rte_atomic
+
+/* The memory order is an integer type in GCC built-ins,
+ * not an enumerated type like in C11.
+ */
+typedef int rte_memory_order;
+
+#define rte_memory_order_relaxed __ATOMIC_RELAXED
+#define rte_memory_order_consume __ATOMIC_CONSUME
+#define rte_memory_order_acquire __ATOMIC_ACQUIRE
+#define rte_memory_order_release __ATOMIC_RELEASE
+#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
+#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
+
+#define rte_atomic_load_explicit(ptr, memorder) \
+ __atomic_load_n(ptr, memorder)
+
+#define rte_atomic_store_explicit(ptr, val, memorder) \
+ __atomic_store_n(ptr, val, memorder)
+
+#define rte_atomic_exchange_explicit(ptr, val, memorder) \
+ __atomic_exchange_n(ptr, val, memorder)
+
+#define rte_atomic_compare_exchange_strong_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 0, succ_memorder, fail_memorder)
+
+#define rte_atomic_compare_exchange_weak_explicit( \
+ ptr, expected, desired, succ_memorder, fail_memorder) \
+ __atomic_compare_exchange_n( \
+ ptr, expected, desired, 1, succ_memorder, fail_memorder)
+
+#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
+ __atomic_fetch_add(ptr, val, memorder)
+
+#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
+ __atomic_fetch_sub(ptr, val, memorder)
+
+#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
+ __atomic_fetch_and(ptr, val, memorder)
+
+#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
+ __atomic_fetch_xor(ptr, val, memorder)
+
+#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
+ __atomic_fetch_or(ptr, val, memorder)
+
+#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
+ __atomic_fetch_nand(ptr, val, memorder)
+
+#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
+ __atomic_test_and_set(ptr, memorder)
+
+#define rte_atomic_flag_clear_explicit(ptr, memorder) \
+ __atomic_clear(ptr, memorder)
+
+/* We provide an internal macro here to allow conditional expansion
+ * in the body of the per-arch rte_atomic_thread_fence inline functions.
+ */
+#define __rte_atomic_thread_fence(memorder) \
+ __atomic_thread_fence(memorder)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STDATOMIC_H_ */
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index 5db0d8a..e22d119 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -24,6 +24,7 @@
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_pause.h>
+#include <rte_stdatomic.h>
/**
* The rte_ticketlock_t type.
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index c6b6fcc..d587591 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -30,6 +30,7 @@
#include <rte_per_lcore.h>
#include <rte_string_fns.h>
#include <rte_uuid.h>
+#include <rte_stdatomic.h>
/** The tracepoint object. */
typedef uint64_t rte_trace_point_t;
diff --git a/meson_options.txt b/meson_options.txt
index 621e1ca..bb22bba 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -46,6 +46,8 @@ option('mbuf_refcnt_atomic', type: 'boolean', value: true, description:
'Atomically access the mbuf refcnt.')
option('platform', type: 'string', value: 'native', description:
'Platform to build, either "native", "generic" or a SoC. Please refer to the Linux build guide for more information.')
+option('enable_stdatomic', type: 'boolean', value: false, description:
+ 'enable use of C11 stdatomic')
option('enable_trace_fp', type: 'boolean', value: false, description:
'enable fast path trace points.')
option('tests', type: 'boolean', value: true, description:
--
1.8.3.1
* [PATCH v6 2/6] eal: adapt EAL to present rte optional atomics API
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
` (5 subsequent siblings)
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt the EAL public headers to use the rte optional atomics API instead of
directly using and exposing toolchain-specific atomic builtin intrinsics.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
app/test/test_mcslock.c | 6 ++--
lib/eal/arm/include/rte_atomic_32.h | 4 +--
lib/eal/arm/include/rte_atomic_64.h | 36 +++++++++++------------
lib/eal/arm/include/rte_pause_64.h | 26 ++++++++--------
lib/eal/arm/rte_power_intrinsics.c | 8 ++---
lib/eal/common/eal_common_trace.c | 16 +++++-----
lib/eal/include/generic/rte_atomic.h | 50 +++++++++++++++----------------
lib/eal/include/generic/rte_pause.h | 46 ++++++++++++-----------------
lib/eal/include/generic/rte_rwlock.h | 47 +++++++++++++++--------------
lib/eal/include/generic/rte_spinlock.h | 19 ++++++------
lib/eal/include/rte_mcslock.h | 50 +++++++++++++++----------------
lib/eal/include/rte_pflock.h | 24 ++++++++-------
lib/eal/include/rte_seqcount.h | 18 ++++++------
lib/eal/include/rte_ticketlock.h | 42 +++++++++++++-------------
lib/eal/include/rte_trace_point.h | 4 +--
lib/eal/loongarch/include/rte_atomic.h | 4 +--
lib/eal/ppc/include/rte_atomic.h | 54 +++++++++++++++++-----------------
lib/eal/riscv/include/rte_atomic.h | 4 +--
lib/eal/x86/include/rte_atomic.h | 8 ++---
lib/eal/x86/include/rte_spinlock.h | 2 +-
lib/eal/x86/rte_power_intrinsics.c | 6 ++--
21 files changed, 237 insertions(+), 237 deletions(-)
diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index 52e45e7..242c242 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -36,9 +36,9 @@
* lock multiple times.
*/
-rte_mcslock_t *p_ml;
-rte_mcslock_t *p_ml_try;
-rte_mcslock_t *p_ml_perf;
+RTE_ATOMIC(rte_mcslock_t *) p_ml;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_try;
+RTE_ATOMIC(rte_mcslock_t *) p_ml_perf;
static unsigned int count;
diff --git a/lib/eal/arm/include/rte_atomic_32.h b/lib/eal/arm/include/rte_atomic_32.h
index c00ab78..62fc337 100644
--- a/lib/eal/arm/include/rte_atomic_32.h
+++ b/lib/eal/arm/include/rte_atomic_32.h
@@ -34,9 +34,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/arm/include/rte_atomic_64.h b/lib/eal/arm/include/rte_atomic_64.h
index 6047911..75d8ba6 100644
--- a/lib/eal/arm/include/rte_atomic_64.h
+++ b/lib/eal/arm/include/rte_atomic_64.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------ 128 bit atomic operations -------------------------*/
@@ -107,33 +107,33 @@
*/
RTE_SET_USED(failure);
/* Find invalid memory order */
- RTE_ASSERT(success == __ATOMIC_RELAXED ||
- success == __ATOMIC_ACQUIRE ||
- success == __ATOMIC_RELEASE ||
- success == __ATOMIC_ACQ_REL ||
- success == __ATOMIC_SEQ_CST);
+ RTE_ASSERT(success == rte_memory_order_relaxed ||
+ success == rte_memory_order_acquire ||
+ success == rte_memory_order_release ||
+ success == rte_memory_order_acq_rel ||
+ success == rte_memory_order_seq_cst);
rte_int128_t expected = *exp;
rte_int128_t desired = *src;
rte_int128_t old;
#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
- if (success == __ATOMIC_RELAXED)
+ if (success == rte_memory_order_relaxed)
__cas_128_relaxed(dst, exp, desired);
- else if (success == __ATOMIC_ACQUIRE)
+ else if (success == rte_memory_order_acquire)
__cas_128_acquire(dst, exp, desired);
- else if (success == __ATOMIC_RELEASE)
+ else if (success == rte_memory_order_release)
__cas_128_release(dst, exp, desired);
else
__cas_128_acq_rel(dst, exp, desired);
old = *exp;
#else
-#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
-#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
- (mo) == __ATOMIC_SEQ_CST)
+#define __HAS_ACQ(mo) ((mo) != rte_memory_order_relaxed && (mo) != rte_memory_order_release)
+#define __HAS_RLS(mo) ((mo) == rte_memory_order_release || (mo) == rte_memory_order_acq_rel || \
+ (mo) == rte_memory_order_seq_cst)
- int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
- int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+ int ldx_mo = __HAS_ACQ(success) ? rte_memory_order_acquire : rte_memory_order_relaxed;
+ int stx_mo = __HAS_RLS(success) ? rte_memory_order_release : rte_memory_order_relaxed;
#undef __HAS_ACQ
#undef __HAS_RLS
@@ -153,7 +153,7 @@
: "Q" (src->val[0]) \
: "memory"); }
- if (ldx_mo == __ATOMIC_RELAXED)
+ if (ldx_mo == rte_memory_order_relaxed)
__LOAD_128("ldxp", dst, old)
else
__LOAD_128("ldaxp", dst, old)
@@ -170,7 +170,7 @@
: "memory"); }
if (likely(old.int128 == expected.int128)) {
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, desired, ret)
else
__STORE_128("stlxp", dst, desired, ret)
@@ -181,7 +181,7 @@
* needs to be stored back to ensure it was read
* atomically.
*/
- if (stx_mo == __ATOMIC_RELAXED)
+ if (stx_mo == rte_memory_order_relaxed)
__STORE_128("stxp", dst, old, ret)
else
__STORE_128("stlxp", dst, old, ret)
diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
index 5f70e97..d4daafc 100644
--- a/lib/eal/arm/include/rte_pause_64.h
+++ b/lib/eal/arm/include/rte_pause_64.h
@@ -41,7 +41,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_8(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrb %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -60,7 +60,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxrh %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -79,7 +79,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %w[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -98,7 +98,7 @@ static inline void rte_pause(void)
* implicitly to exit WFE.
*/
#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) { \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxr %x[tmp], [%x[addr]]" \
: [tmp] "=&r" (dst) \
: [addr] "r" (src) \
@@ -118,7 +118,7 @@ static inline void rte_pause(void)
*/
#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) { \
volatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \
- if (memorder == __ATOMIC_RELAXED) { \
+ if (memorder == rte_memory_order_relaxed) { \
asm volatile("ldxp %x[tmp0], %x[tmp1], [%x[addr]]" \
: [tmp0] "=&r" (dst_128->val[0]), \
[tmp1] "=&r" (dst_128->val[1]) \
@@ -153,8 +153,8 @@ static inline void rte_pause(void)
{
uint16_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_16(addr, value, memorder)
if (value != expected) {
@@ -172,8 +172,8 @@ static inline void rte_pause(void)
{
uint32_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_32(addr, value, memorder)
if (value != expected) {
@@ -191,8 +191,8 @@ static inline void rte_pause(void)
{
uint64_t value;
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&
- memorder != __ATOMIC_RELAXED);
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire &&
+ memorder != rte_memory_order_relaxed);
__RTE_ARM_LOAD_EXC_64(addr, value, memorder)
if (value != expected) {
@@ -206,8 +206,8 @@ static inline void rte_pause(void)
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON(memorder != rte_memory_order_acquire && \
+ memorder != rte_memory_order_relaxed); \
const uint32_t size = sizeof(*(addr)) << 3; \
typeof(*(addr)) expected_value = (expected); \
typeof(*(addr)) value; \
diff --git a/lib/eal/arm/rte_power_intrinsics.c b/lib/eal/arm/rte_power_intrinsics.c
index 77b96e4..f54cf59 100644
--- a/lib/eal/arm/rte_power_intrinsics.c
+++ b/lib/eal/arm/rte_power_intrinsics.c
@@ -33,19 +33,19 @@
switch (pmc->size) {
case sizeof(uint8_t):
- __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_8(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint16_t):
- __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_16(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint32_t):
- __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_32(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
case sizeof(uint64_t):
- __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, __ATOMIC_RELAXED)
+ __RTE_ARM_LOAD_EXC_64(pmc->addr, cur_value, rte_memory_order_relaxed)
__RTE_ARM_WFE()
break;
default:
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index cb980af..c6628dd 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -103,11 +103,11 @@ struct trace_point_head *
trace_mode_set(rte_trace_point_t *t, enum rte_trace_mode mode)
{
if (mode == RTE_TRACE_MODE_OVERWRITE)
- __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
else
- __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_DISCARD,
+ rte_memory_order_release);
}
void
@@ -141,7 +141,7 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return false;
- val = __atomic_load_n(t, __ATOMIC_ACQUIRE);
+ val = rte_atomic_load_explicit(t, rte_memory_order_acquire);
return (val & __RTE_TRACE_FIELD_ENABLE_MASK) != 0;
}
@@ -153,7 +153,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_or(t, __RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_or_explicit(t, __RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) == 0)
__atomic_fetch_add(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
@@ -167,7 +168,8 @@ rte_trace_mode rte_trace_mode_get(void)
if (trace_point_is_invalid(t))
return -ERANGE;
- prev = __atomic_fetch_and(t, ~__RTE_TRACE_FIELD_ENABLE_MASK, __ATOMIC_RELEASE);
+ prev = rte_atomic_fetch_and_explicit(t, ~__RTE_TRACE_FIELD_ENABLE_MASK,
+ rte_memory_order_release);
if ((prev & __RTE_TRACE_FIELD_ENABLE_MASK) != 0)
__atomic_fetch_sub(&trace.status, 1, __ATOMIC_RELEASE);
return 0;
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 4a235ba..5940e7e 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -63,7 +63,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQ_REL) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acq_rel) should be used instead.
*/
static inline void rte_smp_mb(void);
@@ -80,7 +80,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_RELEASE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_release) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -100,7 +100,7 @@
* but has different syntax and memory ordering semantic. Hence
* deprecated for the simplicity of memory ordering semantics in use.
*
- * rte_atomic_thread_fence(__ATOMIC_ACQUIRE) should be used instead.
+ * rte_atomic_thread_fence(rte_memory_order_acquire) should be used instead.
* The fence also guarantees LOAD operations that precede the call
* are globally visible across the lcores before the STORE operations
* that follows it.
@@ -154,7 +154,7 @@
/**
* Synchronization fence between threads based on the specified memory order.
*/
-static inline void rte_atomic_thread_fence(int memorder);
+static inline void rte_atomic_thread_fence(rte_memory_order memorder);
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -207,7 +207,7 @@
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -274,7 +274,7 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -288,7 +288,7 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -341,7 +341,7 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +361,7 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +380,7 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +400,7 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -486,7 +486,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -553,7 +553,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
/**
@@ -567,7 +567,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
/**
@@ -620,7 +620,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +640,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +659,7 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_SEQ_CST) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +679,7 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_SEQ_CST) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -764,7 +764,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
+ return rte_atomic_exchange_explicit(dst, val, rte_memory_order_seq_cst);
}
#endif
@@ -885,7 +885,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +904,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +962,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_SEQ_CST) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +986,7 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_SEQ_CST) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
}
#endif
@@ -1115,8 +1115,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* stronger) model.
* @param failure
* If unsuccessful, the operation's memory behavior conforms to this (or a
- * stronger) model. This argument cannot be __ATOMIC_RELEASE,
- * __ATOMIC_ACQ_REL, or a stronger model than success.
+ * stronger) model. This argument cannot be rte_memory_order_release,
+ * rte_memory_order_acq_rel, or a stronger model than success.
* @return
* Non-zero on success; 0 on failure.
*/
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index bebfa95..256309e 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -36,13 +36,11 @@
* A 16-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 32-bit expected value, with a relaxed
@@ -54,13 +52,11 @@
* A 32-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder);
+ rte_memory_order memorder);
/**
* Wait for *addr to be updated with a 64-bit expected value, with a relaxed
@@ -72,42 +68,40 @@
* A 64-bit expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder);
+ rte_memory_order memorder);
#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
static __rte_always_inline void
rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
static __rte_always_inline void
rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
- int memorder)
+ rte_memory_order memorder)
{
- assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+ assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (__atomic_load_n(addr, memorder) != expected)
+ while (rte_atomic_load_explicit(addr, memorder) != expected)
rte_pause();
}
@@ -125,16 +119,14 @@
* An expected value to be in the memory location.
* @param memorder
* Two different memory orders that can be specified:
- * __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
- * C++11 memory orders with the same names, see the C++11 standard or
- * the GCC wiki on atomic synchronization for detailed definition.
+ * rte_memory_order_acquire and rte_memory_order_relaxed.
*/
#define RTE_WAIT_UNTIL_MASKED(addr, mask, cond, expected, memorder) do { \
RTE_BUILD_BUG_ON(!__builtin_constant_p(memorder)); \
- RTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE && \
- memorder != __ATOMIC_RELAXED); \
+ RTE_BUILD_BUG_ON((memorder) != rte_memory_order_acquire && \
+ (memorder) != rte_memory_order_relaxed); \
typeof(*(addr)) expected_value = (expected); \
- while (!((__atomic_load_n((addr), (memorder)) & (mask)) cond \
+ while (!((rte_atomic_load_explicit((addr), (memorder)) & (mask)) cond \
expected_value)) \
rte_pause(); \
} while (0)
diff --git a/lib/eal/include/generic/rte_rwlock.h b/lib/eal/include/generic/rte_rwlock.h
index 24ebec6..c788705 100644
--- a/lib/eal/include/generic/rte_rwlock.h
+++ b/lib/eal/include/generic/rte_rwlock.h
@@ -58,7 +58,7 @@
#define RTE_RWLOCK_READ 0x4 /* Reader increment */
typedef struct __rte_lockable {
- int32_t cnt;
+ RTE_ATOMIC(int32_t) cnt;
} rte_rwlock_t;
/**
@@ -93,21 +93,21 @@
while (1) {
/* Wait while writer is present or pending */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED)
+ while (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed)
& RTE_RWLOCK_MASK)
rte_pause();
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* If no writer, then acquire was successful */
if (likely(!(x & RTE_RWLOCK_MASK)))
return;
/* Lost race with writer, backout the change. */
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELAXED);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_relaxed);
}
}
@@ -128,20 +128,20 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* fail if write lock is held or writer is pending */
if (x & RTE_RWLOCK_MASK)
return -EBUSY;
/* Try to get read lock */
- x = __atomic_fetch_add(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_ACQUIRE) + RTE_RWLOCK_READ;
+ x = rte_atomic_fetch_add_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_acquire) + RTE_RWLOCK_READ;
/* Back out if writer raced in */
if (unlikely(x & RTE_RWLOCK_MASK)) {
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ,
- __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ,
+ rte_memory_order_release);
return -EBUSY;
}
@@ -159,7 +159,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_READ, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_READ, rte_memory_order_release);
}
/**
@@ -179,10 +179,10 @@
{
int32_t x;
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
if (x < RTE_RWLOCK_WRITE &&
- __atomic_compare_exchange_n(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
- 1, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ rte_atomic_compare_exchange_weak_explicit(&rwl->cnt, &x, x + RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return 0;
else
return -EBUSY;
@@ -202,22 +202,25 @@
int32_t x;
while (1) {
- x = __atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED);
+ x = rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed);
/* No readers or writers? */
if (likely(x < RTE_RWLOCK_WRITE)) {
/* Turn off RTE_RWLOCK_WAIT, turn on RTE_RWLOCK_WRITE */
- if (__atomic_compare_exchange_n(&rwl->cnt, &x, RTE_RWLOCK_WRITE, 1,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_weak_explicit(
+ &rwl->cnt, &x, RTE_RWLOCK_WRITE,
+ rte_memory_order_acquire, rte_memory_order_relaxed))
return;
}
/* Turn on writer wait bit */
if (!(x & RTE_RWLOCK_WAIT))
- __atomic_fetch_or(&rwl->cnt, RTE_RWLOCK_WAIT, __ATOMIC_RELAXED);
+ rte_atomic_fetch_or_explicit(&rwl->cnt, RTE_RWLOCK_WAIT,
+ rte_memory_order_relaxed);
/* Wait until no readers before trying again */
- while (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) > RTE_RWLOCK_WAIT)
+ while (rte_atomic_load_explicit(&rwl->cnt,
+ rte_memory_order_relaxed) > RTE_RWLOCK_WAIT)
rte_pause();
}
@@ -234,7 +237,7 @@
__rte_unlock_function(rwl)
__rte_no_thread_safety_analysis
{
- __atomic_fetch_sub(&rwl->cnt, RTE_RWLOCK_WRITE, __ATOMIC_RELEASE);
+ rte_atomic_fetch_sub_explicit(&rwl->cnt, RTE_RWLOCK_WRITE, rte_memory_order_release);
}
/**
@@ -248,7 +251,7 @@
static inline int
rte_rwlock_write_is_locked(rte_rwlock_t *rwl)
{
- if (__atomic_load_n(&rwl->cnt, __ATOMIC_RELAXED) & RTE_RWLOCK_WRITE)
+ if (rte_atomic_load_explicit(&rwl->cnt, rte_memory_order_relaxed) & RTE_RWLOCK_WRITE)
return 1;
return 0;
diff --git a/lib/eal/include/generic/rte_spinlock.h b/lib/eal/include/generic/rte_spinlock.h
index e18f0cd..23fb048 100644
--- a/lib/eal/include/generic/rte_spinlock.h
+++ b/lib/eal/include/generic/rte_spinlock.h
@@ -29,7 +29,7 @@
* The rte_spinlock_t type.
*/
typedef struct __rte_lockable {
- volatile int locked; /**< lock status 0 = unlocked, 1 = locked */
+ volatile RTE_ATOMIC(int) locked; /**< lock status 0 = unlocked, 1 = locked */
} rte_spinlock_t;
/**
@@ -66,10 +66,10 @@
{
int exp = 0;
- while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
- rte_wait_until_equal_32((volatile uint32_t *)&sl->locked,
- 0, __ATOMIC_RELAXED);
+ while (!rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed)) {
+ rte_wait_until_equal_32((volatile uint32_t *)(uintptr_t)&sl->locked,
+ 0, rte_memory_order_relaxed);
exp = 0;
}
}
@@ -90,7 +90,7 @@
rte_spinlock_unlock(rte_spinlock_t *sl)
__rte_no_thread_safety_analysis
{
- __atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&sl->locked, 0, rte_memory_order_release);
}
#endif
@@ -113,9 +113,8 @@
__rte_no_thread_safety_analysis
{
int exp = 0;
- return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
- 0, /* disallow spurious failure */
- __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(&sl->locked, &exp, 1,
+ rte_memory_order_acquire, rte_memory_order_relaxed);
}
#endif
@@ -129,7 +128,7 @@
*/
static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
{
- return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&sl->locked, rte_memory_order_acquire);
}
/**
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 18e63eb..8c75377 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -33,8 +33,8 @@
* The rte_mcslock_t type.
*/
typedef struct rte_mcslock {
- struct rte_mcslock *next;
- int locked; /* 1 if the queue locked, 0 otherwise */
+ RTE_ATOMIC(struct rte_mcslock *) next;
+ RTE_ATOMIC(int) locked; /* 1 if the queue locked, 0 otherwise */
} rte_mcslock_t;
/**
@@ -49,13 +49,13 @@
* lock should use its 'own node'.
*/
static inline void
-rte_mcslock_lock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
rte_mcslock_t *prev;
/* Init me node */
- __atomic_store_n(&me->locked, 1, __ATOMIC_RELAXED);
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->locked, 1, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* If the queue is empty, the exchange operation is enough to acquire
* the lock. Hence, the exchange operation requires acquire semantics.
@@ -63,7 +63,7 @@
* visible to other CPUs/threads. Hence, the exchange operation requires
* release semantics as well.
*/
- prev = __atomic_exchange_n(msl, me, __ATOMIC_ACQ_REL);
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
if (likely(prev == NULL)) {
/* Queue was empty, no further action required,
* proceed with lock taken.
@@ -77,19 +77,19 @@
* strong as a release fence and is not sufficient to enforce the
* desired order here.
*/
- __atomic_store_n(&prev->next, me, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&prev->next, me, rte_memory_order_release);
/* The while-load of me->locked should not move above the previous
* store to prev->next. Otherwise it will cause a deadlock. Need a
* store-load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQ_REL);
+ __rte_atomic_thread_fence(rte_memory_order_acq_rel);
/* If the lock has already been acquired, it first atomically
* places the node at the end of the queue and then proceeds
* to spin on me->locked until the previous lock holder resets
* the me->locked using mcslock_unlock().
*/
- rte_wait_until_equal_32((uint32_t *)&me->locked, 0, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_32((uint32_t *)(uintptr_t)&me->locked, 0, rte_memory_order_acquire);
}
/**
@@ -101,34 +101,34 @@
* A pointer to the node of MCS lock passed in rte_mcslock_lock.
*/
static inline void
-rte_mcslock_unlock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_unlock(RTE_ATOMIC(rte_mcslock_t *) *msl, RTE_ATOMIC(rte_mcslock_t *) me)
{
/* Check if there are more nodes in the queue. */
- if (likely(__atomic_load_n(&me->next, __ATOMIC_RELAXED) == NULL)) {
+ if (likely(rte_atomic_load_explicit(&me->next, rte_memory_order_relaxed) == NULL)) {
/* No, last member in the queue. */
- rte_mcslock_t *save_me = __atomic_load_n(&me, __ATOMIC_RELAXED);
+ rte_mcslock_t *save_me = rte_atomic_load_explicit(&me, rte_memory_order_relaxed);
/* Release the lock by setting it to NULL */
- if (likely(__atomic_compare_exchange_n(msl, &save_me, NULL, 0,
- __ATOMIC_RELEASE, __ATOMIC_RELAXED)))
+ if (likely(rte_atomic_compare_exchange_strong_explicit(msl, &save_me, NULL,
+ rte_memory_order_release, rte_memory_order_relaxed)))
return;
/* Speculative execution would be allowed to read in the
* while-loop first. This has the potential to cause a
* deadlock. Need a load barrier.
*/
- __atomic_thread_fence(__ATOMIC_ACQUIRE);
+ __rte_atomic_thread_fence(rte_memory_order_acquire);
/* More nodes added to the queue by other CPUs.
* Wait until the next pointer is set.
*/
- uintptr_t *next;
- next = (uintptr_t *)&me->next;
+ RTE_ATOMIC(uintptr_t) *next;
+ next = (__rte_atomic uintptr_t *)&me->next;
RTE_WAIT_UNTIL_MASKED(next, UINTPTR_MAX, !=, 0,
- __ATOMIC_RELAXED);
+ rte_memory_order_relaxed);
}
/* Pass lock to next waiter. */
- __atomic_store_n(&me->next->locked, 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&me->next->locked, 0, rte_memory_order_release);
}
/**
@@ -142,10 +142,10 @@
* 1 if the lock is successfully taken; 0 otherwise.
*/
static inline int
-rte_mcslock_trylock(rte_mcslock_t **msl, rte_mcslock_t *me)
+rte_mcslock_trylock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
{
/* Init me node */
- __atomic_store_n(&me->next, NULL, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&me->next, NULL, rte_memory_order_relaxed);
/* Try to lock */
rte_mcslock_t *expected = NULL;
@@ -156,8 +156,8 @@
* is visible to other CPUs/threads. Hence, the compare-exchange
* operation requires release semantics as well.
*/
- return __atomic_compare_exchange_n(msl, &expected, me, 0,
- __ATOMIC_ACQ_REL, __ATOMIC_RELAXED);
+ return rte_atomic_compare_exchange_strong_explicit(msl, &expected, me,
+ rte_memory_order_acq_rel, rte_memory_order_relaxed);
}
/**
@@ -169,9 +169,9 @@
* 1 if the lock is currently taken; 0 otherwise.
*/
static inline int
-rte_mcslock_is_locked(rte_mcslock_t *msl)
+rte_mcslock_is_locked(RTE_ATOMIC(rte_mcslock_t *) msl)
{
- return (__atomic_load_n(&msl, __ATOMIC_RELAXED) != NULL);
+ return (rte_atomic_load_explicit(&msl, rte_memory_order_relaxed) != NULL);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_pflock.h b/lib/eal/include/rte_pflock.h
index 790be71..79feeea 100644
--- a/lib/eal/include/rte_pflock.h
+++ b/lib/eal/include/rte_pflock.h
@@ -41,8 +41,8 @@
*/
struct rte_pflock {
struct {
- uint16_t in;
- uint16_t out;
+ RTE_ATOMIC(uint16_t) in;
+ RTE_ATOMIC(uint16_t) out;
} rd, wr;
};
typedef struct rte_pflock rte_pflock_t;
@@ -117,14 +117,14 @@ struct rte_pflock {
* If no writer is present, then the operation has completed
* successfully.
*/
- w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
+ w = rte_atomic_fetch_add_explicit(&pf->rd.in, RTE_PFLOCK_RINC, rte_memory_order_acquire)
& RTE_PFLOCK_WBITS;
if (w == 0)
return;
/* Wait for current write phase to complete. */
RTE_WAIT_UNTIL_MASKED(&pf->rd.in, RTE_PFLOCK_WBITS, !=, w,
- __ATOMIC_ACQUIRE);
+ rte_memory_order_acquire);
}
/**
@@ -140,7 +140,7 @@ struct rte_pflock {
static inline void
rte_pflock_read_unlock(rte_pflock_t *pf)
{
- __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->rd.out, RTE_PFLOCK_RINC, rte_memory_order_release);
}
/**
@@ -161,8 +161,9 @@ struct rte_pflock {
/* Acquire ownership of write-phase.
* This is same as rte_ticketlock_lock().
*/
- ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
+ ticket = rte_atomic_fetch_add_explicit(&pf->wr.in, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->wr.out, ticket,
+ rte_memory_order_acquire);
/*
* Acquire ticket on read-side in order to allow them
@@ -173,10 +174,11 @@ struct rte_pflock {
* speculatively.
*/
w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
- ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
+ ticket = rte_atomic_fetch_add_explicit(&pf->rd.in, w, rte_memory_order_relaxed);
/* Wait for any pending readers to flush. */
- rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&pf->rd.out, ticket,
+ rte_memory_order_acquire);
}
/**
@@ -193,10 +195,10 @@ struct rte_pflock {
rte_pflock_write_unlock(rte_pflock_t *pf)
{
/* Migrate from write phase to read phase. */
- __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
+ rte_atomic_fetch_and_explicit(&pf->rd.in, RTE_PFLOCK_LSB, rte_memory_order_release);
/* Allow other writers to continue. */
- __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
+ rte_atomic_fetch_add_explicit(&pf->wr.out, 1, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_seqcount.h b/lib/eal/include/rte_seqcount.h
index 098af26..4f9cefb 100644
--- a/lib/eal/include/rte_seqcount.h
+++ b/lib/eal/include/rte_seqcount.h
@@ -32,7 +32,7 @@
* The RTE seqcount type.
*/
typedef struct {
- uint32_t sn; /**< A sequence number for the protected data. */
+ RTE_ATOMIC(uint32_t) sn; /**< A sequence number for the protected data. */
} rte_seqcount_t;
/**
@@ -106,11 +106,11 @@
static inline uint32_t
rte_seqcount_read_begin(const rte_seqcount_t *seqcount)
{
- /* __ATOMIC_ACQUIRE to prevent loads after (in program order)
+ /* rte_memory_order_acquire to prevent loads after (in program order)
* from happening before the sn load. Synchronizes-with the
* store release in rte_seqcount_write_end().
*/
- return __atomic_load_n(&seqcount->sn, __ATOMIC_ACQUIRE);
+ return rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_acquire);
}
/**
@@ -161,9 +161,9 @@
return true;
/* make sure the data loads happens before the sn load */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+ rte_atomic_thread_fence(rte_memory_order_acquire);
- end_sn = __atomic_load_n(&seqcount->sn, __ATOMIC_RELAXED);
+ end_sn = rte_atomic_load_explicit(&seqcount->sn, rte_memory_order_relaxed);
/* A writer incremented the sequence number during this read
* critical section.
@@ -205,12 +205,12 @@
sn = seqcount->sn + 1;
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_relaxed);
- /* __ATOMIC_RELEASE to prevent stores after (in program order)
+ /* rte_memory_order_release to prevent stores after (in program order)
* from happening before the sn store.
*/
- rte_atomic_thread_fence(__ATOMIC_RELEASE);
+ rte_atomic_thread_fence(rte_memory_order_release);
}
/**
@@ -237,7 +237,7 @@
sn = seqcount->sn + 1;
/* Synchronizes-with the load acquire in rte_seqcount_read_begin(). */
- __atomic_store_n(&seqcount->sn, sn, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&seqcount->sn, sn, rte_memory_order_release);
}
#ifdef __cplusplus
diff --git a/lib/eal/include/rte_ticketlock.h b/lib/eal/include/rte_ticketlock.h
index e22d119..7d39bca 100644
--- a/lib/eal/include/rte_ticketlock.h
+++ b/lib/eal/include/rte_ticketlock.h
@@ -30,10 +30,10 @@
* The rte_ticketlock_t type.
*/
typedef union {
- uint32_t tickets;
+ RTE_ATOMIC(uint32_t) tickets;
struct {
- uint16_t current;
- uint16_t next;
+ RTE_ATOMIC(uint16_t) current;
+ RTE_ATOMIC(uint16_t) next;
} s;
} rte_ticketlock_t;
@@ -51,7 +51,7 @@
static inline void
rte_ticketlock_init(rte_ticketlock_t *tl)
{
- __atomic_store_n(&tl->tickets, 0, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tl->tickets, 0, rte_memory_order_relaxed);
}
/**
@@ -63,8 +63,9 @@
static inline void
rte_ticketlock_lock(rte_ticketlock_t *tl)
{
- uint16_t me = __atomic_fetch_add(&tl->s.next, 1, __ATOMIC_RELAXED);
- rte_wait_until_equal_16(&tl->s.current, me, __ATOMIC_ACQUIRE);
+ uint16_t me = rte_atomic_fetch_add_explicit(&tl->s.next, 1, rte_memory_order_relaxed);
+ rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tl->s.current, me,
+ rte_memory_order_acquire);
}
/**
@@ -76,8 +77,8 @@
static inline void
rte_ticketlock_unlock(rte_ticketlock_t *tl)
{
- uint16_t i = __atomic_load_n(&tl->s.current, __ATOMIC_RELAXED);
- __atomic_store_n(&tl->s.current, i + 1, __ATOMIC_RELEASE);
+ uint16_t i = rte_atomic_load_explicit(&tl->s.current, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&tl->s.current, i + 1, rte_memory_order_release);
}
/**
@@ -92,12 +93,13 @@
rte_ticketlock_trylock(rte_ticketlock_t *tl)
{
rte_ticketlock_t oldl, newl;
- oldl.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_RELAXED);
+ oldl.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_relaxed);
newl.tickets = oldl.tickets;
newl.s.next++;
if (oldl.s.next == oldl.s.current) {
- if (__atomic_compare_exchange_n(&tl->tickets, &oldl.tickets,
- newl.tickets, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
+ if (rte_atomic_compare_exchange_strong_explicit(&tl->tickets,
+ (uint32_t *)(uintptr_t)&oldl.tickets,
+ newl.tickets, rte_memory_order_acquire, rte_memory_order_relaxed))
return 1;
}
@@ -116,7 +118,7 @@
rte_ticketlock_is_locked(rte_ticketlock_t *tl)
{
rte_ticketlock_t tic;
- tic.tickets = __atomic_load_n(&tl->tickets, __ATOMIC_ACQUIRE);
+ tic.tickets = rte_atomic_load_explicit(&tl->tickets, rte_memory_order_acquire);
return (tic.s.current != tic.s.next);
}
@@ -127,7 +129,7 @@
typedef struct {
rte_ticketlock_t tl; /**< the actual ticketlock */
- int user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
+ RTE_ATOMIC(int) user; /**< core id using lock, TICKET_LOCK_INVALID_ID for unused */
unsigned int count; /**< count of time this lock has been called */
} rte_ticketlock_recursive_t;
@@ -147,7 +149,7 @@
rte_ticketlock_recursive_init(rte_ticketlock_recursive_t *tlr)
{
rte_ticketlock_init(&tlr->tl);
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID, rte_memory_order_relaxed);
tlr->count = 0;
}
@@ -162,9 +164,9 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
rte_ticketlock_lock(&tlr->tl);
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
}
@@ -179,8 +181,8 @@
rte_ticketlock_recursive_unlock(rte_ticketlock_recursive_t *tlr)
{
if (--(tlr->count) == 0) {
- __atomic_store_n(&tlr->user, TICKET_LOCK_INVALID_ID,
- __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, TICKET_LOCK_INVALID_ID,
+ rte_memory_order_relaxed);
rte_ticketlock_unlock(&tlr->tl);
}
}
@@ -198,10 +200,10 @@
{
int id = rte_gettid();
- if (__atomic_load_n(&tlr->user, __ATOMIC_RELAXED) != id) {
+ if (rte_atomic_load_explicit(&tlr->user, rte_memory_order_relaxed) != id) {
if (rte_ticketlock_trylock(&tlr->tl) == 0)
return 0;
- __atomic_store_n(&tlr->user, id, __ATOMIC_RELAXED);
+ rte_atomic_store_explicit(&tlr->user, id, rte_memory_order_relaxed);
}
tlr->count++;
return 1;
diff --git a/lib/eal/include/rte_trace_point.h b/lib/eal/include/rte_trace_point.h
index d587591..b403edd 100644
--- a/lib/eal/include/rte_trace_point.h
+++ b/lib/eal/include/rte_trace_point.h
@@ -33,7 +33,7 @@
#include <rte_stdatomic.h>
/** The tracepoint object. */
-typedef uint64_t rte_trace_point_t;
+typedef RTE_ATOMIC(uint64_t) rte_trace_point_t;
/**
* Macro to define the tracepoint arguments in RTE_TRACE_POINT macro.
@@ -359,7 +359,7 @@ struct __rte_trace_header {
#define __rte_trace_point_emit_header_generic(t) \
void *mem; \
do { \
- const uint64_t val = __atomic_load_n(t, __ATOMIC_ACQUIRE); \
+ const uint64_t val = rte_atomic_load_explicit(t, rte_memory_order_acquire); \
if (likely(!(val & __RTE_TRACE_FIELD_ENABLE_MASK))) \
return; \
mem = __rte_trace_mem_get(val); \
diff --git a/lib/eal/loongarch/include/rte_atomic.h b/lib/eal/loongarch/include/rte_atomic.h
index 3c82845..0510b8f 100644
--- a/lib/eal/loongarch/include/rte_atomic.h
+++ b/lib/eal/loongarch/include/rte_atomic.h
@@ -35,9 +35,9 @@
#define rte_io_rmb() rte_mb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/ppc/include/rte_atomic.h b/lib/eal/ppc/include/rte_atomic.h
index ec8d8a2..7382412 100644
--- a/lib/eal/ppc/include/rte_atomic.h
+++ b/lib/eal/ppc/include/rte_atomic.h
@@ -38,9 +38,9 @@
#define rte_io_rmb() rte_rmb()
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
@@ -48,8 +48,8 @@
static inline int
rte_atomic16_cmpset(volatile uint16_t *dst, uint16_t exp, uint16_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
@@ -60,29 +60,29 @@ static inline int rte_atomic16_test_and_set(rte_atomic16_t *v)
static inline void
rte_atomic16_inc(rte_atomic16_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic16_dec(rte_atomic16_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint16_t
rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
{
- return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_2(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 32 bit atomic operations -------------------------*/
@@ -90,8 +90,8 @@ static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
static inline int
rte_atomic32_cmpset(volatile uint32_t *dst, uint32_t exp, uint32_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
@@ -102,29 +102,29 @@ static inline int rte_atomic32_test_and_set(rte_atomic32_t *v)
static inline void
rte_atomic32_inc(rte_atomic32_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic32_dec(rte_atomic32_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline uint32_t
rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
{
- return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_4(dst, val, rte_memory_order_seq_cst);
}
/*------------------------- 64 bit atomic operations -------------------------*/
@@ -132,8 +132,8 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline int
rte_atomic64_cmpset(volatile uint64_t *dst, uint64_t exp, uint64_t src)
{
- return __atomic_compare_exchange(dst, &exp, &src, 0, __ATOMIC_ACQUIRE,
- __ATOMIC_ACQUIRE) ? 1 : 0;
+ return __atomic_compare_exchange(dst, &exp, &src, 0, rte_memory_order_acquire,
+ rte_memory_order_acquire) ? 1 : 0;
}
static inline void
@@ -157,47 +157,47 @@ static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire);
}
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire);
}
static inline void
rte_atomic64_inc(rte_atomic64_t *v)
{
- __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline void
rte_atomic64_dec(rte_atomic64_t *v)
{
- __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE);
+ rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire);
}
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return __atomic_fetch_add(&v->cnt, inc, __ATOMIC_ACQUIRE) + inc;
+ return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_acquire) + inc;
}
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return __atomic_fetch_sub(&v->cnt, dec, __ATOMIC_ACQUIRE) - dec;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_acquire) - dec;
}
static inline int rte_atomic64_inc_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_add(&v->cnt, 1, __ATOMIC_ACQUIRE) + 1 == 0;
+ return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_acquire) + 1 == 0;
}
static inline int rte_atomic64_dec_and_test(rte_atomic64_t *v)
{
- return __atomic_fetch_sub(&v->cnt, 1, __ATOMIC_ACQUIRE) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_acquire) - 1 == 0;
}
static inline int rte_atomic64_test_and_set(rte_atomic64_t *v)
@@ -213,7 +213,7 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
static inline uint64_t
rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
{
- return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
+ return __atomic_exchange_8(dst, val, rte_memory_order_seq_cst);
}
#endif
diff --git a/lib/eal/riscv/include/rte_atomic.h b/lib/eal/riscv/include/rte_atomic.h
index 4b4633c..2603bc9 100644
--- a/lib/eal/riscv/include/rte_atomic.h
+++ b/lib/eal/riscv/include/rte_atomic.h
@@ -40,9 +40,9 @@
#define rte_io_rmb() asm volatile("fence ir, ir" : : : "memory")
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
#ifdef __cplusplus
diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
index f2ee1a9..3b3a9a4 100644
--- a/lib/eal/x86/include/rte_atomic.h
+++ b/lib/eal/x86/include/rte_atomic.h
@@ -82,17 +82,17 @@
/**
* Synchronization fence between threads based on the specified memory order.
*
- * On x86 the __atomic_thread_fence(__ATOMIC_SEQ_CST) generates full 'mfence'
+ * On x86 the __rte_atomic_thread_fence(rte_memory_order_seq_cst) generates full 'mfence'
* which is quite expensive. The optimized implementation of rte_smp_mb is
* used instead.
*/
static __rte_always_inline void
-rte_atomic_thread_fence(int memorder)
+rte_atomic_thread_fence(rte_memory_order memorder)
{
- if (memorder == __ATOMIC_SEQ_CST)
+ if (memorder == rte_memory_order_seq_cst)
rte_smp_mb();
else
- __atomic_thread_fence(memorder);
+ __rte_atomic_thread_fence(memorder);
}
/*------------------------- 16 bit atomic operations -------------------------*/
diff --git a/lib/eal/x86/include/rte_spinlock.h b/lib/eal/x86/include/rte_spinlock.h
index 0b20ddf..a6c23ea 100644
--- a/lib/eal/x86/include/rte_spinlock.h
+++ b/lib/eal/x86/include/rte_spinlock.h
@@ -78,7 +78,7 @@ static inline int rte_tm_supported(void)
}
static inline int
-rte_try_tm(volatile int *lock)
+rte_try_tm(volatile RTE_ATOMIC(int) *lock)
{
int i, retries;
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index f749da9..cf70e33 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,9 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = __atomic_load_n((volatile uint64_t *)addr, __ATOMIC_RELAXED);
- __atomic_compare_exchange_n((volatile uint64_t *)addr, &val, val, 0,
- __ATOMIC_RELAXED, __ATOMIC_RELAXED);
+ val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
+ rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
* [PATCH v6 3/6] eal: add rte atomic qualifier with casts
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
` (4 subsequent siblings)
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Introduce __rte_atomic qualifying casts in the optional rte atomics inline
functions to prevent cascading the need to pass __rte_atomic-qualified
arguments.

Warning: this is implementation dependent and is done temporarily to avoid
having to convert more of the libraries and tests in DPDK in the initial
series that introduces the API. The casts assume the qualified and
unqualified types have ``the same'' ABI; if they do not, the risk is only
realized when enable_stdatomic=true.
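The qualifying-cast pattern described above can be sketched in plain C11.
This is an illustration only, assuming enable_stdatomic=true so that
RTE_ATOMIC(T) maps to _Atomic(T); rte_demo_atomic16_t, demo_atomic16_add and
atomic16_add_demo are made-up names, not the actual DPDK definitions:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Sketch: with stdatomics enabled, RTE_ATOMIC(T) expands to _Atomic(T). */
#define RTE_ATOMIC(type) _Atomic(type)

typedef struct {
	volatile int16_t cnt; /* legacy field stays unqualified */
} rte_demo_atomic16_t;

static inline void
demo_atomic16_add(rte_demo_atomic16_t *v, int16_t inc)
{
	/* The qualifying cast: callers keep passing the plain legacy type
	 * while the wrapper presents an _Atomic-qualified view to the
	 * generic operation. The intermediate (uintptr_t) cast mirrors the
	 * patch's technique for silencing qualifier warnings. This is only
	 * correct if the qualified and unqualified types share an ABI,
	 * which is implementation dependent. */
	atomic_fetch_add_explicit(
	    (volatile _Atomic int16_t *)(uintptr_t)&v->cnt,
	    inc, memory_order_seq_cst);
}

static inline int16_t
atomic16_add_demo(void)
{
	rte_demo_atomic16_t v = { .cnt = 40 };
	demo_atomic16_add(&v, 2);
	return v.cnt;
}
```

The cast keeps the change local to the inline wrappers: callers and the
legacy struct layouts are untouched, at the cost of relying on the
same-ABI assumption the warning above describes.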
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
lib/eal/include/generic/rte_atomic.h | 48 ++++++++++++++++++++++++------------
lib/eal/include/generic/rte_pause.h | 9 ++++---
lib/eal/x86/rte_power_intrinsics.c | 7 +++---
3 files changed, 42 insertions(+), 22 deletions(-)
diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
index 5940e7e..709bf15 100644
--- a/lib/eal/include/generic/rte_atomic.h
+++ b/lib/eal/include/generic/rte_atomic.h
@@ -274,7 +274,8 @@
static inline void
rte_atomic16_add(rte_atomic16_t *v, int16_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -288,7 +289,8 @@
static inline void
rte_atomic16_sub(rte_atomic16_t *v, int16_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -341,7 +343,8 @@
static inline int16_t
rte_atomic16_add_return(rte_atomic16_t *v, int16_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -361,7 +364,8 @@
static inline int16_t
rte_atomic16_sub_return(rte_atomic16_t *v, int16_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -380,7 +384,8 @@
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -400,7 +405,8 @@ static inline int rte_atomic16_inc_and_test(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic16_dec_and_test(rte_atomic16_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int16_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -553,7 +559,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_add(rte_atomic32_t *v, int32_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
/**
@@ -567,7 +574,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline void
rte_atomic32_sub(rte_atomic32_t *v, int32_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
/**
@@ -620,7 +628,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_add_return(rte_atomic32_t *v, int32_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
/**
@@ -640,7 +649,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
static inline int32_t
rte_atomic32_sub_return(rte_atomic32_t *v, int32_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
/**
@@ -659,7 +669,8 @@ static inline void rte_atomic16_clear(rte_atomic16_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, 1, rte_memory_order_seq_cst) + 1 == 0;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) + 1 == 0;
}
#endif
@@ -679,7 +690,8 @@ static inline int rte_atomic32_inc_and_test(rte_atomic32_t *v)
#ifdef RTE_FORCE_INTRINSICS
static inline int rte_atomic32_dec_and_test(rte_atomic32_t *v)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, 1, rte_memory_order_seq_cst) - 1 == 0;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int32_t *)&v->cnt, 1,
+ rte_memory_order_seq_cst) - 1 == 0;
}
#endif
@@ -885,7 +897,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_add(rte_atomic64_t *v, int64_t inc)
{
- rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst);
+ rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst);
}
#endif
@@ -904,7 +917,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline void
rte_atomic64_sub(rte_atomic64_t *v, int64_t dec)
{
- rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst);
+ rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst);
}
#endif
@@ -962,7 +976,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_add_return(rte_atomic64_t *v, int64_t inc)
{
- return rte_atomic_fetch_add_explicit(&v->cnt, inc, rte_memory_order_seq_cst) + inc;
+ return rte_atomic_fetch_add_explicit((volatile __rte_atomic int64_t *)&v->cnt, inc,
+ rte_memory_order_seq_cst) + inc;
}
#endif
@@ -986,7 +1001,8 @@ static inline void rte_atomic32_clear(rte_atomic32_t *v)
static inline int64_t
rte_atomic64_sub_return(rte_atomic64_t *v, int64_t dec)
{
- return rte_atomic_fetch_sub_explicit(&v->cnt, dec, rte_memory_order_seq_cst) - dec;
+ return rte_atomic_fetch_sub_explicit((volatile __rte_atomic int64_t *)&v->cnt, dec,
+ rte_memory_order_seq_cst) - dec;
}
#endif
diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
index 256309e..b7b059f 100644
--- a/lib/eal/include/generic/rte_pause.h
+++ b/lib/eal/include/generic/rte_pause.h
@@ -81,7 +81,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint16_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -91,7 +92,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint32_t *)addr, memorder)
+ != expected)
rte_pause();
}
@@ -101,7 +103,8 @@
{
assert(memorder == rte_memory_order_acquire || memorder == rte_memory_order_relaxed);
- while (rte_atomic_load_explicit(addr, memorder) != expected)
+ while (rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr, memorder)
+ != expected)
rte_pause();
}
diff --git a/lib/eal/x86/rte_power_intrinsics.c b/lib/eal/x86/rte_power_intrinsics.c
index cf70e33..fb8539f 100644
--- a/lib/eal/x86/rte_power_intrinsics.c
+++ b/lib/eal/x86/rte_power_intrinsics.c
@@ -23,9 +23,10 @@
uint64_t val;
/* trigger a write but don't change the value */
- val = rte_atomic_load_explicit((volatile uint64_t *)addr, rte_memory_order_relaxed);
- rte_atomic_compare_exchange_strong_explicit((volatile uint64_t *)addr, &val, val,
- rte_memory_order_relaxed, rte_memory_order_relaxed);
+ val = rte_atomic_load_explicit((volatile __rte_atomic uint64_t *)addr,
+ rte_memory_order_relaxed);
+ rte_atomic_compare_exchange_strong_explicit((volatile __rte_atomic uint64_t *)addr,
+ &val, val, rte_memory_order_relaxed, rte_memory_order_relaxed);
}
static bool wait_supported;
--
1.8.3.1
^ permalink raw reply [flat|nested] 82+ messages in thread
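The cast pattern applied throughout this patch can be illustrated with plain C11 atomics. A minimal sketch, assuming `__rte_atomic` maps to the standard `_Atomic` qualifier when enable_stdatomic=true; the cast from a plain object pointer is exactly the implementation-dependent shortcut the cover letter mentions, not guaranteed by the standard:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Plain (non-atomic) storage, as in the legacy rte_atomic64_t counter
 * or the monitored address in rte_power_intrinsics.c. */
static volatile uint64_t monitor_word = 42;

/* Trigger a write without changing the value, by loading and then
 * CAS-ing the same value back -- the pattern in the hunk above. */
static uint64_t touch_no_change(volatile uint64_t *addr)
{
	/* implementation-dependent cast, mirroring
	 * (volatile __rte_atomic uint64_t *)addr in the patch */
	volatile _Atomic uint64_t *a = (volatile _Atomic uint64_t *)addr;
	uint64_t val = atomic_load_explicit(a, memory_order_relaxed);
	atomic_compare_exchange_strong_explicit(a, &val, val,
	    memory_order_relaxed, memory_order_relaxed);
	return val;
}
```

On mainstream gcc/clang targets the atomic and plain representations of a lock-free 64-bit integer coincide, which is why the series can get away with the cast while the legacy APIs are converted incrementally.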
* [PATCH v6 4/6] distributor: adapt for EAL optional atomics API changes
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
` (2 preceding siblings ...)
2023-08-22 21:00 ` [PATCH v6 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 5/6] bpf: " Tyler Retzlaff
` (3 subsequent siblings)
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt distributor for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
lib/distributor/distributor_private.h | 2 +-
lib/distributor/rte_distributor_single.c | 44 ++++++++++++++++----------------
2 files changed, 23 insertions(+), 23 deletions(-)
diff --git a/lib/distributor/distributor_private.h b/lib/distributor/distributor_private.h
index 7101f63..2f29343 100644
--- a/lib/distributor/distributor_private.h
+++ b/lib/distributor/distributor_private.h
@@ -52,7 +52,7 @@
* Only 64-bits of the memory is actually used though.
*/
union rte_distributor_buffer_single {
- volatile int64_t bufptr64;
+ volatile RTE_ATOMIC(int64_t) bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
diff --git a/lib/distributor/rte_distributor_single.c b/lib/distributor/rte_distributor_single.c
index 2c77ac4..ad43c13 100644
--- a/lib/distributor/rte_distributor_single.c
+++ b/lib/distributor/rte_distributor_single.c
@@ -32,10 +32,10 @@
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on GET_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
}
struct rte_mbuf *
@@ -44,7 +44,7 @@ struct rte_mbuf *
{
union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
/* Sync with distributor. Acquire bufptr64. */
- if (__atomic_load_n(&buf->bufptr64, __ATOMIC_ACQUIRE)
+ if (rte_atomic_load_explicit(&buf->bufptr64, rte_memory_order_acquire)
& RTE_DISTRIB_GET_BUF)
return NULL;
@@ -72,10 +72,10 @@ struct rte_mbuf *
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
RTE_WAIT_UNTIL_MASKED(&buf->bufptr64, RTE_DISTRIB_FLAGS_MASK,
- ==, 0, __ATOMIC_RELAXED);
+ ==, 0, rte_memory_order_relaxed);
/* Sync with distributor on RETURN_BUF flag. */
- __atomic_store_n(&(buf->bufptr64), req, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&buf->bufptr64, req, rte_memory_order_release);
return 0;
}
@@ -119,7 +119,7 @@ struct rte_mbuf *
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64), 0, __ATOMIC_RELEASE);
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64, 0, rte_memory_order_release);
if (unlikely(d->backlog[wkr].count != 0)) {
/* On return of a packet, we need to move the
* queued packets for this core elsewhere.
@@ -165,21 +165,21 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- const int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ const int64_t data = rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire);
if (data & RTE_DISTRIB_GET_BUF) {
flushed++;
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker on GET_BUF flag. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
RTE_DISTRIB_GET_BUF,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
}
@@ -217,8 +217,8 @@ struct rte_mbuf *
while (next_idx < num_mbufs || next_mb != NULL) {
uintptr_t oldbuf = 0;
/* Sync with worker. Acquire bufptr64. */
- int64_t data = __atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE);
+ int64_t data = rte_atomic_load_explicit(&(d->bufs[wkr].bufptr64),
+ rte_memory_order_acquire);
if (!next_mb) {
next_mb = mbufs[next_idx++];
@@ -264,15 +264,15 @@ struct rte_mbuf *
if (d->backlog[wkr].count)
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
else {
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
next_value,
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
d->in_flight_tags[wkr] = new_tag;
d->in_flight_bitmask |= (1UL << wkr);
next_mb = NULL;
@@ -294,8 +294,8 @@ struct rte_mbuf *
for (wkr = 0; wkr < d->num_workers; wkr++)
if (d->backlog[wkr].count &&
/* Sync with worker. Acquire bufptr64. */
- (__atomic_load_n(&(d->bufs[wkr].bufptr64),
- __ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
+ (rte_atomic_load_explicit(&d->bufs[wkr].bufptr64,
+ rte_memory_order_acquire) & RTE_DISTRIB_GET_BUF)) {
int64_t oldbuf = d->bufs[wkr].bufptr64 >>
RTE_DISTRIB_FLAG_BITS;
@@ -303,9 +303,9 @@ struct rte_mbuf *
store_return(oldbuf, d, &ret_start, &ret_count);
/* Sync with worker. Release bufptr64. */
- __atomic_store_n(&(d->bufs[wkr].bufptr64),
+ rte_atomic_store_explicit(&d->bufs[wkr].bufptr64,
backlog_pop(&d->backlog[wkr]),
- __ATOMIC_RELEASE);
+ rte_memory_order_release);
}
d->returns.start = ret_start;
--
1.8.3.1
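The converted load/store pairs in this patch implement a release/acquire handshake on bufptr64. A minimal single-threaded sketch of that protocol, using plain C11 atomics in place of the rte_ wrappers; the flag constants below are illustrative stand-ins, not DPDK's actual values:

```c
#include <stdatomic.h>
#include <stdint.h>

#define FLAG_BITS 2	/* illustrative, cf. RTE_DISTRIB_FLAG_BITS */
#define GET_BUF   0x1	/* illustrative, cf. RTE_DISTRIB_GET_BUF   */

static volatile _Atomic int64_t bufptr64;

/* Worker side: publish a request; release orders prior writes. */
static void worker_request(uintptr_t oldpkt)
{
	int64_t req = ((int64_t)oldpkt << FLAG_BITS) | GET_BUF;
	/* Sync with distributor on GET_BUF flag. */
	atomic_store_explicit(&bufptr64, req, memory_order_release);
}

/* Distributor side: acquire pairs with the worker's release, so the
 * distributor observes everything written before the store above. */
static int request_pending(void)
{
	return (atomic_load_explicit(&bufptr64, memory_order_acquire)
	    & GET_BUF) != 0;
}
```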
* [PATCH v6 5/6] bpf: adapt for EAL optional atomics API changes
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
` (3 preceding siblings ...)
2023-08-22 21:00 ` [PATCH v6 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
` (2 subsequent siblings)
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Adapt bpf for EAL optional atomics API changes
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
lib/bpf/bpf_pkt.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index ffd2db7..7a8e4a6 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -25,7 +25,7 @@
struct bpf_eth_cbi {
/* used by both data & control path */
- uint32_t use; /*usage counter */
+ RTE_ATOMIC(uint32_t) use; /*usage counter */
const struct rte_eth_rxtx_callback *cb; /* callback handle */
struct rte_bpf *bpf;
struct rte_bpf_jit jit;
@@ -110,8 +110,8 @@ struct bpf_eth_cbh {
/* in use, busy wait till current RX/TX iteration is finished */
if ((puse & BPF_ETH_CBI_INUSE) != 0) {
- RTE_WAIT_UNTIL_MASKED((uint32_t *)(uintptr_t)&cbi->use,
- UINT32_MAX, !=, puse, __ATOMIC_RELAXED);
+ RTE_WAIT_UNTIL_MASKED((__rte_atomic uint32_t *)(uintptr_t)&cbi->use,
+ UINT32_MAX, !=, puse, rte_memory_order_relaxed);
}
}
--
1.8.3.1
* [PATCH v6 6/6] devtools: forbid new direct use of GCC atomic builtins
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
` (4 preceding siblings ...)
2023-08-22 21:00 ` [PATCH v6 5/6] bpf: " Tyler Retzlaff
@ 2023-08-22 21:00 ` Tyler Retzlaff
2023-08-29 15:57 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-09-29 14:09 ` David Marchand
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-22 21:00 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand, Tyler Retzlaff
Refrain from using compiler __atomic_xxx builtins. DPDK now requires
the use of rte_atomic_<op>_explicit macros when operating on DPDK
atomic variables.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
---
devtools/checkpatches.sh | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 43f5e36..3f051f5 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -102,20 +102,20 @@ check_forbidden_additions() { # <patch>
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_thread_fence()
+ # refrain from using compiler __rte_atomic_thread_fence()
# It should be avoided on x86 for SMP case.
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_thread_fence\\\(" \
+ -v EXPRESSIONS="__rte_atomic_thread_fence\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_thread_fence' \
+ -v MESSAGE='Using __rte_atomic_thread_fence, prefer rte_atomic_thread_fence' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
- # refrain from using compiler __atomic_{add,and,nand,or,sub,xor}_fetch()
+ # refrain from using compiler __atomic_xxx builtins
awk -v FOLDERS="lib drivers app examples" \
- -v EXPRESSIONS="__atomic_(add|and|nand|or|sub|xor)_fetch\\\(" \
+ -v EXPRESSIONS="__atomic_.*\\\(" \
-v RET_ON_FAIL=1 \
- -v MESSAGE='Using __atomic_op_fetch, prefer __atomic_fetch_op' \
+ -v MESSAGE='Using __atomic_xxx built-ins, prefer rte_atomic_xxx' \
-f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
"$1" || res=1
--
1.8.3.1
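The widened expression can be exercised with a simplified stand-in for check-forbidden-tokens.awk. A sketch; the grep below only approximates the awk matching and is not the actual devtools script:

```shell
# Flag any added line in a patch that calls a compiler __atomic_* builtin,
# approximating the widened EXPRESSIONS="__atomic_.*\(" check above.
patch='+++ b/lib/foo.c
+    __atomic_fetch_add(&v, 1, __ATOMIC_SEQ_CST);'

# ^\+[^+] skips the "+++" file header and matches only added lines.
if printf '%s\n' "$patch" | grep -qE '^\+[^+].*__atomic_.*\('; then
    echo 'Using __atomic_xxx built-ins, prefer rte_atomic_xxx'
fi
```

Run against the sample patch above, this prints the warning message, while a patch using rte_atomic_fetch_add_explicit() would pass silently.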
* Re: [PATCH v6 0/6] rte atomics API for optional stdatomic
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
` (5 preceding siblings ...)
2023-08-22 21:00 ` [PATCH v6 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
@ 2023-08-29 15:57 ` Tyler Retzlaff
2023-09-29 14:09 ` David Marchand
7 siblings, 0 replies; 82+ messages in thread
From: Tyler Retzlaff @ 2023-08-29 15:57 UTC (permalink / raw)
To: dev
Cc: techboard, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt,
Thomas Monjalon, David Marchand
ping for additional reviewers.
thanks!
On Tue, Aug 22, 2023 at 02:00:39PM -0700, Tyler Retzlaff wrote:
> This series introduces API additions, prefixed in the rte namespace, that allow
> the optional use of C11 stdatomic.h via enable_stdatomics=true. For
> targets where enable_stdatomics=false, no functional change is intended.
>
> Be aware this does not contain all changes needed to use stdatomics across the
> DPDK tree; it only introduces the minimum to allow the option to be used, which
> is a prerequisite for a clean CI run (probably using clang)
> with enable_stdatomics=true.
>
> It is planned that subsequent series will be introduced per lib/driver as
> appropriate to further enable stdatomics use when enable_stdatomics=true.
>
> Notes:
>
> * Additional libraries beyond EAL make atomics use visible across the
> API/ABI surface; they will be converted in the subsequent series.
>
> * The 'eal: add rte atomic qualifier with casts' patch needs some discussion
> as to whether or not the legacy rte_atomic APIs should be converted to
> work with enable_stdatomic=true; right now some implementation-dependent
> casts are used to prevent cascading / having to convert too much in
> the initial series.
>
> * Windows will obviously need complete conversion of libraries, including
> atomics that are not crossing API/ABI boundaries. Those conversions will
> be introduced in separate series alongside the existing msvc series.
>
> Please keep in mind we would like to prioritize the review / acceptance of
> this patch since it needs to be completed in the 23.11 merge window.
>
> Thank you all for the discussion that led to the formation of this series.
>
> v6:
> * Adjust checkpatches to warn about use of __rte_atomic_thread_fence
> and suggest use of rte_atomic_thread_fence. Use the existing, more
> generic check for __atomic_xxx to catch use of __atomic_thread_fence
> and recommend rte_atomic_xxx.
>
> v5:
> * Add RTE_ATOMIC to doxygen configuration PREDEFINED macros list to
> fix documentation generation failure
> * Fix two typos in expansion of C11 atomics macros strong -> weak and
> add missing _explicit
> * Adjust devtools/checkpatches messages based on feedback. I have chosen
> not to try to catch direct use of C11 atomics or _Atomic, since such
> use will be picked up as a compilation error by existing CI passes
> where enable_stdatomic=false (the default for most platforms)
>
> v4:
> * Move the definition of #define RTE_ATOMIC(type) to patch 1 where it
> belongs (a mistake in v3)
> * Provide comments for both RTE_ATOMIC and __rte_atomic macros indicating
> their use as specified or qualified contexts.
>
> v3:
> * Remove comments from APIs mentioning the mapping to C++ memory model
> memory orders
> * Introduce and use new macro RTE_ATOMIC(type) to be used in contexts
> where _Atomic is used as a type specifier to declare variables. The
> macro allows more clarity about what the atomic type being specified
> is. e.g. _Atomic(T *) vs _Atomic(T) it is easier to understand that
> the former is an atomic pointer type and the latter is an atomic
> type. it also has the benefit of (in the future) being interoperable
> with c++23 syntactically
> note: Morten, I have retained your 'Reviewed-by' tags; if you disagree
> given the changes in the above version, please indicate as such, but
> I believe the changes are in the spirit of the feedback you provided
>
> v2:
> * Wrap meson_options.txt option description to newline and indent to
> be consistent with other options.
> * Provide separate typedef of rte_memory_order for enable_stdatomic=true
> VS enable_stdatomic=false instead of a single typedef to int
> note: as a slight tweak to the reviewers' feedback, I've chosen to use a typedef
> for both enable_stdatomic={true,false} (it just seemed more consistent)
> * Bring in assert.h and use static_assert macro instead of _Static_assert
> keyword to better interoperate with c/c++
> * Directly include rte_stdatomic.h in the other places where it is consumed
> instead of hacking it globally into rte_config.h
> * Provide and use __rte_atomic_thread_fence to allow conditional expansion
> within the body of existing rte_atomic_thread_fence inline function to
> maintain per-arch optimizations when enable_stdatomic=false
>
> Tyler Retzlaff (6):
> eal: provide rte stdatomics optional atomics API
> eal: adapt EAL to present rte optional atomics API
> eal: add rte atomic qualifier with casts
> distributor: adapt for EAL optional atomics API changes
> bpf: adapt for EAL optional atomics API changes
> devtools: forbid new direct use of GCC atomic builtins
>
> app/test/test_mcslock.c | 6 +-
> config/meson.build | 1 +
> devtools/checkpatches.sh | 12 +-
> doc/api/doxy-api.conf.in | 1 +
> lib/bpf/bpf_pkt.c | 6 +-
> lib/distributor/distributor_private.h | 2 +-
> lib/distributor/rte_distributor_single.c | 44 +++----
> lib/eal/arm/include/rte_atomic_32.h | 4 +-
> lib/eal/arm/include/rte_atomic_64.h | 36 +++---
> lib/eal/arm/include/rte_pause_64.h | 26 ++--
> lib/eal/arm/rte_power_intrinsics.c | 8 +-
> lib/eal/common/eal_common_trace.c | 16 +--
> lib/eal/include/generic/rte_atomic.h | 67 +++++++----
> lib/eal/include/generic/rte_pause.h | 50 ++++----
> lib/eal/include/generic/rte_rwlock.h | 48 ++++----
> lib/eal/include/generic/rte_spinlock.h | 20 ++--
> lib/eal/include/meson.build | 1 +
> lib/eal/include/rte_mcslock.h | 51 ++++----
> lib/eal/include/rte_pflock.h | 25 ++--
> lib/eal/include/rte_seqcount.h | 19 +--
> lib/eal/include/rte_stdatomic.h | 198 +++++++++++++++++++++++++++++++
> lib/eal/include/rte_ticketlock.h | 43 +++----
> lib/eal/include/rte_trace_point.h | 5 +-
> lib/eal/loongarch/include/rte_atomic.h | 4 +-
> lib/eal/ppc/include/rte_atomic.h | 54 ++++-----
> lib/eal/riscv/include/rte_atomic.h | 4 +-
> lib/eal/x86/include/rte_atomic.h | 8 +-
> lib/eal/x86/include/rte_spinlock.h | 2 +-
> lib/eal/x86/rte_power_intrinsics.c | 7 +-
> meson_options.txt | 2 +
> 30 files changed, 501 insertions(+), 269 deletions(-)
> create mode 100644 lib/eal/include/rte_stdatomic.h
>
> --
> 1.8.3.1
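The v2 note about providing rte_memory_order for both modes can be sketched for the enable_stdatomic=false path, where the type is a plain int and the macros forward to the GCC builtins. The demo_ names below are illustrative stand-ins for the rte_ ones:

```c
#include <stdint.h>

typedef int demo_memory_order;			/* rte_memory_order */
#define demo_memory_order_relaxed __ATOMIC_RELAXED	/* builtin constant */

/* Forwarding macro, as in the enable_stdatomic=false branch. */
#define demo_atomic_fetch_add_explicit(ptr, val, memorder) \
	__atomic_fetch_add(ptr, val, memorder)

static uint32_t counter;

/* Returns the pre-increment value, like __atomic_fetch_add. */
static uint32_t bump(void)
{
	return demo_atomic_fetch_add_explicit(&counter, 1,
	    demo_memory_order_relaxed);
}
```

Because the __ATOMIC_* constants are plain integers, a typedef to int suffices here, while the stdatomic path needs the real memory_order enumeration; hence the separate typedefs.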
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-08-22 21:00 ` [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
@ 2023-09-28 8:06 ` Thomas Monjalon
2023-09-29 8:04 ` David Marchand
0 siblings, 1 reply; 82+ messages in thread
From: Thomas Monjalon @ 2023-09-28 8:06 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, David Marchand
22/08/2023 23:00, Tyler Retzlaff:
> --- a/lib/eal/include/generic/rte_rwlock.h
> +++ b/lib/eal/include/generic/rte_rwlock.h
> @@ -32,6 +32,7 @@
> #include <rte_common.h>
> #include <rte_lock_annotations.h>
> #include <rte_pause.h>
> +#include <rte_stdatomic.h>
I'm not sure about adding the include in patch 1 if it is not used here.
> --- /dev/null
> +++ b/lib/eal/include/rte_stdatomic.h
> @@ -0,0 +1,198 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_STDATOMIC_H_
> +#define _RTE_STDATOMIC_H_
> +
> +#include <assert.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#ifdef RTE_ENABLE_STDATOMIC
> +#ifdef __STDC_NO_ATOMICS__
> +#error enable_stdatomics=true but atomics not supported by toolchain
> +#endif
> +
> +#include <stdatomic.h>
> +
> +/* RTE_ATOMIC(type) is provided for use as a type specifier
> + * permitting designation of an rte atomic type.
> + */
> +#define RTE_ATOMIC(type) _Atomic(type)
> +
> +/* __rte_atomic is provided for type qualification permitting
> + * designation of an rte atomic qualified type-name.
Sorry I don't understand this comment.
> + */
> +#define __rte_atomic _Atomic
> +
> +/* The memory order is an enumerated type in C11. */
> +typedef memory_order rte_memory_order;
> +
> +#define rte_memory_order_relaxed memory_order_relaxed
> +#ifdef __ATOMIC_RELAXED
> +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
Not sure about using static_assert or RTE_BUILD_BUG_ON
> +#endif
> +
> +#define rte_memory_order_consume memory_order_consume
> +#ifdef __ATOMIC_CONSUME
> +static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
> + "rte_memory_order_consume == __ATOMIC_CONSUME");
> +#endif
> +
> +#define rte_memory_order_acquire memory_order_acquire
> +#ifdef __ATOMIC_ACQUIRE
> +static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
> + "rte_memory_order_acquire == __ATOMIC_ACQUIRE");
> +#endif
> +
> +#define rte_memory_order_release memory_order_release
> +#ifdef __ATOMIC_RELEASE
> +static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
> + "rte_memory_order_release == __ATOMIC_RELEASE");
> +#endif
> +
> +#define rte_memory_order_acq_rel memory_order_acq_rel
> +#ifdef __ATOMIC_ACQ_REL
> +static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
> + "rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
> +#endif
> +
> +#define rte_memory_order_seq_cst memory_order_seq_cst
> +#ifdef __ATOMIC_SEQ_CST
> +static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
> + "rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
> +#endif
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> + atomic_load_explicit(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> + atomic_store_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> + atomic_exchange_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> + atomic_fetch_add_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> + atomic_fetch_sub_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> + atomic_fetch_and_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> + atomic_fetch_xor_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> + atomic_fetch_or_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> + atomic_fetch_nand_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> + atomic_flag_test_and_set_explicit(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> + atomic_flag_clear_explicit(ptr, memorder)
> +
> +/* We provide internal macro here to allow conditional expansion
> + * in the body of the per-arch rte_atomic_thread_fence inline functions.
> + */
> +#define __rte_atomic_thread_fence(memorder) \
> + atomic_thread_fence(memorder)
> +
> +#else
Better to add some context in comment of this "else": /* !RTE_ENABLE_STDATOMIC */
> +
> +/* RTE_ATOMIC(type) is provided for use as a type specifier
> + * permitting designation of an rte atomic type.
> + */
The comment should say it has no effect.
Or no comment at all for this part.
> +#define RTE_ATOMIC(type) type
> +
> +/* __rte_atomic is provided for type qualification permitting
> + * designation of an rte atomic qualified type-name.
> + */
> +#define __rte_atomic
> +
> +/* The memory order is an integer type in GCC built-ins,
> + * not an enumerated type like in C11.
> + */
> +typedef int rte_memory_order;
> +
> +#define rte_memory_order_relaxed __ATOMIC_RELAXED
> +#define rte_memory_order_consume __ATOMIC_CONSUME
> +#define rte_memory_order_acquire __ATOMIC_ACQUIRE
> +#define rte_memory_order_release __ATOMIC_RELEASE
> +#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
> +#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> + __atomic_load_n(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> + __atomic_store_n(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> + __atomic_exchange_n(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + __atomic_compare_exchange_n( \
> + ptr, expected, desired, 0, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> + ptr, expected, desired, succ_memorder, fail_memorder) \
> + __atomic_compare_exchange_n( \
> + ptr, expected, desired, 1, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> + __atomic_fetch_add(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> + __atomic_fetch_sub(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> + __atomic_fetch_and(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> + __atomic_fetch_xor(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> + __atomic_fetch_or(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> + __atomic_fetch_nand(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> + __atomic_test_and_set(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> + __atomic_clear(ptr, memorder)
> +
> +/* We provide internal macro here to allow conditional expansion
> + * in the body of the per-arch rte_atomic_thread_fence inline functions.
> + */
> +#define __rte_atomic_thread_fence(memorder) \
> + __atomic_thread_fence(memorder)
> +
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_STDATOMIC_H_ */
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-28 8:06 ` Thomas Monjalon
@ 2023-09-29 8:04 ` David Marchand
2023-09-29 8:54 ` Morten Brørup
0 siblings, 1 reply; 82+ messages in thread
From: David Marchand @ 2023-09-29 8:04 UTC (permalink / raw)
To: Thomas Monjalon, Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt
On Thu, Sep 28, 2023 at 10:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/08/2023 23:00, Tyler Retzlaff:
> > --- a/lib/eal/include/generic/rte_rwlock.h
> > +++ b/lib/eal/include/generic/rte_rwlock.h
> > @@ -32,6 +32,7 @@
> > #include <rte_common.h>
> > #include <rte_lock_annotations.h>
> > #include <rte_pause.h>
> > +#include <rte_stdatomic.h>
>
> I'm not sure about adding the include in patch 1 if it is not used here.
Yes, this is something I had already fixed locally.
>
> > --- /dev/null
> > +++ b/lib/eal/include/rte_stdatomic.h
> > @@ -0,0 +1,198 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2023 Microsoft Corporation
> > + */
> > +
> > +#ifndef _RTE_STDATOMIC_H_
> > +#define _RTE_STDATOMIC_H_
> > +
> > +#include <assert.h>
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#ifdef RTE_ENABLE_STDATOMIC
> > +#ifdef __STDC_NO_ATOMICS__
> > +#error enable_stdatomics=true but atomics not supported by toolchain
> > +#endif
> > +
> > +#include <stdatomic.h>
> > +
> > +/* RTE_ATOMIC(type) is provided for use as a type specifier
> > + * permitting designation of an rte atomic type.
> > + */
> > +#define RTE_ATOMIC(type) _Atomic(type)
> > +
> > +/* __rte_atomic is provided for type qualification permitting
> > + * designation of an rte atomic qualified type-name.
>
> Sorry I don't understand this comment.
The difference between atomic qualifier and atomic specifier and the
need for exposing those two notions are not obvious to me.
One clue I have is with one use later in the series:
+rte_mcslock_lock(RTE_ATOMIC(rte_mcslock_t *) *msl, rte_mcslock_t *me)
...
+ prev = rte_atomic_exchange_explicit(msl, me, rte_memory_order_acq_rel);
So at least RTE_ATOMIC() seems necessary.
>
> > + */
> > +#define __rte_atomic _Atomic
> > +
> > +/* The memory order is an enumerated type in C11. */
> > +typedef memory_order rte_memory_order;
> > +
> > +#define rte_memory_order_relaxed memory_order_relaxed
> > +#ifdef __ATOMIC_RELAXED
> > +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> > + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
>
> Not sure about using static_assert or RTE_BUILD_BUG_ON
Do you mean you want no check at all in a public facing header?
Or is it that we have RTE_BUILD_BUG_ON and we should keep using it
instead of static_assert?
I remember some problems with RTE_BUILD_BUG_ON where the compiler
would silently drop the whole expression and report no problem as it
could not evaluate the expression.
At least, with static_assert (iirc, it is new to C11) the compiler
complains with a clear "error: expression in static assertion is not
constant".
We could fix RTE_BUILD_BUG_ON, but I guess the fix would be equivalent
to map it to static_assert(!condition).
Using language standard constructs seems a better choice.
--
David Marchand
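To make the specifier/qualifier distinction concrete, here is a hedged, standalone sketch. The two macro definitions are reproduced from the patch; the counters and the `bump_both()` helper are hypothetical illustrations, not DPDK code:

```c
#include <stdatomic.h>
#include <stdint.h>

/* As in the patch: with RTE_ENABLE_STDATOMIC both forms map to C11 _Atomic. */
#define RTE_ATOMIC(type) _Atomic(type)
#define __rte_atomic _Atomic

/* Specifier form: _Atomic(T) names an atomic type directly, which is
 * needed for derived types such as the "pointer to atomic pointer"
 * parameter in the rte_mcslock example quoted above. */
static RTE_ATOMIC(uint32_t) spec_counter;

/* Qualifier form: __rte_atomic qualifies a declaration the same way
 * const or volatile would. */
static uint32_t __rte_atomic qual_counter;

uint32_t
bump_both(void)
{
	atomic_fetch_add_explicit(&spec_counter, 1, memory_order_relaxed);
	atomic_fetch_add_explicit(&qual_counter, 1, memory_order_relaxed);
	return atomic_load_explicit(&spec_counter, memory_order_relaxed) +
	    atomic_load_explicit(&qual_counter, memory_order_relaxed);
}
```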
^ permalink raw reply [flat|nested] 82+ messages in thread
* RE: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 8:04 ` David Marchand
@ 2023-09-29 8:54 ` Morten Brørup
2023-09-29 9:02 ` David Marchand
0 siblings, 1 reply; 82+ messages in thread
From: Morten Brørup @ 2023-09-29 8:54 UTC (permalink / raw)
To: David Marchand, Thomas Monjalon, Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt
> From: David Marchand [mailto:david.marchand@redhat.com]
> Sent: Friday, 29 September 2023 10.04
>
> On Thu, Sep 28, 2023 at 10:06 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 22/08/2023 23:00, Tyler Retzlaff:
[...]
> > > +/* The memory order is an enumerated type in C11. */
> > > +typedef memory_order rte_memory_order;
> > > +
> > > +#define rte_memory_order_relaxed memory_order_relaxed
> > > +#ifdef __ATOMIC_RELAXED
> > > +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> > > + "rte_memory_order_relaxed == __ATOMIC_RELAXED");
> >
> > Not sure about using static_assert or RTE_BUILD_BUG_ON
>
> Do you mean you want no check at all in a public facing header?
>
> Or is it that we have RTE_BUILD_BUG_ON and we should keep using it
> instead of static_assert?
>
> I remember some problems with RTE_BUILD_BUG_ON where the compiler
> would silently drop the whole expression and reported no problem as it
> could not evaluate the expression.
> At least, with static_assert (iirc, it is new to C11) the compiler
> complains with a clear "error: expression in static assertion is not
> constant".
> We could fix RTE_BUILD_BUG_ON, but I guess the fix would be equivalent
> to map it to static_assert(!condition).
> Using language standard constructs seems a better choice.
+1 to using language standard constructs.
static_assert became standard in C11. (Formally, _Static_assert is standard C11, and static_assert is available through a convenience macro in C11 [1].)
In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
[1]: https://en.cppreference.com/w/c/language/_Static_assert
PS: static_assert also has the advantage that it can be used directly in header files. RTE_BUILD_BUG_ON can only be used in functions, and thus needs to be wrapped in a dummy (static inline) function when used in a header file.
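A minimal sketch of that last point, assuming a toy RTE_BUILD_BUG_ON built on the classic negative-array-size trick (the check values here are illustrative):

```c
#include <assert.h>	/* static_assert convenience macro in C11 */
#include <stdint.h>

/* The sizeof trick is an expression, so in a header it has to live
 * inside a (dummy) function body: */
#define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))

static inline void
header_build_checks(void)
{
	RTE_BUILD_BUG_ON(sizeof(uint64_t) != 8);
}

/* static_assert is a declaration, so it is legal directly at file
 * scope; no wrapper function needed: */
static_assert(sizeof(uint32_t) == 4, "uint32_t must be 4 bytes");

int
checks_pass(void)
{
	header_build_checks();
	return 1;
}
```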
>
>
> --
> David Marchand
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 8:54 ` Morten Brørup
@ 2023-09-29 9:02 ` David Marchand
2023-09-29 9:26 ` Bruce Richardson
0 siblings, 1 reply; 82+ messages in thread
From: David Marchand @ 2023-09-29 9:02 UTC (permalink / raw)
To: Morten Brørup
Cc: Thomas Monjalon, Tyler Retzlaff, dev, techboard,
Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang,
Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
That's my thought too.
>
> We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
For a clear deprecation of a part of DPDK API, I don't see a need to
add something in checkpatch.
Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
warning (caught by CI since we run with Werror).
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index 771c70f2c8..40542629c1 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -495,7 +495,7 @@ rte_is_aligned(const void * const __rte_restrict
ptr, const unsigned int align)
/**
* Triggers an error at compilation time if the condition is true.
*/
-#define RTE_BUILD_BUG_ON(condition) ((void)sizeof(char[1 - 2*!!(condition)]))
+#define RTE_BUILD_BUG_ON(condition) RTE_DEPRECATED(RTE_BUILD_BUG_ON) \
+	((void)sizeof(char[1 - 2*!!(condition)]))
/*********** Cache line related macros ********/
$ ninja -C build-mini
...
[18/333] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
../lib/eal/common/eal_common_trace.c: In function ‘eal_trace_init’:
../lib/eal/common/eal_common_trace.c:44:20: warning: "RTE_BUILD_BUG_ON" is deprecated
   44 |        RTE_BUILD_BUG_ON((offsetof(struct __rte_trace_header, mem) % 8) != 0);
      |        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[38/333] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
../lib/eal/common/malloc_heap.c: In function ‘malloc_heap_destroy’:
../lib/eal/common/malloc_heap.c:1398:20: warning: "RTE_BUILD_BUG_ON" is deprecated
1398 | RTE_BUILD_BUG_ON(offsetof(struct malloc_heap, lock) != 0);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[50/333] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
../lib/eal/unix/rte_thread.c: In function ‘rte_thread_self’:
../lib/eal/unix/rte_thread.c:239:20: warning: "RTE_BUILD_BUG_ON" is deprecated
239 | RTE_BUILD_BUG_ON(sizeof(pthread_t) > sizeof(uintptr_t));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
David Marchand
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 9:02 ` David Marchand
@ 2023-09-29 9:26 ` Bruce Richardson
2023-09-29 9:34 ` David Marchand
0 siblings, 1 reply; 82+ messages in thread
From: Bruce Richardson @ 2023-09-29 9:26 UTC (permalink / raw)
To: David Marchand
Cc: Morten Brørup, Thomas Monjalon, Tyler Retzlaff, dev,
techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
>
> That's my thought too.
>
> >
> > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
>
> For a clear deprecation of a part of DPDK API, I don't see a need to
> add something in checkpatch.
> Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> warning (caught by CI since we run with Werror).
>
Would it not be sufficient to just make it an alias for the C11 static
assertions? It's not like it's a lot of code to maintain, and if app users
have it in their code I'm not sure we get massive benefit from forcing them
to edit their code. I'd rather see it kept as a one-line macro purely from
a backward compatibility viewpoint. We can replace internal usages, though
- which can be checked by checkpatch.
/Bruce
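A sketch of the alias Bruce suggests (hypothetical; not what was merged in DPDK):

```c
#include <assert.h>

/* Keep the old name for compatibility but back it with a C11 static
 * assertion, which gives a clear diagnostic when the condition is not
 * a constant expression. */
#define RTE_BUILD_BUG_ON(condition) \
	static_assert(!(condition), "BUILD_BUG_ON failed: " #condition)

int
alias_compiles(void)
{
	RTE_BUILD_BUG_ON(sizeof(int) < 2);	/* condition is false, so this compiles */
	return 1;
}
```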
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 9:26 ` Bruce Richardson
@ 2023-09-29 9:34 ` David Marchand
2023-09-29 10:26 ` Thomas Monjalon
0 siblings, 1 reply; 82+ messages in thread
From: David Marchand @ 2023-09-29 9:34 UTC (permalink / raw)
To: Bruce Richardson
Cc: Morten Brørup, Thomas Monjalon, Tyler Retzlaff, dev,
techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> >
> > That's my thought too.
> >
> > >
> > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> >
> > For a clear deprecation of a part of DPDK API, I don't see a need to
> > add something in checkpatch.
> > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > warning (caught by CI since we run with Werror).
> >
>
> Would it not be sufficient to just make it an alias for the C11 static
> assertions? It's not like its a lot of code to maintain, and if app users
> have it in their code I'm not sure we get massive benefit from forcing them
> to edit their code. I'd rather see it kept as a one-line macro purely from
> a backward compatibility viewpoint. We can replace internal usages, though
> - which can be checked by checkpatch.
No, there is no massive benefit, just trying to reduce our ever
growing API surface.
Note, this macro should have been kept internal but it was introduced
at a time such matter was not considered...
--
David Marchand
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 9:34 ` David Marchand
@ 2023-09-29 10:26 ` Thomas Monjalon
2023-09-29 11:38 ` David Marchand
0 siblings, 1 reply; 82+ messages in thread
From: Thomas Monjalon @ 2023-09-29 10:26 UTC (permalink / raw)
To: Bruce Richardson, David Marchand
Cc: Morten Brørup, Tyler Retzlaff, dev, techboard,
Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
29/09/2023 11:34, David Marchand:
> On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
> <bruce.richardson@intel.com> wrote:
> >
> > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > >
> > > That's my thought too.
> > >
> > > >
> > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > >
> > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > add something in checkpatch.
> > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > warning (caught by CI since we run with Werror).
> > >
> >
> > Would it not be sufficient to just make it an alias for the C11 static
> > assertions? It's not like its a lot of code to maintain, and if app users
> > have it in their code I'm not sure we get massive benefit from forcing them
> > to edit their code. I'd rather see it kept as a one-line macro purely from
> > a backward compatibility viewpoint. We can replace internal usages, though
> > - which can be checked by checkpatch.
>
> No, there is no massive benefit, just trying to reduce our ever
> growing API surface.
>
> Note, this macro should have been kept internal but it was introduced
> at a time such matter was not considered...
I agree with all.
Now taking techboard hat, we agreed to avoid breaking API if possible.
So we should keep RTE_BUILD_BUG_ON forever even if not used.
Internally we can replace its usages.
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 10:26 ` Thomas Monjalon
@ 2023-09-29 11:38 ` David Marchand
2023-09-29 11:51 ` Thomas Monjalon
0 siblings, 1 reply; 82+ messages in thread
From: David Marchand @ 2023-09-29 11:38 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff, dev,
techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
On Fri, Sep 29, 2023 at 12:26 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 29/09/2023 11:34, David Marchand:
> > On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
> > <bruce.richardson@intel.com> wrote:
> > >
> > > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > > >
> > > > That's my thought too.
> > > >
> > > > >
> > > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > > >
> > > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > > add something in checkpatch.
> > > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > > warning (caught by CI since we run with Werror).
> > > >
> > >
> > > Would it not be sufficient to just make it an alias for the C11 static
> > > assertions? It's not like its a lot of code to maintain, and if app users
> > > have it in their code I'm not sure we get massive benefit from forcing them
> > > to edit their code. I'd rather see it kept as a one-line macro purely from
> > > a backward compatibility viewpoint. We can replace internal usages, though
> > > - which can be checked by checkpatch.
> >
> > No, there is no massive benefit, just trying to reduce our ever
> > growing API surface.
> >
> > Note, this macro should have been kept internal but it was introduced
> > at a time such matter was not considered...
>
> I agree with all.
> Now taking techboard hat, we agreed to avoid breaking API if possible.
> So we should keep RTE_BUILD_BUG_ON forever even if not used.
> Internally we can replace its usages.
So back to the original topic, I get that static_assert is ok for this patch.
--
David Marchand
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
2023-09-29 11:38 ` David Marchand
@ 2023-09-29 11:51 ` Thomas Monjalon
0 siblings, 0 replies; 82+ messages in thread
From: Thomas Monjalon @ 2023-09-29 11:51 UTC (permalink / raw)
To: David Marchand
Cc: Bruce Richardson, Morten Brørup, Tyler Retzlaff, dev,
techboard, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob,
Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong,
David Christensen, Konstantin Ananyev, David Hunt
29/09/2023 13:38, David Marchand:
> On Fri, Sep 29, 2023 at 12:26 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 29/09/2023 11:34, David Marchand:
> > > On Fri, Sep 29, 2023 at 11:26 AM Bruce Richardson
> > > <bruce.richardson@intel.com> wrote:
> > > >
> > > > On Fri, Sep 29, 2023 at 11:02:38AM +0200, David Marchand wrote:
> > > > > On Fri, Sep 29, 2023 at 10:54 AM Morten Brørup <mb@smartsharesystems.com> wrote:
> > > > > > In my opinion, our move to C11 thus makes RTE_BUILD_BUG_ON obsolete.
> > > > >
> > > > > That's my thought too.
> > > > >
> > > > > >
> > > > > > We should mark RTE_BUILD_BUG_ON as deprecated, and disallow RTE_BUILD_BUG_ON in new code. Perhaps checkpatches could catch this?
> > > > >
> > > > > For a clear deprecation of a part of DPDK API, I don't see a need to
> > > > > add something in checkpatch.
> > > > > Putting a RTE_DEPRECATED in RTE_BUILD_BUG_ON directly triggers a build
> > > > > warning (caught by CI since we run with Werror).
> > > > >
> > > >
> > > > Would it not be sufficient to just make it an alias for the C11 static
> > > > assertions? It's not like its a lot of code to maintain, and if app users
> > > > have it in their code I'm not sure we get massive benefit from forcing them
> > > > to edit their code. I'd rather see it kept as a one-line macro purely from
> > > > a backward compatibility viewpoint. We can replace internal usages, though
> > > > - which can be checked by checkpatch.
> > >
> > > No, there is no massive benefit, just trying to reduce our ever
> > > growing API surface.
> > >
> > > Note, this macro should have been kept internal but it was introduced
> > > at a time such matter was not considered...
> >
> > I agree with all.
> > Now taking techboard hat, we agreed to avoid breaking API if possible.
> > So we should keep RTE_BUILD_BUG_ON forever even if not used.
> > Internally we can replace its usages.
>
> So back to the original topic, I get that static_assert is ok for this patch.
Yes we can use static_assert.
^ permalink raw reply [flat|nested] 82+ messages in thread
* Re: [PATCH v6 0/6] rte atomics API for optional stdatomic
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
` (6 preceding siblings ...)
2023-08-29 15:57 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
@ 2023-09-29 14:09 ` David Marchand
7 siblings, 0 replies; 82+ messages in thread
From: David Marchand @ 2023-09-29 14:09 UTC (permalink / raw)
To: Tyler Retzlaff
Cc: dev, techboard, Bruce Richardson, Honnappa Nagarahalli,
Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori,
Mattias Rönnblom, Joyce Kong, David Christensen,
Konstantin Ananyev, David Hunt, Thomas Monjalon
Hello,
On Tue, Aug 22, 2023 at 11:00 PM Tyler Retzlaff
<roretzla@linux.microsoft.com> wrote:
>
> This series introduces API additions prefixed in the rte namespace that allow
> the optional use of C11 stdatomic.h via enable_stdatomics=true; for targets
> built with enable_stdatomics=false, no functional change is intended.
>
> Be aware this does not contain all changes needed to use stdatomics across
> the DPDK tree; it only introduces the minimum to allow the option to be
> used, which is a prerequisite for a clean CI (probably using clang) run
> with enable_stdatomics=true.
>
> It is planned that subsequent series will be introduced per lib/driver as
> appropriate to further enable stdatomics use when enable_stdatomics=true.
>
> Notes:
>
> * Additional libraries beyond EAL make atomics use visible across the
>   API/ABI surface; they will be converted in the subsequent series.
>
> * The "eal: add rte atomic qualifier with casts" patch needs some discussion
>   as to whether or not the legacy rte_atomic APIs should be converted to
>   work with enable_stdatomic=true; right now some implementation-dependent
>   casts are used to prevent cascading / having to convert too much in
>   the initial series.
>
> * Windows will obviously need complete conversion of libraries, including
>   atomics that are not crossing API/ABI boundaries. Those conversions will
>   be introduced in separate series alongside the existing MSVC series.
>
> Please keep in mind we would like to prioritize the review / acceptance of
> this patch since it needs to be completed in the 23.11 merge window.
>
> Thank you all for the discussion that led to the formation of this series.
I did a number of updates on this v6:
- moved rte_stdatomic.h from patch 1 to later patches where needed,
- added a RN entry,
- adjusted indentation for consistency,
- fixed mentions of stdatomic*s* to simple atomic, like in the build
option name,
- removed unneeded comments (Thomas review on patch 1).
Series applied, thanks Tyler.
Two things are missing:
- add doxygen tags in the new API (this can be fixed later in this
release, can you look at it?),
- add compilation tests for enable_stdatomic (I'll send a patch soon
for devtools and GHA),
--
David Marchand
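The missing doxygen tags David mentions would take roughly this shape — the wording below is a hypothetical sketch, not the text that was later merged:

```c
/**
 * Atomically replace the value of an atomic object (illustrative
 * comment for one of the new rte_atomic_*_explicit wrappers).
 *
 * @param obj
 *   Pointer to the atomic object to modify.
 * @param desired
 *   Value to store in the object.
 * @param order
 *   Memory ordering constraint (an rte_memory_order value).
 * @return
 *   The value previously held by the object.
 */
```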
^ permalink raw reply [flat|nested] 82+ messages in thread
end of thread, other threads:[~2023-09-29 14:10 UTC | newest]
Thread overview: 82+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-11 1:31 [PATCH 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-11 8:56 ` Bruce Richardson
2023-08-11 9:42 ` Morten Brørup
2023-08-11 15:54 ` Tyler Retzlaff
2023-08-14 9:04 ` Morten Brørup
2023-08-11 1:31 ` [PATCH 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-11 1:31 ` [PATCH 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-11 1:32 ` [PATCH 5/6] bpf: " Tyler Retzlaff
2023-08-11 1:32 ` [PATCH 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-11 8:57 ` Bruce Richardson
2023-08-11 9:51 ` Morten Brørup
2023-08-11 15:56 ` Tyler Retzlaff
2023-08-14 6:37 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-14 7:06 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-14 8:00 ` Morten Brørup
2023-08-14 17:47 ` Tyler Retzlaff
2023-08-16 20:13 ` Morten Brørup
2023-08-16 20:32 ` Tyler Retzlaff
2023-08-11 17:32 ` [PATCH v2 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-14 8:05 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-14 8:07 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 5/6] bpf: " Tyler Retzlaff
2023-08-14 8:11 ` Morten Brørup
2023-08-11 17:32 ` [PATCH v2 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-14 8:12 ` Morten Brørup
2023-08-16 19:19 ` [PATCH v3 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-16 20:55 ` Morten Brørup
2023-08-16 21:04 ` Tyler Retzlaff
2023-08-16 21:08 ` Morten Brørup
2023-08-16 21:10 ` Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 5/6] bpf: " Tyler Retzlaff
2023-08-16 19:19 ` [PATCH v3 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 0/6] RFC optional rte optional stdatomics API Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-17 11:45 ` Morten Brørup
2023-08-17 19:09 ` Tyler Retzlaff
2023-08-18 6:55 ` Morten Brørup
2023-08-16 21:38 ` [PATCH v4 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 5/6] bpf: " Tyler Retzlaff
2023-08-16 21:38 ` [PATCH v4 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-17 11:57 ` Morten Brørup
2023-08-17 19:14 ` Tyler Retzlaff
2023-08-18 7:13 ` Morten Brørup
2023-08-22 18:14 ` Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 0/6] optional rte optional stdatomics API Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 5/6] bpf: " Tyler Retzlaff
2023-08-17 21:42 ` [PATCH v5 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-21 22:27 ` [PATCH v5 0/6] optional rte optional stdatomics API Konstantin Ananyev
2023-08-22 21:00 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API Tyler Retzlaff
2023-09-28 8:06 ` Thomas Monjalon
2023-09-29 8:04 ` David Marchand
2023-09-29 8:54 ` Morten Brørup
2023-09-29 9:02 ` David Marchand
2023-09-29 9:26 ` Bruce Richardson
2023-09-29 9:34 ` David Marchand
2023-09-29 10:26 ` Thomas Monjalon
2023-09-29 11:38 ` David Marchand
2023-09-29 11:51 ` Thomas Monjalon
2023-08-22 21:00 ` [PATCH v6 2/6] eal: adapt EAL to present rte " Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 3/6] eal: add rte atomic qualifier with casts Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 4/6] distributor: adapt for EAL optional atomics API changes Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 5/6] bpf: " Tyler Retzlaff
2023-08-22 21:00 ` [PATCH v6 6/6] devtools: forbid new direct use of GCC atomic builtins Tyler Retzlaff
2023-08-29 15:57 ` [PATCH v6 0/6] rte atomics API for optional stdatomic Tyler Retzlaff
2023-09-29 14:09 ` David Marchand
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).