From: Phil Yang <phil.yang@arm.com>
To: thomas@monjalon.net, dev@dpdk.org
Cc: bruce.richardson@intel.com, ferruh.yigit@intel.com, hemant.agrawal@nxp.com,
	honnappa.nagarahalli@arm.com, jerinj@marvell.com, ktraynor@redhat.com,
	konstantin.ananyev@intel.com, maxime.coquelin@redhat.com,
	olivier.matz@6wind.com, stephen@networkplumber.org, mb@smartsharesystems.com,
	mattias.ronnblom@ericsson.com, harry.van.haaren@intel.com,
	erik.g.carrillo@intel.com, phil.yang@arm.com, nd@arm.com
Date: Tue, 12 May 2020 16:03:06 +0800
Message-Id: <1589270586-4480-5-git-send-email-phil.yang@arm.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1589270586-4480-1-git-send-email-phil.yang@arm.com>
References: <1584407863-774-1-git-send-email-phil.yang@arm.com>
 <1589270586-4480-1-git-send-email-phil.yang@arm.com>
Subject: [dpdk-dev] [PATCH v4 4/4] eal/atomic: add wrapper for c11 atomics

Wrap up the compiler's c11 atomic built-ins with an explicit memory
ordering parameter.

Signed-off-by: Phil Yang <phil.yang@arm.com>
---
 lib/librte_eal/include/generic/rte_atomic_c11.h | 139 ++++++++++++++++++++++++
 lib/librte_eal/include/meson.build              |   1 +
 2 files changed, 140 insertions(+)
 create mode 100644 lib/librte_eal/include/generic/rte_atomic_c11.h

diff --git a/lib/librte_eal/include/generic/rte_atomic_c11.h b/lib/librte_eal/include/generic/rte_atomic_c11.h
new file mode 100644
index 0000000..20490f4
--- /dev/null
+++ b/lib/librte_eal/include/generic/rte_atomic_c11.h
@@ -0,0 +1,139 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Arm Limited
+ */
+
+#ifndef _RTE_ATOMIC_C11_H_
+#define _RTE_ATOMIC_C11_H_
+
+#include
+
+/**
+ * @file
+ * c11 atomic operations
+ *
+ * This file wraps up the compiler (GCC) c11 atomic built-ins.
+ * https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html
+ */
+
+#define memory_order_relaxed __ATOMIC_RELAXED
+#define memory_order_consume __ATOMIC_CONSUME
+#define memory_order_acquire __ATOMIC_ACQUIRE
+#define memory_order_release __ATOMIC_RELEASE
+#define memory_order_acq_rel __ATOMIC_ACQ_REL
+#define memory_order_seq_cst __ATOMIC_SEQ_CST
+
+/* Generic atomic load.
+ * It returns the contents of *PTR.
+ *
+ * The valid memory order variants are:
+ * memory_order_relaxed
+ * memory_order_consume
+ * memory_order_acquire
+ * memory_order_seq_cst
+ */
+#define rte_atomic_load(PTR, MO)			\
+	(__extension__ ({				\
+		typeof(PTR) _ptr = (PTR);		\
+		typeof(*_ptr) _ret;			\
+		__atomic_load(_ptr, &_ret, (MO));	\
+		_ret;					\
+	}))
+
+/* Generic atomic store.
+ * It stores the value of VAL into *PTR.
+ *
+ * The valid memory order variants are:
+ * memory_order_relaxed
+ * memory_order_release
+ * memory_order_seq_cst
+ */
+#define rte_atomic_store(PTR, VAL, MO)			\
+	(__extension__ ({				\
+		typeof(PTR) _ptr = (PTR);		\
+		typeof(*_ptr) _val = (VAL);		\
+		__atomic_store(_ptr, &_val, (MO));	\
+	}))
+
+/* Generic atomic exchange.
+ * It stores the value of VAL into *PTR.
+ * It returns the original value of *PTR.
+ *
+ * The valid memory order variants are:
+ * memory_order_relaxed
+ * memory_order_acquire
+ * memory_order_release
+ * memory_order_acq_rel
+ * memory_order_seq_cst
+ */
+#define rte_atomic_exchange(PTR, VAL, MO)			\
+	(__extension__ ({					\
+		typeof(PTR) _ptr = (PTR);			\
+		typeof(*_ptr) _val = (VAL);			\
+		typeof(*_ptr) _ret;				\
+		__atomic_exchange(_ptr, &_val, &_ret, (MO));	\
+		_ret;						\
+	}))
+
+/* Generic atomic compare and exchange.
+ * It compares the contents of *PTR with the contents of *EXP.
+ * If equal, the operation is a read-modify-write operation that
+ * writes DES into *PTR.
+ * If they are not equal, the operation is a read and the current
+ * contents of *PTR are written into *EXP.
+ *
+ * The weak compare_exchange may fail spuriously; the strong
+ * variant never fails spuriously.
+ *
+ * If DES is written into *PTR then true is returned and memory is
+ * affected according to the memory order specified by SUC_MO.
+ * There are no restrictions on what memory order can be used here.
+ *
+ * Otherwise, false is returned and memory is affected according to
+ * FAIL_MO. This memory order cannot be memory_order_release nor
+ * memory_order_acq_rel. It also cannot be a stronger order than that
+ * specified by SUC_MO.
+ */
+#define rte_atomic_compare_exchange_weak(PTR, EXP, DES, SUC_MO, FAIL_MO) \
+	(__extension__ ({						\
+		typeof(PTR) _ptr = (PTR);				\
+		typeof(*_ptr) _des = (DES);				\
+		__atomic_compare_exchange(_ptr, (EXP), &_des, 1,	\
+					  (SUC_MO), (FAIL_MO));		\
+	}))
+
+#define rte_atomic_compare_exchange_strong(PTR, EXP, DES, SUC_MO, FAIL_MO) \
+	(__extension__ ({						\
+		typeof(PTR) _ptr = (PTR);				\
+		typeof(*_ptr) _des = (DES);				\
+		__atomic_compare_exchange(_ptr, (EXP), &_des, 0,	\
+					  (SUC_MO), (FAIL_MO));		\
+	}))
+
+#define rte_atomic_fetch_add(PTR, VAL, MO)	\
+	__atomic_fetch_add((PTR), (VAL), (MO))
+#define rte_atomic_fetch_sub(PTR, VAL, MO)	\
+	__atomic_fetch_sub((PTR), (VAL), (MO))
+#define rte_atomic_fetch_or(PTR, VAL, MO)	\
+	__atomic_fetch_or((PTR), (VAL), (MO))
+#define rte_atomic_fetch_xor(PTR, VAL, MO)	\
+	__atomic_fetch_xor((PTR), (VAL), (MO))
+#define rte_atomic_fetch_and(PTR, VAL, MO)	\
+	__atomic_fetch_and((PTR), (VAL), (MO))
+
+#define rte_atomic_add_fetch(PTR, VAL, MO)	\
+	__atomic_add_fetch((PTR), (VAL), (MO))
+#define rte_atomic_sub_fetch(PTR, VAL, MO)	\
+	__atomic_sub_fetch((PTR), (VAL), (MO))
+#define rte_atomic_or_fetch(PTR, VAL, MO)	\
+	__atomic_or_fetch((PTR), (VAL), (MO))
+#define rte_atomic_xor_fetch(PTR, VAL, MO)	\
+	__atomic_xor_fetch((PTR), (VAL), (MO))
+#define rte_atomic_and_fetch(PTR, VAL, MO)	\
+	__atomic_and_fetch((PTR), (VAL), (MO))
+
+/* Synchronization fence between threads based on
+ * the specified memory order.
+ */
+#define rte_atomic_thread_fence(MO) __atomic_thread_fence((MO))
+
+#endif /* _RTE_ATOMIC_C11_H_ */
diff --git a/lib/librte_eal/include/meson.build b/lib/librte_eal/include/meson.build
index bc73ec2..dac1aac 100644
--- a/lib/librte_eal/include/meson.build
+++ b/lib/librte_eal/include/meson.build
@@ -51,6 +51,7 @@ headers += files(
 # special case install the generic headers, since they go in a subdir
 generic_headers = files(
 	'generic/rte_atomic.h',
+	'generic/rte_atomic_c11.h',
 	'generic/rte_byteorder.h',
 	'generic/rte_cpuflags.h',
 	'generic/rte_cycles.h',
-- 
2.7.4
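
For illustration only (not part of the patch): a minimal sketch of how the
wrappers above might be used, assuming rte_atomic_c11.h is reachable on the
include path. The names data, ready, slot, producer(), consumer() and
try_claim_slot() are hypothetical.

#include <stdint.h>
#include <stdbool.h>
#include <rte_atomic_c11.h>

static uint32_t data;
static bool ready;
static uint32_t slot;

/* Publish a value: plain store of the payload, then a release store of
 * the flag so the payload becomes visible before the flag does.
 */
static void
producer(uint32_t v)
{
	rte_atomic_store(&data, v, memory_order_relaxed);
	rte_atomic_store(&ready, true, memory_order_release);
}

/* Consume it: the acquire load of the flag pairs with the release store
 * above, so the payload read is ordered after the flag check.
 */
static bool
consumer(uint32_t *out)
{
	if (!rte_atomic_load(&ready, memory_order_acquire))
		return false;
	*out = rte_atomic_load(&data, memory_order_relaxed);
	return true;
}

/* Claim a slot with a CAS loop; the weak variant may fail spuriously,
 * so retry while the slot still holds the expected value.
 */
static bool
try_claim_slot(uint32_t expected_free, uint32_t owner_id)
{
	uint32_t exp = expected_free;

	while (!rte_atomic_compare_exchange_weak(&slot, &exp, owner_id,
						 memory_order_acq_rel,
						 memory_order_relaxed)) {
		if (exp != expected_free)
			return false; /* already claimed by another thread */
	}
	return true;
}

A reader that observes ready == true in consumer() therefore also observes
the value written to data in producer(); in try_claim_slot(), a failed CAS
leaves the current slot value in exp, so the loop retries only on spurious
failures.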