From: Thomas Monjalon
To: Tyler Retzlaff
Cc: dev@dpdk.org, techboard@dpdk.org, Bruce Richardson, Honnappa Nagarahalli, Ruifeng Wang, Jerin Jacob, Sunil Kumar Kori, Mattias Rönnblom, Joyce Kong, David Christensen, Konstantin Ananyev, David Hunt, David Marchand
Subject: Re: [PATCH v6 1/6] eal: provide rte stdatomics optional atomics API
Date: Thu, 28 Sep 2023 10:06:11 +0200
Message-ID: <5908573.LM0AJKV5NW@thomas>
In-Reply-To: <1692738045-32363-2-git-send-email-roretzla@linux.microsoft.com>
References: <1691717521-1025-1-git-send-email-roretzla@linux.microsoft.com> <1692738045-32363-1-git-send-email-roretzla@linux.microsoft.com> <1692738045-32363-2-git-send-email-roretzla@linux.microsoft.com>
List-Id: DPDK patches and discussions
22/08/2023 23:00, Tyler Retzlaff:
> --- a/lib/eal/include/generic/rte_rwlock.h
> +++ b/lib/eal/include/generic/rte_rwlock.h
> @@ -32,6 +32,7 @@
>  #include
>  #include
>  #include
> +#include <rte_stdatomic.h>

I'm not sure about adding the include in patch 1 if it is not used here.

> --- /dev/null
> +++ b/lib/eal/include/rte_stdatomic.h
> @@ -0,0 +1,198 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2023 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_STDATOMIC_H_
> +#define _RTE_STDATOMIC_H_
> +
> +#include <assert.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#ifdef RTE_ENABLE_STDATOMIC
> +#ifdef __STDC_NO_ATOMICS__
> +#error enable_stdatomics=true but atomics not supported by toolchain
> +#endif
> +
> +#include <stdatomic.h>
> +
> +/* RTE_ATOMIC(type) is provided for use as a type specifier
> + * permitting designation of an rte atomic type.
> + */
> +#define RTE_ATOMIC(type) _Atomic(type)
> +
> +/* __rte_atomic is provided for type qualification permitting
> + * designation of an rte atomic qualified type-name.

Sorry, I don't understand this comment.

> + */
> +#define __rte_atomic _Atomic
> +
> +/* The memory order is an enumerated type in C11.
> + */
> +typedef memory_order rte_memory_order;
> +
> +#define rte_memory_order_relaxed memory_order_relaxed
> +#ifdef __ATOMIC_RELAXED
> +static_assert(rte_memory_order_relaxed == __ATOMIC_RELAXED,
> +	"rte_memory_order_relaxed == __ATOMIC_RELAXED");

Not sure about using static_assert or RTE_BUILD_BUG_ON.

> +#endif
> +
> +#define rte_memory_order_consume memory_order_consume
> +#ifdef __ATOMIC_CONSUME
> +static_assert(rte_memory_order_consume == __ATOMIC_CONSUME,
> +	"rte_memory_order_consume == __ATOMIC_CONSUME");
> +#endif
> +
> +#define rte_memory_order_acquire memory_order_acquire
> +#ifdef __ATOMIC_ACQUIRE
> +static_assert(rte_memory_order_acquire == __ATOMIC_ACQUIRE,
> +	"rte_memory_order_acquire == __ATOMIC_ACQUIRE");
> +#endif
> +
> +#define rte_memory_order_release memory_order_release
> +#ifdef __ATOMIC_RELEASE
> +static_assert(rte_memory_order_release == __ATOMIC_RELEASE,
> +	"rte_memory_order_release == __ATOMIC_RELEASE");
> +#endif
> +
> +#define rte_memory_order_acq_rel memory_order_acq_rel
> +#ifdef __ATOMIC_ACQ_REL
> +static_assert(rte_memory_order_acq_rel == __ATOMIC_ACQ_REL,
> +	"rte_memory_order_acq_rel == __ATOMIC_ACQ_REL");
> +#endif
> +
> +#define rte_memory_order_seq_cst memory_order_seq_cst
> +#ifdef __ATOMIC_SEQ_CST
> +static_assert(rte_memory_order_seq_cst == __ATOMIC_SEQ_CST,
> +	"rte_memory_order_seq_cst == __ATOMIC_SEQ_CST");
> +#endif
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> +	atomic_load_explicit(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> +	atomic_store_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> +	atomic_exchange_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder) \
> +	atomic_compare_exchange_strong_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder) \
> +	atomic_compare_exchange_weak_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> +	atomic_fetch_add_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> +	atomic_fetch_sub_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> +	atomic_fetch_and_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> +	atomic_fetch_xor_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> +	atomic_fetch_or_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> +	atomic_fetch_nand_explicit(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> +	atomic_flag_test_and_set_explicit(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> +	atomic_flag_clear_explicit(ptr, memorder)
> +
> +/* We provide internal macro here to allow conditional expansion
> + * in the body of the per-arch rte_atomic_thread_fence inline functions.
> + */
> +#define __rte_atomic_thread_fence(memorder) \
> +	atomic_thread_fence(memorder)
> +
> +#else

Better to add some context in a comment on this "else":
/* !RTE_ENABLE_STDATOMIC */

> +
> +/* RTE_ATOMIC(type) is provided for use as a type specifier
> + * permitting designation of an rte atomic type.
> + */

The comment should say it has no effect. Or no comment at all for this part.

> +#define RTE_ATOMIC(type) type
> +
> +/* __rte_atomic is provided for type qualification permitting
> + * designation of an rte atomic qualified type-name.
> + */
> +#define __rte_atomic
> +
> +/* The memory order is an integer type in GCC built-ins,
> + * not an enumerated type like in C11.
> + */
> +typedef int rte_memory_order;
> +
> +#define rte_memory_order_relaxed __ATOMIC_RELAXED
> +#define rte_memory_order_consume __ATOMIC_CONSUME
> +#define rte_memory_order_acquire __ATOMIC_ACQUIRE
> +#define rte_memory_order_release __ATOMIC_RELEASE
> +#define rte_memory_order_acq_rel __ATOMIC_ACQ_REL
> +#define rte_memory_order_seq_cst __ATOMIC_SEQ_CST
> +
> +#define rte_atomic_load_explicit(ptr, memorder) \
> +	__atomic_load_n(ptr, memorder)
> +
> +#define rte_atomic_store_explicit(ptr, val, memorder) \
> +	__atomic_store_n(ptr, val, memorder)
> +
> +#define rte_atomic_exchange_explicit(ptr, val, memorder) \
> +	__atomic_exchange_n(ptr, val, memorder)
> +
> +#define rte_atomic_compare_exchange_strong_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder) \
> +	__atomic_compare_exchange_n( \
> +	ptr, expected, desired, 0, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_compare_exchange_weak_explicit( \
> +	ptr, expected, desired, succ_memorder, fail_memorder) \
> +	__atomic_compare_exchange_n( \
> +	ptr, expected, desired, 1, succ_memorder, fail_memorder)
> +
> +#define rte_atomic_fetch_add_explicit(ptr, val, memorder) \
> +	__atomic_fetch_add(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_sub_explicit(ptr, val, memorder) \
> +	__atomic_fetch_sub(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_and_explicit(ptr, val, memorder) \
> +	__atomic_fetch_and(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_xor_explicit(ptr, val, memorder) \
> +	__atomic_fetch_xor(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_or_explicit(ptr, val, memorder) \
> +	__atomic_fetch_or(ptr, val, memorder)
> +
> +#define rte_atomic_fetch_nand_explicit(ptr, val, memorder) \
> +	__atomic_fetch_nand(ptr, val, memorder)
> +
> +#define rte_atomic_flag_test_and_set_explicit(ptr, memorder) \
> +	__atomic_test_and_set(ptr, memorder)
> +
> +#define rte_atomic_flag_clear_explicit(ptr, memorder) \
> +	__atomic_clear(ptr, memorder)
> +
> +/* We provide internal macro here to allow conditional expansion
> + * in the body of the per-arch rte_atomic_thread_fence inline functions.
> + */
> +#define __rte_atomic_thread_fence(memorder) \
> +	__atomic_thread_fence(memorder)
> +
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_STDATOMIC_H_ */