From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id D8C1B43FA6;
	Mon,  6 May 2024 20:01:35 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id 2F8D540ED2;
	Mon,  6 May 2024 19:59:08 +0200 (CEST)
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by mails.dpdk.org (Postfix) with ESMTP id 0E0A940608
 for <dev@dpdk.org>; Mon,  6 May 2024 19:58:34 +0200 (CEST)
Received: by linux.microsoft.com (Postfix, from userid 1086)
 id 0B14620B2CA7; Mon,  6 May 2024 10:58:27 -0700 (PDT)
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 0B14620B2CA7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
 s=default; t=1715018309;
 bh=J04whIFp/m3NwcHcfmZVAAuRQcUSV/sdSJk8oqxXPo4=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=gNBgN+wR3ciMY6MgpDacENik2WPcNwmKRlZdrSBy5puHMr6BxPa6PN9ScjwGM0ZOS
 kYltryym5xzgc+nwcx4yT7vpUQ3OZt0su93nfGJJQBoqiwxQWFtSgnuGhflri7ozsy
 avZJ9gxH9cQRezaeSxi20/Q6VCQ/4OlrOm87+QD0=
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Mattias Rönnblom <mattias.ronnblom@ericsson.com>,
 Morten Brørup <mb@smartsharesystems.com>,
 Abdullah Sevincer <abdullah.sevincer@intel.com>,
 Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Alok Prasad <palok@marvell.com>,
 Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Anoob Joseph <anoobj@marvell.com>,
 Bruce Richardson <bruce.richardson@intel.com>,
 Byron Marohn <byron.marohn@intel.com>, Chenbo Xia <chenbox@nvidia.com>,
 Chengwen Feng <fengchengwen@huawei.com>,
 Ciara Loftus <ciara.loftus@intel.com>, Ciara Power <ciara.power@intel.com>,
 Dariusz Sosnowski <dsosnowski@nvidia.com>,
 David Hunt <david.hunt@intel.com>,
 Devendra Singh Rawat <dsinghrawat@marvell.com>,
 Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Guoyang Zhou <zhouguoyang@huawei.com>, Harman Kalra <hkalra@marvell.com>,
 Harry van Haaren <harry.van.haaren@intel.com>,
 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
 Jakub Grajciar <jgrajcia@cisco.com>, Jerin Jacob <jerinj@marvell.com>,
 Jeroen de Borst <jeroendb@google.com>, Jian Wang <jianwang@trustnetic.com>,
 Jiawen Wu <jiawenwu@trustnetic.com>, Jie Hai <haijie1@huawei.com>,
 Jingjing Wu <jingjing.wu@intel.com>,
 Joshua Washington <joshwash@google.com>, Joyce Kong <joyce.kong@arm.com>,
 Junfeng Guo <junfeng.guo@intel.com>, Kevin Laatz <kevin.laatz@intel.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Liang Ma <liangma@liangbit.com>, Long Li <longli@microsoft.com>,
 Maciej Czekaj <mczekaj@marvell.com>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>,
 Nicolas Chautru <nicolas.chautru@intel.com>, Ori Kam <orika@nvidia.com>,
 Pavan Nikhilesh <pbhagavatula@marvell.com>,
 Peter Mccarthy <peter.mccarthy@intel.com>,
 Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
 Reshma Pattan <reshma.pattan@intel.com>, Rosen Xu <rosen.xu@intel.com>,
 Ruifeng Wang <ruifeng.wang@arm.com>, Rushil Gupta <rushilg@google.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>,
 Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
 Somnath Kotur <somnath.kotur@broadcom.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>,
 Tetsuya Mukawa <mtetsuyah@gmail.com>,
 Vamsi Attunuru <vattunuru@marvell.com>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
 Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
 Yipeng Wang <yipeng1.wang@intel.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>,
 Ziyang Xuan <xuanziyang2@huawei.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v5 24/45] event/octeontx: use rte stdatomic API
Date: Mon,  6 May 2024 10:58:05 -0700
Message-Id: <1715018306-13741-25-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1715018306-13741-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1715018306-13741-1-git-send-email-roretzla@linux.microsoft.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Replace the use of gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx calls from the optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/event/octeontx/timvf_evdev.h  |  8 ++++----
 drivers/event/octeontx/timvf_worker.h | 36 +++++++++++++++++------------------
 2 files changed, 22 insertions(+), 22 deletions(-)
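
As a quick illustration of the conversion pattern applied throughout this
patch (the struct and field names below are invented for the example and are
not taken from the timvf driver; it is a minimal sketch, not driver code):

#include <stdint.h>

#include <rte_stdatomic.h>

/* before: plain uint32_t accessed with __atomic_* builtins */
struct example {
	RTE_ATOMIC(uint32_t) counter;
};

static inline uint32_t
example_inc(struct example *e)
{
	/* was: __atomic_fetch_add(&e->counter, 1, __ATOMIC_RELAXED) */
	return rte_atomic_fetch_add_explicit(&e->counter, 1,
			rte_memory_order_relaxed);
}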

diff --git a/drivers/event/octeontx/timvf_evdev.h b/drivers/event/octeontx/timvf_evdev.h
index e7a63e4..3a2dc47 100644
--- a/drivers/event/octeontx/timvf_evdev.h
+++ b/drivers/event/octeontx/timvf_evdev.h
@@ -126,15 +126,15 @@ enum timvf_clk_src {
 struct __rte_aligned(8) tim_mem_bucket {
 	uint64_t first_chunk;
 	union {
-		uint64_t w1;
+		RTE_ATOMIC(uint64_t) w1;
 		struct {
-			uint32_t nb_entry;
+			RTE_ATOMIC(uint32_t) nb_entry;
 			uint8_t sbt:1;
 			uint8_t hbt:1;
 			uint8_t bsk:1;
 			uint8_t rsvd:5;
-			uint8_t lock;
-			int16_t chunk_remainder;
+			RTE_ATOMIC(uint8_t) lock;
+			RTE_ATOMIC(int16_t) chunk_remainder;
 		};
 	};
 	uint64_t current_chunk;
diff --git a/drivers/event/octeontx/timvf_worker.h b/drivers/event/octeontx/timvf_worker.h
index e4b923e..de9f1b0 100644
--- a/drivers/event/octeontx/timvf_worker.h
+++ b/drivers/event/octeontx/timvf_worker.h
@@ -19,22 +19,22 @@
 static inline int16_t
 timr_bkt_get_rem(struct tim_mem_bucket *bktp)
 {
-	return __atomic_load_n(&bktp->chunk_remainder,
-			__ATOMIC_ACQUIRE);
+	return rte_atomic_load_explicit(&bktp->chunk_remainder,
+			rte_memory_order_acquire);
 }
 
 static inline void
 timr_bkt_set_rem(struct tim_mem_bucket *bktp, uint16_t v)
 {
-	__atomic_store_n(&bktp->chunk_remainder, v,
-			__ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&bktp->chunk_remainder, v,
+			rte_memory_order_release);
 }
 
 static inline void
 timr_bkt_sub_rem(struct tim_mem_bucket *bktp, uint16_t v)
 {
-	__atomic_fetch_sub(&bktp->chunk_remainder, v,
-			__ATOMIC_RELEASE);
+	rte_atomic_fetch_sub_explicit(&bktp->chunk_remainder, v,
+			rte_memory_order_release);
 }
 
 static inline uint8_t
@@ -47,14 +47,14 @@
 timr_bkt_set_sbt(struct tim_mem_bucket *bktp)
 {
 	const uint64_t v = TIM_BUCKET_W1_M_SBT << TIM_BUCKET_W1_S_SBT;
-	return __atomic_fetch_or(&bktp->w1, v, __ATOMIC_ACQ_REL);
+	return rte_atomic_fetch_or_explicit(&bktp->w1, v, rte_memory_order_acq_rel);
 }
 
 static inline uint64_t
 timr_bkt_clr_sbt(struct tim_mem_bucket *bktp)
 {
 	const uint64_t v = ~(TIM_BUCKET_W1_M_SBT << TIM_BUCKET_W1_S_SBT);
-	return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+	return rte_atomic_fetch_and_explicit(&bktp->w1, v, rte_memory_order_acq_rel);
 }
 
 static inline uint8_t
@@ -81,34 +81,34 @@
 {
 	/*Clear everything except lock. */
 	const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
-	return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
+	return rte_atomic_fetch_and_explicit(&bktp->w1, v, rte_memory_order_acq_rel);
 }
 
 static inline uint64_t
 timr_bkt_fetch_sema_lock(struct tim_mem_bucket *bktp)
 {
-	return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
-			__ATOMIC_ACQ_REL);
+	return rte_atomic_fetch_add_explicit(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
+			rte_memory_order_acq_rel);
 }
 
 static inline uint64_t
 timr_bkt_fetch_sema(struct tim_mem_bucket *bktp)
 {
-	return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA,
-			__ATOMIC_RELAXED);
+	return rte_atomic_fetch_add_explicit(&bktp->w1, TIM_BUCKET_SEMA,
+			rte_memory_order_relaxed);
 }
 
 static inline uint64_t
 timr_bkt_inc_lock(struct tim_mem_bucket *bktp)
 {
 	const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
-	return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQ_REL);
+	return rte_atomic_fetch_add_explicit(&bktp->w1, v, rte_memory_order_acq_rel);
 }
 
 static inline void
 timr_bkt_dec_lock(struct tim_mem_bucket *bktp)
 {
-	__atomic_fetch_add(&bktp->lock, 0xff, __ATOMIC_ACQ_REL);
+	rte_atomic_fetch_add_explicit(&bktp->lock, 0xff, rte_memory_order_acq_rel);
 }
 
 static inline uint32_t
@@ -121,13 +121,13 @@
 static inline void
 timr_bkt_inc_nent(struct tim_mem_bucket *bktp)
 {
-	__atomic_fetch_add(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
+	rte_atomic_fetch_add_explicit(&bktp->nb_entry, 1, rte_memory_order_relaxed);
 }
 
 static inline void
 timr_bkt_add_nent(struct tim_mem_bucket *bktp, uint32_t v)
 {
-	__atomic_fetch_add(&bktp->nb_entry, v, __ATOMIC_RELAXED);
+	rte_atomic_fetch_add_explicit(&bktp->nb_entry, v, rte_memory_order_relaxed);
 }
 
 static inline uint64_t
@@ -135,7 +135,7 @@
 {
 	const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
 			TIM_BUCKET_W1_S_NUM_ENTRIES);
-	return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL) & v;
+	return rte_atomic_fetch_and_explicit(&bktp->w1, v, rte_memory_order_acq_rel) & v;
 }
 
 static inline struct tim_mem_entry *
-- 
1.8.3.1