From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Akhil Goyal <gakhil@marvell.com>,
 Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Bruce Richardson <bruce.richardson@intel.com>,
 Chenbo Xia <chenbo.xia@intel.com>, Ciara Power <ciara.power@intel.com>,
 David Christensen <drc@linux.vnet.ibm.com>,
 David Hunt <david.hunt@intel.com>,
 Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
 Dmitry Malloy <dmitrym@microsoft.com>,
 Elena Agostini <eagostini@nvidia.com>,
 Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Fan Zhang <fanzhang.oss@gmail.com>, Ferruh Yigit <ferruh.yigit@amd.com>,
 Harman Kalra <hkalra@marvell.com>,
 Harry van Haaren <harry.van.haaren@intel.com>,
 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
 Jerin Jacob <jerinj@marvell.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>,
 Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>,
 Nicolas Chautru <nicolas.chautru@intel.com>,
 Olivier Matz <olivier.matz@6wind.com>, Ori Kam <orika@nvidia.com>,
 Pallavi Kadam <pallavi.kadam@intel.com>,
 Pavan Nikhilesh <pbhagavatula@marvell.com>,
 Reshma Pattan <reshma.pattan@intel.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>,
 Shijith Thotton <sthotton@marvell.com>,
 Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Thomas Monjalon <thomas@monjalon.net>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
 Yipeng Wang <yipeng1.wang@intel.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH 09/21] mbuf: use rte optional stdatomic API
Date: Mon, 16 Oct 2023 16:08:53 -0700
Message-Id: <1697497745-20664-10-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of the GCC builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx optional stdatomic API.
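
As an illustrative sketch of the conversion pattern (the struct and
function names below are hypothetical, not taken from rte_mbuf, and
<rte_stdatomic.h> is assumed to provide the rte_atomic_*() wrappers):

#include <stdint.h>
#include <rte_stdatomic.h>

struct foo {
	RTE_ATOMIC(uint16_t) refcnt; /* atomic-qualified when stdatomic is enabled */
};

static inline uint16_t
foo_refcnt_read(const struct foo *f)
{
	/* was: __atomic_load_n(&f->refcnt, __ATOMIC_RELAXED) */
	return rte_atomic_load_explicit(&f->refcnt, rte_memory_order_relaxed);
}

static inline uint16_t
foo_refcnt_update(struct foo *f, int16_t value)
{
	/* was: __atomic_fetch_add(&f->refcnt, value, __ATOMIC_ACQ_REL) + value */
	return rte_atomic_fetch_add_explicit(&f->refcnt, value,
			rte_memory_order_acq_rel) + value;
}

The store helpers map the same way: __atomic_store_n() with
__ATOMIC_RELAXED becomes rte_atomic_store_explicit() with
rte_memory_order_relaxed.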

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/mbuf/rte_mbuf.h      | 20 ++++++++++----------
 lib/mbuf/rte_mbuf_core.h |  4 ++--
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 913c459..b8ab477 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -361,7 +361,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_refcnt_read(const struct rte_mbuf *m)
 {
-	return __atomic_load_n(&m->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&m->refcnt, rte_memory_order_relaxed);
 }
 
 /**
@@ -374,15 +374,15 @@ struct rte_pktmbuf_pool_private {
 static inline void
 rte_mbuf_refcnt_set(struct rte_mbuf *m, uint16_t new_value)
 {
-	__atomic_store_n(&m->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&m->refcnt, new_value, rte_memory_order_relaxed);
 }
 
 /* internal */
 static inline uint16_t
 __rte_mbuf_refcnt_update(struct rte_mbuf *m, int16_t value)
 {
-	return __atomic_fetch_add(&m->refcnt, value,
-				 __ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&m->refcnt, value,
+				 rte_memory_order_acq_rel) + value;
 }
 
 /**
@@ -463,7 +463,7 @@ struct rte_pktmbuf_pool_private {
 static inline uint16_t
 rte_mbuf_ext_refcnt_read(const struct rte_mbuf_ext_shared_info *shinfo)
 {
-	return __atomic_load_n(&shinfo->refcnt, __ATOMIC_RELAXED);
+	return rte_atomic_load_explicit(&shinfo->refcnt, rte_memory_order_relaxed);
 }
 
 /**
@@ -478,7 +478,7 @@ struct rte_pktmbuf_pool_private {
 rte_mbuf_ext_refcnt_set(struct rte_mbuf_ext_shared_info *shinfo,
 	uint16_t new_value)
 {
-	__atomic_store_n(&shinfo->refcnt, new_value, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&shinfo->refcnt, new_value, rte_memory_order_relaxed);
 }
 
 /**
@@ -502,8 +502,8 @@ struct rte_pktmbuf_pool_private {
 		return (uint16_t)value;
 	}
 
-	return __atomic_fetch_add(&shinfo->refcnt, value,
-				 __ATOMIC_ACQ_REL) + value;
+	return rte_atomic_fetch_add_explicit(&shinfo->refcnt, value,
+				 rte_memory_order_acq_rel) + value;
 }
 
 /** Mbuf prefetch */
@@ -1315,8 +1315,8 @@ static inline int __rte_pktmbuf_pinned_extbuf_decref(struct rte_mbuf *m)
 	 * Direct usage of add primitive to avoid
 	 * duplication of comparing with one.
 	 */
-	if (likely(__atomic_fetch_add(&shinfo->refcnt, -1,
-				     __ATOMIC_ACQ_REL) - 1))
+	if (likely(rte_atomic_fetch_add_explicit(&shinfo->refcnt, -1,
+				     rte_memory_order_acq_rel) - 1))
 		return 1;
 
 	/* Reinitialize counter before mbuf freeing. */
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index e9bc0d1..bf761f8 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -497,7 +497,7 @@ struct rte_mbuf {
 	 * rte_mbuf_refcnt_set(). The functionality of these functions (atomic,
 	 * or non-atomic) is controlled by the RTE_MBUF_REFCNT_ATOMIC flag.
 	 */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;
 
 	/**
 	 * Number of segments. Only valid for the first segment of an mbuf
@@ -674,7 +674,7 @@ struct rte_mbuf {
 struct rte_mbuf_ext_shared_info {
 	rte_mbuf_extbuf_free_callback_t free_cb; /**< Free callback function */
 	void *fcb_opaque;                        /**< Free callback argument */
-	uint16_t refcnt;
+	RTE_ATOMIC(uint16_t) refcnt;
 };
 
 /** Maximum number of nb_segs allowed. */
-- 
1.8.3.1