From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 8E2BD43FA6;
	Mon,  6 May 2024 20:00:21 +0200 (CEST)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id BFD6140E0F;
	Mon,  6 May 2024 19:58:52 +0200 (CEST)
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by mails.dpdk.org (Postfix) with ESMTP id 051224067B
 for <dev@dpdk.org>; Mon,  6 May 2024 19:58:32 +0200 (CEST)
Received: by linux.microsoft.com (Postfix, from userid 1086)
 id 75C7D20B2C9B; Mon,  6 May 2024 10:58:27 -0700 (PDT)
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 75C7D20B2C9B
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
 s=default; t=1715018308;
 bh=+WNBJxJYbi4X53+Lxh22UF0Il5v4ubZL4b7mc9gj3nA=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=Rr+YIaGTyxHan+E8fWRe6uydIxDL4YbHJ1LIMDzGzCMrrxmHoHb3d39VPmSgl/ZM1
 pErWmbzKJkdEgBsTSdYeROPhc5JhBab0axFtlzp0zDKiQ0J2f84gVzty93UUeQdlIw
 ojtGTolJmv9Hb2536x+j3HL0aaeewPL9fmJixhfU=
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: =?UTF-8?q?Mattias=20R=C3=B6nnblom?= <mattias.ronnblom@ericsson.com>,
 =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>,
 Abdullah Sevincer <abdullah.sevincer@intel.com>,
 Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Alok Prasad <palok@marvell.com>,
 Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Anoob Joseph <anoobj@marvell.com>,
 Bruce Richardson <bruce.richardson@intel.com>,
 Byron Marohn <byron.marohn@intel.com>, Chenbo Xia <chenbox@nvidia.com>,
 Chengwen Feng <fengchengwen@huawei.com>,
 Ciara Loftus <ciara.loftus@intel.com>, Ciara Power <ciara.power@intel.com>,
 Dariusz Sosnowski <dsosnowski@nvidia.com>,
 David Hunt <david.hunt@intel.com>,
 Devendra Singh Rawat <dsinghrawat@marvell.com>,
 Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Guoyang Zhou <zhouguoyang@huawei.com>, Harman Kalra <hkalra@marvell.com>,
 Harry van Haaren <harry.van.haaren@intel.com>,
 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
 Jakub Grajciar <jgrajcia@cisco.com>, Jerin Jacob <jerinj@marvell.com>,
 Jeroen de Borst <jeroendb@google.com>, Jian Wang <jianwang@trustnetic.com>,
 Jiawen Wu <jiawenwu@trustnetic.com>, Jie Hai <haijie1@huawei.com>,
 Jingjing Wu <jingjing.wu@intel.com>,
 Joshua Washington <joshwash@google.com>, Joyce Kong <joyce.kong@arm.com>,
 Junfeng Guo <junfeng.guo@intel.com>, Kevin Laatz <kevin.laatz@intel.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Liang Ma <liangma@liangbit.com>, Long Li <longli@microsoft.com>,
 Maciej Czekaj <mczekaj@marvell.com>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>,
 Nicolas Chautru <nicolas.chautru@intel.com>, Ori Kam <orika@nvidia.com>,
 Pavan Nikhilesh <pbhagavatula@marvell.com>,
 Peter Mccarthy <peter.mccarthy@intel.com>,
 Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
 Reshma Pattan <reshma.pattan@intel.com>, Rosen Xu <rosen.xu@intel.com>,
 Ruifeng Wang <ruifeng.wang@arm.com>, Rushil Gupta <rushilg@google.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>,
 Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
 Somnath Kotur <somnath.kotur@broadcom.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>,
 Tetsuya Mukawa <mtetsuyah@gmail.com>,
 Vamsi Attunuru <vattunuru@marvell.com>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
 Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
 Yipeng Wang <yipeng1.wang@intel.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>,
 Ziyang Xuan <xuanziyang2@huawei.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH v5 16/45] net/virtio: use rte stdatomic API
Date: Mon,  6 May 2024 10:57:57 -0700
Message-Id: <1715018306-13741-17-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1715018306-13741-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
 <1715018306-13741-1-git-send-email-roretzla@linux.microsoft.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx functions from the optional rte stdatomic
API.
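
A minimal, illustrative sketch of the mapping (hypothetical variable and
function names, not taken from the diff below; assumes <rte_stdatomic.h>
provides the RTE_ATOMIC() marker and the rte_atomic_*_explicit() /
rte_memory_order_* names used throughout this series):

	#include <rte_stdatomic.h>

	/* before: plain field accessed with gcc builtins
	 *   uint16_t idx;
	 *   val = __atomic_load_n(&idx, __ATOMIC_ACQUIRE);
	 *   __atomic_store_n(&idx, val, __ATOMIC_RELEASE);
	 */

	/* after: field marked atomic, accessed via the rte stdatomic API */
	static RTE_ATOMIC(uint16_t) idx;

	static inline uint16_t
	load_idx(void)
	{
		return rte_atomic_load_explicit(&idx, rte_memory_order_acquire);
	}

	static inline void
	store_idx(uint16_t val)
	{
		rte_atomic_store_explicit(&idx, val, rte_memory_order_release);
	}

The RTE_ATOMIC() annotations on the ring fields below follow the same
pattern: the optional stdatomic path expects atomic-specified operands,
so the fields touched by these accessors are marked at their
declaration.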

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/virtio/virtio_ring.h                 |  4 +--
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 12 ++++-----
 drivers/net/virtio/virtqueue.h                   | 32 ++++++++++++------------
 3 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index e848c0b..2a25751 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -59,7 +59,7 @@ struct vring_used_elem {
 
 struct vring_used {
 	uint16_t flags;
-	uint16_t idx;
+	RTE_ATOMIC(uint16_t) idx;
 	struct vring_used_elem ring[];
 };
 
@@ -70,7 +70,7 @@ struct vring_packed_desc {
 	uint64_t addr;
 	uint32_t len;
 	uint16_t id;
-	uint16_t flags;
+	RTE_ATOMIC(uint16_t) flags;
 };
 
 #define RING_EVENT_FLAGS_ENABLE 0x0
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 4fdfe70..24e2b2c 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -948,7 +948,7 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
 static inline int
 desc_is_avail(struct vring_packed_desc *desc, bool wrap_counter)
 {
-	uint16_t flags = __atomic_load_n(&desc->flags, __ATOMIC_ACQUIRE);
+	uint16_t flags = rte_atomic_load_explicit(&desc->flags, rte_memory_order_acquire);
 
 	return wrap_counter == !!(flags & VRING_PACKED_DESC_F_AVAIL) &&
 		wrap_counter != !!(flags & VRING_PACKED_DESC_F_USED);
@@ -1037,8 +1037,8 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
 		if (vq->used_wrap_counter)
 			flags |= VRING_PACKED_DESC_F_AVAIL_USED;
 
-		__atomic_store_n(&vring->desc[vq->used_idx].flags, flags,
-				 __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&vring->desc[vq->used_idx].flags, flags,
+				 rte_memory_order_release);
 
 		vq->used_idx += n_descs;
 		if (vq->used_idx >= dev->queue_size) {
@@ -1057,9 +1057,9 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
 	struct vring *vring = &dev->vrings.split[queue_idx];
 
 	/* Consume avail ring, using used ring idx as first one */
-	while (__atomic_load_n(&vring->used->idx, __ATOMIC_RELAXED)
+	while (rte_atomic_load_explicit(&vring->used->idx, rte_memory_order_relaxed)
 	       != vring->avail->idx) {
-		avail_idx = __atomic_load_n(&vring->used->idx, __ATOMIC_RELAXED)
+		avail_idx = rte_atomic_load_explicit(&vring->used->idx, rte_memory_order_relaxed)
 			    & (vring->num - 1);
 		desc_idx = vring->avail->ring[avail_idx];
 
@@ -1070,7 +1070,7 @@ int virtio_user_stop_device(struct virtio_user_dev *dev)
 		uep->id = desc_idx;
 		uep->len = n_descs;
 
-		__atomic_fetch_add(&vring->used->idx, 1, __ATOMIC_RELAXED);
+		rte_atomic_fetch_add_explicit(&vring->used->idx, 1, rte_memory_order_relaxed);
 	}
 }
 
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 75d70f1..60211a4 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -37,7 +37,7 @@
 virtio_mb(uint8_t weak_barriers)
 {
 	if (weak_barriers)
-		rte_atomic_thread_fence(__ATOMIC_SEQ_CST);
+		rte_atomic_thread_fence(rte_memory_order_seq_cst);
 	else
 		rte_mb();
 }
@@ -46,7 +46,7 @@
 virtio_rmb(uint8_t weak_barriers)
 {
 	if (weak_barriers)
-		rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 	else
 		rte_io_rmb();
 }
@@ -55,7 +55,7 @@
 virtio_wmb(uint8_t weak_barriers)
 {
 	if (weak_barriers)
-		rte_atomic_thread_fence(__ATOMIC_RELEASE);
+		rte_atomic_thread_fence(rte_memory_order_release);
 	else
 		rte_io_wmb();
 }
@@ -67,12 +67,12 @@
 	uint16_t flags;
 
 	if (weak_barriers) {
-/* x86 prefers to using rte_io_rmb over __atomic_load_n as it reports
+/* x86 prefers using rte_io_rmb over rte_atomic_load_explicit as it reports
  * a better perf(~1.5%), which comes from the saved branch by the compiler.
  * The if and else branch are identical  on the platforms except Arm.
  */
 #ifdef RTE_ARCH_ARM
-		flags = __atomic_load_n(&dp->flags, __ATOMIC_ACQUIRE);
+		flags = rte_atomic_load_explicit(&dp->flags, rte_memory_order_acquire);
 #else
 		flags = dp->flags;
 		rte_io_rmb();
@@ -90,12 +90,12 @@
 			      uint16_t flags, uint8_t weak_barriers)
 {
 	if (weak_barriers) {
-/* x86 prefers to using rte_io_wmb over __atomic_store_n as it reports
+/* x86 prefers using rte_io_wmb over rte_atomic_store_explicit as it reports
  * a better perf(~1.5%), which comes from the saved branch by the compiler.
  * The if and else branch are identical on the platforms except Arm.
  */
 #ifdef RTE_ARCH_ARM
-		__atomic_store_n(&dp->flags, flags, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&dp->flags, flags, rte_memory_order_release);
 #else
 		rte_io_wmb();
 		dp->flags = flags;
@@ -425,7 +425,7 @@ struct virtqueue *virtqueue_alloc(struct virtio_hw *hw, uint16_t index,
 
 	if (vq->hw->weak_barriers) {
 	/**
-	 * x86 prefers to using rte_smp_rmb over __atomic_load_n as it
+	 * x86 prefers using rte_smp_rmb over rte_atomic_load_explicit as it
 	 * reports a slightly better perf, which comes from the saved
 	 * branch by the compiler.
 	 * The if and else branches are identical with the smp and io
@@ -435,8 +435,8 @@ struct virtqueue *virtqueue_alloc(struct virtio_hw *hw, uint16_t index,
 		idx = vq->vq_split.ring.used->idx;
 		rte_smp_rmb();
 #else
-		idx = __atomic_load_n(&(vq)->vq_split.ring.used->idx,
-				__ATOMIC_ACQUIRE);
+		idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx,
+				rte_memory_order_acquire);
 #endif
 	} else {
 		idx = vq->vq_split.ring.used->idx;
@@ -454,7 +454,7 @@ void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
 vq_update_avail_idx(struct virtqueue *vq)
 {
 	if (vq->hw->weak_barriers) {
-	/* x86 prefers to using rte_smp_wmb over __atomic_store_n as
+	/* x86 prefers using rte_smp_wmb over rte_atomic_store_explicit as
 	 * it reports a slightly better perf, which comes from the
 	 * saved branch by the compiler.
 	 * The if and else branches are identical with the smp and
@@ -464,8 +464,8 @@ void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
 		rte_smp_wmb();
 		vq->vq_split.ring.avail->idx = vq->vq_avail_idx;
 #else
-		__atomic_store_n(&vq->vq_split.ring.avail->idx,
-				 vq->vq_avail_idx, __ATOMIC_RELEASE);
+		rte_atomic_store_explicit(&vq->vq_split.ring.avail->idx,
+				 vq->vq_avail_idx, rte_memory_order_release);
 #endif
 	} else {
 		rte_io_wmb();
@@ -528,8 +528,8 @@ void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
 #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
 #define VIRTQUEUE_DUMP(vq) do { \
 	uint16_t used_idx, nused; \
-	used_idx = __atomic_load_n(&(vq)->vq_split.ring.used->idx, \
-				   __ATOMIC_RELAXED); \
+	used_idx = rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx, \
+				   rte_memory_order_relaxed); \
 	nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
 	if (virtio_with_packed_queue((vq)->hw)) { \
 		PMD_INIT_LOG(DEBUG, \
@@ -546,7 +546,7 @@ void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
 	  " avail.flags=0x%x; used.flags=0x%x", \
 	  (vq)->vq_nentries, (vq)->vq_free_cnt, nused, (vq)->vq_desc_head_idx, \
 	  (vq)->vq_split.ring.avail->idx, (vq)->vq_used_cons_idx, \
-	  __atomic_load_n(&(vq)->vq_split.ring.used->idx, __ATOMIC_RELAXED), \
+	  rte_atomic_load_explicit(&(vq)->vq_split.ring.used->idx, rte_memory_order_relaxed), \
 	  (vq)->vq_split.ring.avail->flags, (vq)->vq_split.ring.used->flags); \
 } while (0)
 #else
-- 
1.8.3.1