From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 882A643D09;
	Wed, 20 Mar 2024 21:56:46 +0100 (CET)
Received: from mails.dpdk.org (localhost [127.0.0.1])
	by mails.dpdk.org (Postfix) with ESMTP id C5D7742F4D;
	Wed, 20 Mar 2024 21:52:37 +0100 (CET)
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by mails.dpdk.org (Postfix) with ESMTP id B9C1A42DF9
 for <dev@dpdk.org>; Wed, 20 Mar 2024 21:51:47 +0100 (CET)
Received: by linux.microsoft.com (Postfix, from userid 1086)
 id 66D0A20B479C; Wed, 20 Mar 2024 13:51:34 -0700 (PDT)
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com 66D0A20B479C
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
 s=default; t=1710967896;
 bh=j14S1bq2Vk1WGuThjtkCrkKt8B7PPTdfIH5Q1NsfGHM=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=SxewF9CmqNJ3K1sdbPNhfxXtM9DonO6EqiKZgF9m9YMemvSayrh/LeeRNdtQw80t5
 v3gMTzMePwi1d2G6UdbVUfAMlijBAMLMyDwn2x+AlPzagQ5/cuq68lns3UsaNDxJDy
 fNqZoemXsVIl3ctsDR8amToRal+/92Dy0yr5cfkw=
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Mattias Rönnblom <mattias.ronnblom@ericsson.com>,
 Morten Brørup <mb@smartsharesystems.com>,
 Abdullah Sevincer <abdullah.sevincer@intel.com>,
 Ajit Khaparde <ajit.khaparde@broadcom.com>,
 Alok Prasad <palok@marvell.com>,
 Anatoly Burakov <anatoly.burakov@intel.com>,
 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
 Anoob Joseph <anoobj@marvell.com>,
 Bruce Richardson <bruce.richardson@intel.com>,
 Byron Marohn <byron.marohn@intel.com>, Chenbo Xia <chenbox@nvidia.com>,
 Chengwen Feng <fengchengwen@huawei.com>,
 Ciara Loftus <ciara.loftus@intel.com>, Ciara Power <ciara.power@intel.com>,
 Dariusz Sosnowski <dsosnowski@nvidia.com>,
 David Hunt <david.hunt@intel.com>,
 Devendra Singh Rawat <dsinghrawat@marvell.com>,
 Erik Gabriel Carrillo <erik.g.carrillo@intel.com>,
 Guoyang Zhou <zhouguoyang@huawei.com>, Harman Kalra <hkalra@marvell.com>,
 Harry van Haaren <harry.van.haaren@intel.com>,
 Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>,
 Jakub Grajciar <jgrajcia@cisco.com>, Jerin Jacob <jerinj@marvell.com>,
 Jeroen de Borst <jeroendb@google.com>, Jian Wang <jianwang@trustnetic.com>,
 Jiawen Wu <jiawenwu@trustnetic.com>, Jie Hai <haijie1@huawei.com>,
 Jingjing Wu <jingjing.wu@intel.com>,
 Joshua Washington <joshwash@google.com>, Joyce Kong <joyce.kong@arm.com>,
 Junfeng Guo <junfeng.guo@intel.com>, Kevin Laatz <kevin.laatz@intel.com>,
 Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>,
 Liang Ma <liangma@liangbit.com>, Long Li <longli@microsoft.com>,
 Maciej Czekaj <mczekaj@marvell.com>, Matan Azrad <matan@nvidia.com>,
 Maxime Coquelin <maxime.coquelin@redhat.com>,
 Nicolas Chautru <nicolas.chautru@intel.com>, Ori Kam <orika@nvidia.com>,
 Pavan Nikhilesh <pbhagavatula@marvell.com>,
 Peter Mccarthy <peter.mccarthy@intel.com>,
 Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>,
 Reshma Pattan <reshma.pattan@intel.com>, Rosen Xu <rosen.xu@intel.com>,
 Ruifeng Wang <ruifeng.wang@arm.com>, Rushil Gupta <rushilg@google.com>,
 Sameh Gobriel <sameh.gobriel@intel.com>,
 Sivaprasad Tummala <sivaprasad.tummala@amd.com>,
 Somnath Kotur <somnath.kotur@broadcom.com>,
 Stephen Hemminger <stephen@networkplumber.org>,
 Suanming Mou <suanmingm@nvidia.com>, Sunil Kumar Kori <skori@marvell.com>,
 Sunil Uttarwar <sunilprakashrao.uttarwar@amd.com>,
 Tetsuya Mukawa <mtetsuyah@gmail.com>,
 Vamsi Attunuru <vattunuru@marvell.com>,
 Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
 Vladimir Medvedkin <vladimir.medvedkin@intel.com>,
 Xiaoyun Wang <cloud.wangxiaoyun@huawei.com>,
 Yipeng Wang <yipeng1.wang@intel.com>,
 Yisen Zhuang <yisen.zhuang@huawei.com>,
 Yuying Zhang <yuying.zhang@intel.com>,
 Ziyang Xuan <xuanziyang2@huawei.com>,
 Tyler Retzlaff <roretzla@linux.microsoft.com>
Subject: [PATCH 46/46] app/test-bbdev: use rte stdatomic API
Date: Wed, 20 Mar 2024 13:51:32 -0700
Message-Id: <1710967892-7046-47-git-send-email-roretzla@linux.microsoft.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org

Replace the use of the gcc builtin __atomic_xxx intrinsics with the
corresponding rte_atomic_xxx calls from the optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
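Notes:
    As a sketch of the conversion pattern (the calls below are taken
    from this patch; tp is the per-thread state in test_bbdev_perf.c,
    and the fields involved are now declared RTE_ATOMIC(uint16_t)):

        /* before: gcc builtin with an __ATOMIC_xxx ordering constant */
        __atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
        deq = __atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED);

        /* after: rte stdatomic API with rte_memory_order_xxx */
        rte_atomic_store_explicit(&tp->nb_dequeued, 0,
            rte_memory_order_relaxed);
        deq = rte_atomic_load_explicit(&tp->nb_dequeued,
            rte_memory_order_relaxed);

    rte_wait_until_equal_16() still takes a plain (volatile) uint16_t
    pointer rather than an RTE_ATOMIC-qualified one, so the atomic
    qualifier is cast away through uintptr_t at those call sites:

        rte_wait_until_equal_16(
            (uint16_t *)(uintptr_t)&tp->op_params->sync,
            SYNC_START, rte_memory_order_relaxed);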
 app/test-bbdev/test_bbdev_perf.c | 183 +++++++++++++++++++++++----------------
 1 file changed, 110 insertions(+), 73 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_perf.c b/app/test-bbdev/test_bbdev_perf.c
index dcce00a..9694ed3 100644
--- a/app/test-bbdev/test_bbdev_perf.c
+++ b/app/test-bbdev/test_bbdev_perf.c
@@ -144,7 +144,7 @@ struct test_op_params {
 	uint16_t num_to_process;
 	uint16_t num_lcores;
 	int vector_mask;
-	uint16_t sync;
+	RTE_ATOMIC(uint16_t) sync;
 	struct test_buffers q_bufs[RTE_MAX_NUMA_NODES][MAX_QUEUES];
 };
 
@@ -159,9 +159,9 @@ struct thread_params {
 	uint8_t iter_count;
 	double iter_average;
 	double bler;
-	uint16_t nb_dequeued;
-	int16_t processing_status;
-	uint16_t burst_sz;
+	RTE_ATOMIC(uint16_t) nb_dequeued;
+	RTE_ATOMIC(int16_t) processing_status;
+	RTE_ATOMIC(uint16_t) burst_sz;
 	struct test_op_params *op_params;
 	struct rte_bbdev_dec_op *dec_ops[MAX_BURST];
 	struct rte_bbdev_enc_op *enc_ops[MAX_BURST];
@@ -3195,56 +3195,64 @@ typedef int (test_case_function)(struct active_device *ad,
 	}
 
 	if (unlikely(event != RTE_BBDEV_EVENT_DEQUEUE)) {
-		__atomic_store_n(&tp->processing_status, TEST_FAILED, __ATOMIC_RELAXED);
+		rte_atomic_store_explicit(&tp->processing_status, TEST_FAILED,
+		    rte_memory_order_relaxed);
 		printf(
 			"Dequeue interrupt handler called for incorrect event!\n");
 		return;
 	}
 
-	burst_sz = __atomic_load_n(&tp->burst_sz, __ATOMIC_RELAXED);
+	burst_sz = rte_atomic_load_explicit(&tp->burst_sz, rte_memory_order_relaxed);
 	num_ops = tp->op_params->num_to_process;
 
 	if (test_vector.op_type == RTE_BBDEV_OP_TURBO_DEC)
 		deq = rte_bbdev_dequeue_dec_ops(dev_id, queue_id,
 				&tp->dec_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 	else if (test_vector.op_type == RTE_BBDEV_OP_LDPC_DEC)
 		deq = rte_bbdev_dequeue_ldpc_dec_ops(dev_id, queue_id,
 				&tp->dec_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 	else if (test_vector.op_type == RTE_BBDEV_OP_LDPC_ENC)
 		deq = rte_bbdev_dequeue_ldpc_enc_ops(dev_id, queue_id,
 				&tp->enc_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 	else if (test_vector.op_type == RTE_BBDEV_OP_FFT)
 		deq = rte_bbdev_dequeue_fft_ops(dev_id, queue_id,
 				&tp->fft_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 	else if (test_vector.op_type == RTE_BBDEV_OP_MLDTS)
 		deq = rte_bbdev_dequeue_mldts_ops(dev_id, queue_id,
 				&tp->mldts_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 	else /*RTE_BBDEV_OP_TURBO_ENC*/
 		deq = rte_bbdev_dequeue_enc_ops(dev_id, queue_id,
 				&tp->enc_ops[
-					__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED)],
+					rte_atomic_load_explicit(&tp->nb_dequeued,
+					    rte_memory_order_relaxed)],
 				burst_sz);
 
 	if (deq < burst_sz) {
 		printf(
 			"After receiving the interrupt all operations should be dequeued. Expected: %u, got: %u\n",
 			burst_sz, deq);
-		__atomic_store_n(&tp->processing_status, TEST_FAILED, __ATOMIC_RELAXED);
+		rte_atomic_store_explicit(&tp->processing_status, TEST_FAILED,
+		    rte_memory_order_relaxed);
 		return;
 	}
 
-	if (__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED) + deq < num_ops) {
-		__atomic_fetch_add(&tp->nb_dequeued, deq, __ATOMIC_RELAXED);
+	if (rte_atomic_load_explicit(&tp->nb_dequeued, rte_memory_order_relaxed) + deq < num_ops) {
+		rte_atomic_fetch_add_explicit(&tp->nb_dequeued, deq, rte_memory_order_relaxed);
 		return;
 	}
 
@@ -3288,7 +3296,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	if (ret) {
 		printf("Buffers validation failed\n");
-		__atomic_store_n(&tp->processing_status, TEST_FAILED, __ATOMIC_RELAXED);
+		rte_atomic_store_explicit(&tp->processing_status, TEST_FAILED,
+		    rte_memory_order_relaxed);
 	}
 
 	switch (test_vector.op_type) {
@@ -3315,7 +3324,8 @@ typedef int (test_case_function)(struct active_device *ad,
 		break;
 	default:
 		printf("Unknown op type: %d\n", test_vector.op_type);
-		__atomic_store_n(&tp->processing_status, TEST_FAILED, __ATOMIC_RELAXED);
+		rte_atomic_store_explicit(&tp->processing_status, TEST_FAILED,
+		    rte_memory_order_relaxed);
 		return;
 	}
 
@@ -3324,7 +3334,7 @@ typedef int (test_case_function)(struct active_device *ad,
 	tp->mbps += (((double)(num_ops * tb_len_bits)) / 1000000.0) /
 			((double)total_time / (double)rte_get_tsc_hz());
 
-	__atomic_fetch_add(&tp->nb_dequeued, deq, __ATOMIC_RELAXED);
+	rte_atomic_fetch_add_explicit(&tp->nb_dequeued, deq, rte_memory_order_relaxed);
 }
 
 static int
@@ -3362,10 +3372,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops,
 				num_to_process);
@@ -3415,15 +3426,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3459,10 +3472,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops,
 				num_to_process);
@@ -3506,15 +3520,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3549,10 +3565,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops,
 			num_to_process);
@@ -3592,15 +3609,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3636,10 +3655,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops,
 			num_to_process);
@@ -3681,15 +3701,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3725,10 +3747,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_fft_op_alloc_bulk(tp->op_params->mp, ops,
 			num_to_process);
@@ -3769,15 +3792,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3811,10 +3836,11 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	__atomic_store_n(&tp->processing_status, 0, __ATOMIC_RELAXED);
-	__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&tp->processing_status, 0, rte_memory_order_relaxed);
+	rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_mldts_op_alloc_bulk(tp->op_params->mp, ops, num_to_process);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_to_process);
@@ -3851,15 +3877,17 @@ typedef int (test_case_function)(struct active_device *ad,
 			 * the number of operations is not a multiple of
 			 * burst size.
 			 */
-			__atomic_store_n(&tp->burst_sz, num_to_enq, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->burst_sz, num_to_enq,
+			    rte_memory_order_relaxed);
 
 			/* Wait until processing of previous batch is
 			 * completed
 			 */
-			rte_wait_until_equal_16(&tp->nb_dequeued, enqueued, __ATOMIC_RELAXED);
+			rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->nb_dequeued, enqueued,
+			    rte_memory_order_relaxed);
 		}
 		if (j != TEST_REPETITIONS - 1)
-			__atomic_store_n(&tp->nb_dequeued, 0, __ATOMIC_RELAXED);
+			rte_atomic_store_explicit(&tp->nb_dequeued, 0, rte_memory_order_relaxed);
 	}
 
 	return TEST_SUCCESS;
@@ -3894,7 +3922,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4013,7 +4042,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4148,7 +4178,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4271,7 +4302,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_dec_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4402,7 +4434,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops_enq,
 			num_ops);
@@ -4503,7 +4536,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_enc_op_alloc_bulk(tp->op_params->mp, ops_enq,
 			num_ops);
@@ -4604,7 +4638,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_fft_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4702,7 +4737,8 @@ typedef int (test_case_function)(struct active_device *ad,
 
 	bufs = &tp->op_params->q_bufs[GET_SOCKET(info.socket_id)][queue_id];
 
-	rte_wait_until_equal_16(&tp->op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_wait_until_equal_16((uint16_t *)(uintptr_t)&tp->op_params->sync, SYNC_START,
+	    rte_memory_order_relaxed);
 
 	ret = rte_bbdev_mldts_op_alloc_bulk(tp->op_params->mp, ops_enq, num_ops);
 	TEST_ASSERT_SUCCESS(ret, "Allocation failed for %d ops", num_ops);
@@ -4898,7 +4934,7 @@ typedef int (test_case_function)(struct active_device *ad,
 	else
 		return TEST_SKIPPED;
 
-	__atomic_store_n(&op_params->sync, SYNC_WAIT, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&op_params->sync, SYNC_WAIT, rte_memory_order_relaxed);
 
 	/* Main core is set at first entry */
 	t_params[0].dev_id = ad->dev_id;
@@ -4921,7 +4957,7 @@ typedef int (test_case_function)(struct active_device *ad,
 				&t_params[used_cores++], lcore_id);
 	}
 
-	__atomic_store_n(&op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&op_params->sync, SYNC_START, rte_memory_order_relaxed);
 	ret = bler_function(&t_params[0]);
 
 	/* Main core is always used */
@@ -5024,7 +5060,7 @@ typedef int (test_case_function)(struct active_device *ad,
 			throughput_function = throughput_pmd_lcore_enc;
 	}
 
-	__atomic_store_n(&op_params->sync, SYNC_WAIT, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&op_params->sync, SYNC_WAIT, rte_memory_order_relaxed);
 
 	/* Main core is set at first entry */
 	t_params[0].dev_id = ad->dev_id;
@@ -5047,7 +5083,7 @@ typedef int (test_case_function)(struct active_device *ad,
 				&t_params[used_cores++], lcore_id);
 	}
 
-	__atomic_store_n(&op_params->sync, SYNC_START, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&op_params->sync, SYNC_START, rte_memory_order_relaxed);
 	ret = throughput_function(&t_params[0]);
 
 	/* Main core is always used */
@@ -5077,29 +5113,30 @@ typedef int (test_case_function)(struct active_device *ad,
 	 * Wait for main lcore operations.
 	 */
 	tp = &t_params[0];
-	while ((__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED) <
+	while ((rte_atomic_load_explicit(&tp->nb_dequeued, rte_memory_order_relaxed) <
 		op_params->num_to_process) &&
-		(__atomic_load_n(&tp->processing_status, __ATOMIC_RELAXED) !=
+		(rte_atomic_load_explicit(&tp->processing_status, rte_memory_order_relaxed) !=
 		TEST_FAILED))
 		rte_pause();
 
 	tp->ops_per_sec /= TEST_REPETITIONS;
 	tp->mbps /= TEST_REPETITIONS;
-	ret |= (int)__atomic_load_n(&tp->processing_status, __ATOMIC_RELAXED);
+	ret |= (int)rte_atomic_load_explicit(&tp->processing_status, rte_memory_order_relaxed);
 
 	/* Wait for worker lcores operations */
 	for (used_cores = 1; used_cores < num_lcores; used_cores++) {
 		tp = &t_params[used_cores];
 
-		while ((__atomic_load_n(&tp->nb_dequeued, __ATOMIC_RELAXED) <
+		while ((rte_atomic_load_explicit(&tp->nb_dequeued, rte_memory_order_relaxed) <
 			op_params->num_to_process) &&
-			(__atomic_load_n(&tp->processing_status, __ATOMIC_RELAXED) !=
-			TEST_FAILED))
+			(rte_atomic_load_explicit(&tp->processing_status,
+			    rte_memory_order_relaxed) != TEST_FAILED))
 			rte_pause();
 
 		tp->ops_per_sec /= TEST_REPETITIONS;
 		tp->mbps /= TEST_REPETITIONS;
-		ret |= (int)__atomic_load_n(&tp->processing_status, __ATOMIC_RELAXED);
+		ret |= (int)rte_atomic_load_explicit(&tp->processing_status,
+		    rte_memory_order_relaxed);
 	}
 
 	/* Print throughput if test passed */
-- 
1.8.3.1