From mboxrd@z Thu Jan 1 00:00:00 1970
From: Tyler Retzlaff <roretzla@linux.microsoft.com>
To: dev@dpdk.org
Cc: Mattias Rönnblom, Morten Brørup, Abdullah Sevincer, Ajit Khaparde,
	Alok Prasad, Anatoly Burakov, Andrew Rybchenko, Anoob Joseph,
	Bruce Richardson, Byron Marohn, Chenbo Xia, Chengwen Feng,
	Ciara Loftus, Ciara Power, Dariusz Sosnowski, David Hunt,
	Devendra Singh Rawat, Erik Gabriel Carrillo, Guoyang Zhou,
	Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
	Jakub Grajciar, Jerin Jacob, Jeroen de Borst, Jian Wang,
	Jiawen Wu, Jie Hai, Jingjing Wu, Joshua Washington, Joyce Kong,
	Junfeng Guo, Kevin Laatz, Konstantin Ananyev, Liang Ma, Long Li,
	Maciej Czekaj, Matan Azrad, Maxime Coquelin, Nicolas Chautru,
	Ori Kam, Pavan Nikhilesh, Peter Mccarthy, Rahul Lakkireddy,
	Reshma Pattan, Rosen Xu, Ruifeng Wang, Rushil Gupta,
	Sameh Gobriel, Sivaprasad Tummala, Somnath Kotur,
	Stephen Hemminger, Suanming Mou, Sunil Kumar Kori, Sunil Uttarwar,
	Tetsuya Mukawa, Vamsi Attunuru, Viacheslav Ovsiienko,
	Vladimir Medvedkin, Xiaoyun Wang, Yipeng Wang, Yisen Zhuang,
	Yuying Zhang, Yuying Zhang, Ziyang Xuan, Tyler Retzlaff
Subject: [PATCH 15/46] net/sfc: use rte stdatomic API
Date: Wed, 20 Mar 2024 13:51:01 -0700
Message-Id: <1710967892-7046-16-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>
References: <1710967892-7046-1-git-send-email-roretzla@linux.microsoft.com>

Replace the use of gcc builtin __atomic_xxx intrinsics with
corresponding rte_atomic_xxx optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 drivers/net/sfc/meson.build       |  5 ++---
 drivers/net/sfc/sfc_mae_counter.c | 30 +++++++++++++++---------------
 drivers/net/sfc/sfc_repr_proxy.c  |  8 ++++----
 drivers/net/sfc/sfc_stats.h       |  8 ++++----
 4 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 5adde68..d3603a0 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -47,9 +47,8 @@ int main(void)
     __int128 a = 0;
     __int128 b;
 
-    b = __atomic_load_n(&a, __ATOMIC_RELAXED);
-    __atomic_store(&b, &a, __ATOMIC_RELAXED);
-    __atomic_store_n(&b, a, __ATOMIC_RELAXED);
+    b = rte_atomic_load_explicit(&a, rte_memory_order_relaxed);
+    rte_atomic_store_explicit(&b, a, rte_memory_order_relaxed);
     return 0;
 }
 '''
diff --git a/drivers/net/sfc/sfc_mae_counter.c b/drivers/net/sfc/sfc_mae_counter.c
index ba17295..a32da84 100644
--- a/drivers/net/sfc/sfc_mae_counter.c
+++ b/drivers/net/sfc/sfc_mae_counter.c
@@ -131,8 +131,8 @@
 	 * And it does not depend on different stores/loads in other threads.
 	 * Paired with relaxed ordering in counter increment.
 	 */
-	__atomic_store(&p->reset.pkts_bytes.int128,
-		       &p->value.pkts_bytes.int128, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&p->reset.pkts_bytes.int128,
+	    p->value.pkts_bytes.int128, rte_memory_order_relaxed);
 
 	p->generation_count = generation_count;
 	p->ft_switch_hit_counter = counterp->ft_switch_hit_counter;
@@ -142,7 +142,7 @@
 	 * at the beginning of delete operation. Release ordering is
 	 * paired with acquire ordering on load in counter increment operation.
 	 */
-	__atomic_store_n(&p->inuse, true, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&p->inuse, true, rte_memory_order_release);
 
 	sfc_info(sa, "enabled MAE counter 0x%x-#%u with reset pkts=%" PRIu64
 		 " bytes=%" PRIu64, counterp->type, mae_counter.id,
@@ -189,7 +189,7 @@
 	 * paired with acquire ordering on load in counter increment operation.
 	 */
 	p = &counters->mae_counters[mae_counter->id];
-	__atomic_store_n(&p->inuse, false, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&p->inuse, false, rte_memory_order_release);
 
 	rc = efx_mae_counters_free_type(sa->nic, counter->type, 1, &unused,
 					mae_counter, NULL);
@@ -228,7 +228,7 @@
 	 * Acquire ordering is paired with release ordering in counter add
 	 * and delete operations.
 	 */
-	__atomic_load(&p->inuse, &inuse, __ATOMIC_ACQUIRE);
+	inuse = rte_atomic_load_explicit(&p->inuse, rte_memory_order_acquire);
 	if (!inuse) {
 		/*
 		 * Two possible cases include:
@@ -258,15 +258,15 @@
 	 * And it does not depend on different stores/loads in other threads.
 	 * Paired with relaxed ordering on counter reset.
 	 */
-	__atomic_store(&p->value.pkts_bytes,
-		       &cnt_val.pkts_bytes, __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&p->value.pkts_bytes,
+	    cnt_val.pkts_bytes, rte_memory_order_relaxed);
 
 	if (p->ft_switch_hit_counter != NULL) {
 		uint64_t ft_switch_hit_counter;
 
 		ft_switch_hit_counter = *p->ft_switch_hit_counter + pkts;
-		__atomic_store_n(p->ft_switch_hit_counter, ft_switch_hit_counter,
-				 __ATOMIC_RELAXED);
+		rte_atomic_store_explicit(p->ft_switch_hit_counter, ft_switch_hit_counter,
+		    rte_memory_order_relaxed);
 	}
 
 	sfc_info(sa, "update MAE counter 0x%x-#%u: pkts+%" PRIu64 "=%" PRIu64
@@ -498,8 +498,8 @@
 		&sa->mae.counter_registry;
 	int32_t rc;
 
-	while (__atomic_load_n(&counter_registry->polling.thread.run,
-			       __ATOMIC_ACQUIRE)) {
+	while (rte_atomic_load_explicit(&counter_registry->polling.thread.run,
+	       rte_memory_order_acquire)) {
 		rc = sfc_mae_counter_poll_packets(sa);
 		if (rc == 0) {
 			/*
@@ -684,8 +684,8 @@
 	int rc;
 
 	/* Ensure that flag is set before attempting to join thread */
-	__atomic_store_n(&counter_registry->polling.thread.run, false,
-			 __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&counter_registry->polling.thread.run, false,
+	    rte_memory_order_release);
 
 	rc = rte_thread_join(counter_registry->polling.thread.id, NULL);
 	if (rc != 0)
@@ -1024,8 +1024,8 @@
 	 * And it does not depend on different stores/loads in other threads.
 	 * Paired with relaxed ordering in counter increment.
 	 */
-	value.pkts_bytes.int128 = __atomic_load_n(&p->value.pkts_bytes.int128,
-						  __ATOMIC_RELAXED);
+	value.pkts_bytes.int128 = rte_atomic_load_explicit(&p->value.pkts_bytes.int128,
+	    rte_memory_order_relaxed);
 
 	data->hits_set = 1;
 	data->hits = value.pkts - p->reset.pkts;
diff --git a/drivers/net/sfc/sfc_repr_proxy.c b/drivers/net/sfc/sfc_repr_proxy.c
index ff13795..7275901 100644
--- a/drivers/net/sfc/sfc_repr_proxy.c
+++ b/drivers/net/sfc/sfc_repr_proxy.c
@@ -83,7 +83,7 @@
 	 * Release ordering enforces marker set after data is populated.
 	 * Paired with acquire ordering in sfc_repr_proxy_mbox_handle().
 	 */
-	__atomic_store_n(&mbox->write_marker, true, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&mbox->write_marker, true, rte_memory_order_release);
 
 	/*
 	 * Wait for the representor routine to process the request.
@@ -94,7 +94,7 @@
 		 * Paired with release ordering in sfc_repr_proxy_mbox_handle()
 		 * on acknowledge write.
 		 */
-		if (__atomic_load_n(&mbox->ack, __ATOMIC_ACQUIRE))
+		if (rte_atomic_load_explicit(&mbox->ack, rte_memory_order_acquire))
 			break;
 
 		rte_delay_ms(1);
@@ -119,7 +119,7 @@
 	 * Paired with release ordering in sfc_repr_proxy_mbox_send()
 	 * on marker set.
 	 */
-	if (!__atomic_load_n(&mbox->write_marker, __ATOMIC_ACQUIRE))
+	if (!rte_atomic_load_explicit(&mbox->write_marker, rte_memory_order_acquire))
 		return;
 
 	mbox->write_marker = false;
@@ -146,7 +146,7 @@
 	 * Paired with acquire ordering in sfc_repr_proxy_mbox_send()
 	 * on acknowledge read.
 	 */
-	__atomic_store_n(&mbox->ack, true, __ATOMIC_RELEASE);
+	rte_atomic_store_explicit(&mbox->ack, true, rte_memory_order_release);
 }
 
 static void
diff --git a/drivers/net/sfc/sfc_stats.h b/drivers/net/sfc/sfc_stats.h
index 597e14d..25c2b9e 100644
--- a/drivers/net/sfc/sfc_stats.h
+++ b/drivers/net/sfc/sfc_stats.h
@@ -51,8 +51,8 @@
 	 * Store the result atomically to guarantee that the reader
 	 * core sees both counter updates together.
 	 */
-	__atomic_store_n(&st->pkts_bytes.int128, result.pkts_bytes.int128,
-			 __ATOMIC_RELAXED);
+	rte_atomic_store_explicit(&st->pkts_bytes.int128, result.pkts_bytes.int128,
+	    rte_memory_order_relaxed);
 #else
 	st->pkts += pkts;
 	st->bytes += bytes;
@@ -66,8 +66,8 @@ sfc_pkts_bytes_get(const union sfc_pkts_bytes *st,
 		   union sfc_pkts_bytes *result)
 {
 #if SFC_SW_STATS_ATOMIC
-	result->pkts_bytes.int128 = __atomic_load_n(&st->pkts_bytes.int128,
-						    __ATOMIC_RELAXED);
+	result->pkts_bytes.int128 = rte_atomic_load_explicit(&st->pkts_bytes.int128,
+	    rte_memory_order_relaxed);
 #else
 	*result = *st;
 #endif
-- 
1.8.3.1