From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Oct 2023 09:48:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 11/19] stack: use rte optional stdatomic API
Content-Language: en-US, ru-RU
To: Tyler Retzlaff, dev@dpdk.org
Cc: Akhil Goyal, Anatoly Burakov, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Ciara Power, David Christensen, David Hunt, Dmitry Kozlyuk,
 Dmitry Malloy, Elena Agostini, Erik Gabriel Carrillo, Fan Zhang,
 Ferruh Yigit, Harman Kalra, Harry van Haaren, Honnappa Nagarahalli,
 Jerin Jacob, Matan Azrad, Maxime Coquelin, Narcisa Ana Maria Vasile,
 Nicolas Chautru, Olivier Matz, Ori Kam, Pallavi Kadam, Pavan Nikhilesh,
 Reshma Pattan, Sameh Gobriel, Shijith Thotton, Sivaprasad Tummala,
 Stephen Hemminger, Suanming Mou, Sunil Kumar Kori, Thomas Monjalon,
 Viacheslav Ovsiienko, Vladimir Medvedkin, Yipeng Wang
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-12-git-send-email-roretzla@linux.microsoft.com>
From: Konstantin Ananyev
In-Reply-To: <1697574677-16578-12-git-send-email-roretzla@linux.microsoft.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
List-Id: DPDK patches and discussions

17.10.2023 21:31, Tyler Retzlaff wrote:
> Replace the use of gcc builtin __atomic_xxx intrinsics with
> corresponding rte_atomic_xxx optional stdatomic API
> 
> Signed-off-by: Tyler Retzlaff
> ---
>  lib/stack/rte_stack.h            |  2 +-
>  lib/stack/rte_stack_lf_c11.h     | 24 ++++++++++++------------
>  lib/stack/rte_stack_lf_generic.h | 18 +++++++++---------
>  3 files changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/lib/stack/rte_stack.h b/lib/stack/rte_stack.h
> index 921d29a..a379300 100644
> --- a/lib/stack/rte_stack.h
> +++ b/lib/stack/rte_stack.h
> @@ -44,7 +44,7 @@ struct rte_stack_lf_list {
>  	/** List head */
>  	struct rte_stack_lf_head head __rte_aligned(16);
>  	/** List len */
> -	uint64_t len;
> +	RTE_ATOMIC(uint64_t) len;
>  };
>  
>  /* Structure containing two lock-free LIFO lists: the stack itself and a list
> diff --git a/lib/stack/rte_stack_lf_c11.h b/lib/stack/rte_stack_lf_c11.h
> index 687a6f6..9cb6998 100644
> --- a/lib/stack/rte_stack_lf_c11.h
> +++ b/lib/stack/rte_stack_lf_c11.h
> @@ -26,8 +26,8 @@
>  	 * elements. If the mempool is near-empty to the point that this is a
>  	 * concern, the user should consider increasing the mempool size.
>  	 */
> -	return (unsigned int)__atomic_load_n(&s->stack_lf.used.len,
> -			__ATOMIC_RELAXED);
> +	return (unsigned int)rte_atomic_load_explicit(&s->stack_lf.used.len,
> +			rte_memory_order_relaxed);
>  }
>  
>  static __rte_always_inline void
> @@ -59,14 +59,14 @@
>  				(rte_int128_t *)&list->head,
>  				(rte_int128_t *)&old_head,
>  				(rte_int128_t *)&new_head,
> -				1, __ATOMIC_RELEASE,
> -				__ATOMIC_RELAXED);
> +				1, rte_memory_order_release,
> +				rte_memory_order_relaxed);
>  	} while (success == 0);
>  
>  	/* Ensure the stack modifications are not reordered with respect
>  	 * to the LIFO len update.
>  	 */
> -	__atomic_fetch_add(&list->len, num, __ATOMIC_RELEASE);
> +	rte_atomic_fetch_add_explicit(&list->len, num, rte_memory_order_release);
>  }
>  
>  static __rte_always_inline struct rte_stack_lf_elem *
> @@ -80,7 +80,7 @@
>  	int success;
>  
>  	/* Reserve num elements, if available */
> -	len = __atomic_load_n(&list->len, __ATOMIC_RELAXED);
> +	len = rte_atomic_load_explicit(&list->len, rte_memory_order_relaxed);
>  
>  	while (1) {
>  		/* Does the list contain enough elements? */
> @@ -88,10 +88,10 @@
>  			return NULL;
>  
>  		/* len is updated on failure */
> -		if (__atomic_compare_exchange_n(&list->len,
> +		if (rte_atomic_compare_exchange_weak_explicit(&list->len,
>  				&len, len - num,
> -				1, __ATOMIC_ACQUIRE,
> -				__ATOMIC_RELAXED))
> +				rte_memory_order_acquire,
> +				rte_memory_order_relaxed))
>  			break;
>  	}
>  
> @@ -110,7 +110,7 @@
>  	 * elements are properly ordered with respect to the head
>  	 * pointer read.
>  	 */
> -	__atomic_thread_fence(__ATOMIC_ACQUIRE);
> +	__atomic_thread_fence(rte_memory_order_acquire);
>  
>  	rte_prefetch0(old_head.top);
>  
> @@ -159,8 +159,8 @@
>  				(rte_int128_t *)&list->head,
>  				(rte_int128_t *)&old_head,
>  				(rte_int128_t *)&new_head,
> -				0, __ATOMIC_RELAXED,
> -				__ATOMIC_RELAXED);
> +				0, rte_memory_order_relaxed,
> +				rte_memory_order_relaxed);
>  	} while (success == 0);
>  
>  	return old_head.top;
> diff --git a/lib/stack/rte_stack_lf_generic.h b/lib/stack/rte_stack_lf_generic.h
> index 39f7ff3..cc69e4d 100644
> --- a/lib/stack/rte_stack_lf_generic.h
> +++ b/lib/stack/rte_stack_lf_generic.h
> @@ -27,7 +27,7 @@
>  	 * concern, the user should consider increasing the mempool size.
>  	 */
>  	/* NOTE: review for potential ordering optimization */
> -	return __atomic_load_n(&s->stack_lf.used.len, __ATOMIC_SEQ_CST);
> +	return rte_atomic_load_explicit(&s->stack_lf.used.len, rte_memory_order_seq_cst);
>  }
>  
>  static __rte_always_inline void
> @@ -64,11 +64,11 @@
>  				(rte_int128_t *)&list->head,
>  				(rte_int128_t *)&old_head,
>  				(rte_int128_t *)&new_head,
> -				1, __ATOMIC_RELEASE,
> -				__ATOMIC_RELAXED);
> +				1, rte_memory_order_release,
> +				rte_memory_order_relaxed);
>  	} while (success == 0);
>  	/* NOTE: review for potential ordering optimization */
> -	__atomic_fetch_add(&list->len, num, __ATOMIC_SEQ_CST);
> +	rte_atomic_fetch_add_explicit(&list->len, num, rte_memory_order_seq_cst);
>  }
>  
>  static __rte_always_inline struct rte_stack_lf_elem *
> @@ -83,15 +83,15 @@
>  	/* Reserve num elements, if available */
>  	while (1) {
>  		/* NOTE: review for potential ordering optimization */
> -		uint64_t len = __atomic_load_n(&list->len, __ATOMIC_SEQ_CST);
> +		uint64_t len = rte_atomic_load_explicit(&list->len, rte_memory_order_seq_cst);
>  
>  		/* Does the list contain enough elements? */
>  		if (unlikely(len < num))
>  			return NULL;
>  
>  		/* NOTE: review for potential ordering optimization */
> -		if (__atomic_compare_exchange_n(&list->len, &len, len - num,
> -				0, __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST))
> +		if (rte_atomic_compare_exchange_strong_explicit(&list->len, &len, len - num,
> +				rte_memory_order_seq_cst, rte_memory_order_seq_cst))
>  			break;
>  	}
>  
> @@ -143,8 +143,8 @@
>  				(rte_int128_t *)&list->head,
>  				(rte_int128_t *)&old_head,
>  				(rte_int128_t *)&new_head,
> -				1, __ATOMIC_RELEASE,
> -				__ATOMIC_RELAXED);
> +				1, rte_memory_order_release,
> +				rte_memory_order_relaxed);
>  	} while (success == 0);
>  
>  	return old_head.top;

Acked-by: Konstantin Ananyev