From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 25 Oct 2023 15:38:14 -0700
From: Tyler Retzlaff
To: Ruifeng Wang
Cc: dev@dpdk.org, Akhil Goyal, Anatoly Burakov, Andrew Rybchenko,
	Bruce Richardson, Chenbo Xia, Ciara Power, David Christensen,
	David Hunt, Dmitry Kozlyuk, Dmitry Malloy, Elena Agostini,
	Erik Gabriel Carrillo, Fan Zhang, Ferruh Yigit, Harman Kalra,
	Harry van Haaren, Honnappa Nagarahalli, jerinj@marvell.com,
	Konstantin Ananyev, Matan Azrad, Maxime Coquelin,
	Narcisa Ana Maria Vasile, Nicolas Chautru, Olivier Matz, Ori Kam,
	Pallavi Kadam, Pavan Nikhilesh, Reshma Pattan, Sameh Gobriel,
	Shijith Thotton, Sivaprasad Tummala, Stephen Hemminger,
	Suanming Mou, Sunil Kumar Kori, thomas@monjalon.net,
	Viacheslav Ovsiienko, Vladimir Medvedkin, Yipeng Wang, nd
Subject: Re: [PATCH v2 09/19] rcu: use rte optional stdatomic API
Message-ID:
 <20231025223814.GA30459@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
References: <1697497745-20664-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-1-git-send-email-roretzla@linux.microsoft.com>
 <1697574677-16578-10-git-send-email-roretzla@linux.microsoft.com>
User-Agent: Mutt/1.5.21 (2010-09-15)

On Wed, Oct 25, 2023 at 09:41:22AM +0000, Ruifeng Wang wrote:
> > -----Original Message-----
> > From: Tyler Retzlaff
> > Sent: Wednesday, October 18, 2023 4:31 AM
> > To: dev@dpdk.org
> > Subject: [PATCH v2 09/19] rcu: use rte optional stdatomic API
> >
> > Replace the use of gcc builtin __atomic_xxx intrinsics with corresponding
> > rte_atomic_xxx optional stdatomic API
> >
> > Signed-off-by: Tyler Retzlaff
> > ---
> >  lib/rcu/rte_rcu_qsbr.c | 48 +++++++++++++++++------------------
> >  lib/rcu/rte_rcu_qsbr.h | 68 +++++++++++++++++++++++++-------------------------
> >  2 files changed, 58 insertions(+), 58 deletions(-)
> >
> > diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c
> > index 17be93e..4dc7714 100644
> > --- a/lib/rcu/rte_rcu_qsbr.c
> > +++ b/lib/rcu/rte_rcu_qsbr.c
> > @@ -102,21 +102,21 @@
> >  	 * go out of sync. Hence, additional checks are required.
> >  	 */
> >  	/* Check if the thread is already registered */
> > -	old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -					__ATOMIC_RELAXED);
> > +	old_bmap = rte_atomic_load_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > +					rte_memory_order_relaxed);
> >  	if (old_bmap & 1UL << id)
> >  		return 0;
> >
> >  	do {
> >  		new_bmap = old_bmap | (1UL << id);
> > -		success = __atomic_compare_exchange(
> > +		success = rte_atomic_compare_exchange_strong_explicit(
> >  				__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -				&old_bmap, &new_bmap, 0,
> > -				__ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > +				&old_bmap, new_bmap,
> > +				rte_memory_order_release, rte_memory_order_relaxed);
> >
> >  		if (success)
> > -			__atomic_fetch_add(&v->num_threads,
> > -						1, __ATOMIC_RELAXED);
> > +			rte_atomic_fetch_add_explicit(&v->num_threads,
> > +						1, rte_memory_order_relaxed);
> >  		else if (old_bmap & (1UL << id))
> >  			/* Someone else registered this thread.
> >  			 * Counter should not be incremented.
> > @@ -154,8 +154,8 @@
> >  	 * go out of sync. Hence, additional checks are required.
> >  	 */
> >  	/* Check if the thread is already unregistered */
> > -	old_bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -					__ATOMIC_RELAXED);
> > +	old_bmap = rte_atomic_load_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > +					rte_memory_order_relaxed);
> >  	if (!(old_bmap & (1UL << id)))
> >  		return 0;
> >
> > @@ -165,14 +165,14 @@
> >  		 * completed before removal of the thread from the list of
> >  		 * reporting threads.
> >  		 */
> > -		success = __atomic_compare_exchange(
> > +		success = rte_atomic_compare_exchange_strong_explicit(
> >  				__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -				&old_bmap, &new_bmap, 0,
> > -				__ATOMIC_RELEASE, __ATOMIC_RELAXED);
> > +				&old_bmap, new_bmap,
> > +				rte_memory_order_release, rte_memory_order_relaxed);
> >
> >  		if (success)
> > -			__atomic_fetch_sub(&v->num_threads,
> > -						1, __ATOMIC_RELAXED);
> > +			rte_atomic_fetch_sub_explicit(&v->num_threads,
> > +						1, rte_memory_order_relaxed);
> >  		else if (!(old_bmap & (1UL << id)))
> >  			/* Someone else unregistered this thread.
> >  			 * Counter should not be incremented.
> > @@ -227,8 +227,8 @@
> >
> >  	fprintf(f, "  Registered thread IDs = ");
> >  	for (i = 0; i < v->num_elems; i++) {
> > -		bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -					__ATOMIC_ACQUIRE);
> > +		bmap = rte_atomic_load_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > +					rte_memory_order_acquire);
> >  		id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
> >  		while (bmap) {
> >  			t = __builtin_ctzl(bmap);
> > @@ -241,26 +241,26 @@
> >  	fprintf(f, "\n");
> >
> >  	fprintf(f, "  Token = %" PRIu64 "\n",
> > -			__atomic_load_n(&v->token, __ATOMIC_ACQUIRE));
> > +			rte_atomic_load_explicit(&v->token, rte_memory_order_acquire));
> >
> >  	fprintf(f, "  Least Acknowledged Token = %" PRIu64 "\n",
> > -			__atomic_load_n(&v->acked_token, __ATOMIC_ACQUIRE));
> > +			rte_atomic_load_explicit(&v->acked_token, rte_memory_order_acquire));
> >
> >  	fprintf(f, "Quiescent State Counts for readers:\n");
> >  	for (i = 0; i < v->num_elems; i++) {
> > -		bmap = __atomic_load_n(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > -					__ATOMIC_ACQUIRE);
> > +		bmap = rte_atomic_load_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> > +					rte_memory_order_acquire);
> >  		id = i << __RTE_QSBR_THRID_INDEX_SHIFT;
> >  		while (bmap) {
> >  			t = __builtin_ctzl(bmap);
> >  			fprintf(f, "thread ID = %u, count = %" PRIu64 ", lock count = %u\n",
> >  				id + t,
> > -				__atomic_load_n(
> > +				rte_atomic_load_explicit(
> >  					&v->qsbr_cnt[id + t].cnt,
> > -					__ATOMIC_RELAXED),
> > -				__atomic_load_n(
> > +					rte_memory_order_relaxed),
> > +				rte_atomic_load_explicit(
> >  					&v->qsbr_cnt[id + t].lock_cnt,
> > -					__ATOMIC_RELAXED));
> > +					rte_memory_order_relaxed));
> >  			bmap &= ~(1UL << t);
> >  		}
> >  	}
> > diff --git a/lib/rcu/rte_rcu_qsbr.h b/lib/rcu/rte_rcu_qsbr.h
> > index 87e1b55..9f4aed2 100644
> > --- a/lib/rcu/rte_rcu_qsbr.h
> > +++ b/lib/rcu/rte_rcu_qsbr.h
> > @@ -63,11 +63,11 @@
> >   * Given thread id needs to be converted to index into the array and
> >   * the id within the array element.
> >   */
> > -#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(uint64_t) * 8)
> > +#define __RTE_QSBR_THRID_ARRAY_ELM_SIZE (sizeof(RTE_ATOMIC(uint64_t)) * 8)
> >  #define __RTE_QSBR_THRID_ARRAY_SIZE(max_threads) \
> >  	RTE_ALIGN(RTE_ALIGN_MUL_CEIL(max_threads, \
> >  		__RTE_QSBR_THRID_ARRAY_ELM_SIZE) >> 3, RTE_CACHE_LINE_SIZE)
> > -#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t *) \
> > +#define __RTE_QSBR_THRID_ARRAY_ELM(v, i) ((uint64_t __rte_atomic *) \
>
> Is it equivalent to ((RTE_ATOMIC(uint64_t) *)?

I'm not sure if you're asking about the resultant type of the expression
or not. In this context we aren't specifying an atomic type; rather, the
cast adds the atomic qualifier to what should already be a variable
declared with an atomic-specified type, which is why we use __rte_atomic.

> >  		((struct rte_rcu_qsbr_cnt *)(v + 1) + v->max_threads) + i)
> >  #define __RTE_QSBR_THRID_INDEX_SHIFT 6
> >  #define __RTE_QSBR_THRID_MASK 0x3f
> > @@ -75,13 +75,13 @@
> >
> >