DPDK patches and discussions
* [PATCH 0/5] use rte atomic thread fence
@ 2023-11-02  3:04 Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 1/5] distributor: " Tyler Retzlaff
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Replace use of __atomic_thread_fence with __rte_atomic_thread_fence.

It may be appropriate to use rte_atomic_thread_fence instead but it
will be up to maintainers to evaluate and make the change if appropriate.
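As a minimal sketch of the substitution (the helper functions and the acquire
fence below are illustrative only, assuming the rte_memory_order_* names and
the __rte_atomic_thread_fence wrapper come from rte_stdatomic.h):

#include <rte_stdatomic.h>

static inline void
fence_before(void)
{
    /* original code: GCC built-in called directly */
    __atomic_thread_fence(rte_memory_order_acquire);
}

static inline void
fence_after(void)
{
    /* this series: EAL-provided wrapper, same memory order argument */
    __rte_atomic_thread_fence(rte_memory_order_acquire);
}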

Tyler Retzlaff (5):
  distributor: use rte atomic thread fence
  eal: use rte atomic thread fence
  hash: use rte atomic thread fence
  ring: use rte atomic thread fence
  stack: use rte atomic thread fence

 lib/distributor/rte_distributor.c |  2 +-
 lib/eal/common/eal_common_trace.c |  2 +-
 lib/hash/rte_cuckoo_hash.c        | 10 +++++-----
 lib/ring/rte_ring_c11_pvt.h       |  4 ++--
 lib/stack/rte_stack_lf_c11.h      |  2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

-- 
1.8.3.1



* [PATCH 1/5] distributor: use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
@ 2023-11-02  3:04 ` Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 2/5] eal: " Tyler Retzlaff
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Use __rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/distributor/rte_distributor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/distributor/rte_distributor.c b/lib/distributor/rte_distributor.c
index 2ecb95c..848cf85 100644
--- a/lib/distributor/rte_distributor.c
+++ b/lib/distributor/rte_distributor.c
@@ -187,7 +187,7 @@
 	}
 
 	/* Sync with distributor to acquire retptrs */
-	__atomic_thread_fence(rte_memory_order_acquire);
+	__rte_atomic_thread_fence(rte_memory_order_acquire);
 	for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
 		/* Switch off the return bit first */
 		buf->retptr64[i] = 0;
-- 
1.8.3.1



* [PATCH 2/5] eal: use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 1/5] distributor: " Tyler Retzlaff
@ 2023-11-02  3:04 ` Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 3/5] hash: " Tyler Retzlaff
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Use __rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/eal/common/eal_common_trace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index 6ad87fc..4a09cbf 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -526,7 +526,7 @@ rte_trace_mode rte_trace_mode_get(void)
 
 	/* Add the trace point at tail */
 	STAILQ_INSERT_TAIL(&tp_list, tp, next);
-	__atomic_thread_fence(rte_memory_order_release);
+	__rte_atomic_thread_fence(rte_memory_order_release);
 
 	/* All Good !!! */
 	return 0;
-- 
1.8.3.1



* [PATCH 3/5] hash: use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 1/5] distributor: " Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 2/5] eal: " Tyler Retzlaff
@ 2023-11-02  3:04 ` Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 4/5] ring: " Tyler Retzlaff
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Use __rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/hash/rte_cuckoo_hash.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index b2cf60d..ca350ed 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -871,7 +871,7 @@ struct rte_hash *
 			/* The store to sig_current should not
 			 * move above the store to tbl_chng_cnt.
 			 */
-			__atomic_thread_fence(rte_memory_order_release);
+			__rte_atomic_thread_fence(rte_memory_order_release);
 		}
 
 		/* Need to swap current/alt sig to allow later
@@ -903,7 +903,7 @@ struct rte_hash *
 		/* The store to sig_current should not
 		 * move above the store to tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_release);
+		__rte_atomic_thread_fence(rte_memory_order_release);
 	}
 
 	curr_bkt->sig_current[curr_slot] = sig;
@@ -1396,7 +1396,7 @@ struct rte_hash *
 		/* The loads of sig_current in search_one_bucket
 		 * should not move below the load from tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		__rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* Re-read the table change counter to check if the
 		 * table has changed during search. If yes, re-do
 		 * the search.
@@ -1625,7 +1625,7 @@ struct rte_hash *
 				/* The store to sig_current should
 				 * not move above the store to tbl_chng_cnt.
 				 */
-				__atomic_thread_fence(rte_memory_order_release);
+				__rte_atomic_thread_fence(rte_memory_order_release);
 			}
 			last_bkt->sig_current[i] = NULL_SIGNATURE;
 			rte_atomic_store_explicit(&last_bkt->key_idx[i],
@@ -2216,7 +2216,7 @@ struct rte_hash *
 		/* The loads of sig_current in compare_signatures
 		 * should not move below the load from tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		__rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* Re-read the table change counter to check if the
 		 * table has changed during search. If yes, re-do
 		 * the search.
-- 
1.8.3.1



* [PATCH 4/5] ring: use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
                   ` (2 preceding siblings ...)
  2023-11-02  3:04 ` [PATCH 3/5] hash: " Tyler Retzlaff
@ 2023-11-02  3:04 ` Tyler Retzlaff
  2023-11-02  3:04 ` [PATCH 5/5] stack: " Tyler Retzlaff
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Use __rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/ring/rte_ring_c11_pvt.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ring/rte_ring_c11_pvt.h b/lib/ring/rte_ring_c11_pvt.h
index 5c10ad8..38a003d 100644
--- a/lib/ring/rte_ring_c11_pvt.h
+++ b/lib/ring/rte_ring_c11_pvt.h
@@ -68,7 +68,7 @@
 		n = max;
 
 		/* Ensure the head is read before tail */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		__rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		/* load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
@@ -145,7 +145,7 @@
 		n = max;
 
 		/* Ensure the head is read before tail */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		__rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		/* this load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
-- 
1.8.3.1



* [PATCH 5/5] stack: use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
                   ` (3 preceding siblings ...)
  2023-11-02  3:04 ` [PATCH 4/5] ring: " Tyler Retzlaff
@ 2023-11-02  3:04 ` Tyler Retzlaff
  2023-11-02  7:42 ` [PATCH 0/5] " Morten Brørup
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-02  3:04 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, Tyler Retzlaff

Use __rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
 lib/stack/rte_stack_lf_c11.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/stack/rte_stack_lf_c11.h b/lib/stack/rte_stack_lf_c11.h
index 9cb6998..0f86434 100644
--- a/lib/stack/rte_stack_lf_c11.h
+++ b/lib/stack/rte_stack_lf_c11.h
@@ -110,7 +110,7 @@
 		 * elements are properly ordered with respect to the head
 		 * pointer read.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		__rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		rte_prefetch0(old_head.top);
 
-- 
1.8.3.1



* RE: [PATCH 0/5] use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
                   ` (4 preceding siblings ...)
  2023-11-02  3:04 ` [PATCH 5/5] stack: " Tyler Retzlaff
@ 2023-11-02  7:42 ` Morten Brørup
  2023-11-08 17:04 ` Thomas Monjalon
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
  7 siblings, 0 replies; 19+ messages in thread
From: Morten Brørup @ 2023-11-02  7:42 UTC (permalink / raw)
  To: Tyler Retzlaff, dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang

> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Thursday, 2 November 2023 04.04
> 
> Replace use of __atomic_thread_fence with __rte_atomic_thread_fence.
> 
> It may be appropriate to use rte_atomic_thread_fence instead but it
> will be up to maintainers to evaluate and make the change if
> appropriate.
> 
> Tyler Retzlaff (5):
>   distributor: use rte atomic thread fence
>   eal: use rte atomic thread fence
>   hash: use rte atomic thread fence
>   ring: use rte atomic thread fence
>   stack: use rte atomic thread fence
> 
>  lib/distributor/rte_distributor.c |  2 +-
>  lib/eal/common/eal_common_trace.c |  2 +-
>  lib/hash/rte_cuckoo_hash.c        | 10 +++++-----
>  lib/ring/rte_ring_c11_pvt.h       |  4 ++--
>  lib/stack/rte_stack_lf_c11.h      |  2 +-
>  5 files changed, 10 insertions(+), 10 deletions(-)
> 
> --
> 1.8.3.1

Series-acked-by: Morten Brørup <mb@smartsharesystems.com>



* Re: [PATCH 0/5] use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
                   ` (5 preceding siblings ...)
  2023-11-02  7:42 ` [PATCH 0/5] " Morten Brørup
@ 2023-11-08 17:04 ` Thomas Monjalon
  2023-11-08 18:49   ` Tyler Retzlaff
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
  7 siblings, 1 reply; 19+ messages in thread
From: Thomas Monjalon @ 2023-11-08 17:04 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: dev, Bruce Richardson, David Hunt, Honnappa Nagarahalli,
	Jerin Jacob, Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang

02/11/2023 04:04, Tyler Retzlaff:
> Replace use of __atomic_thread_fence with __rte_atomic_thread_fence.
> 
> It may be appropriate to use rte_atomic_thread_fence instead but it
> will be up to maintainers to evaluate and make the change if appropriate.

I don't understand the use of __rte_atomic_thread_fence
which is supposed to be EAL-internal only, isn't it?

On x86, we have this:
static __rte_always_inline void
rte_atomic_thread_fence(rte_memory_order memorder)
{
    if (memorder == rte_memory_order_seq_cst)
        rte_smp_mb();
    else
        __rte_atomic_thread_fence(memorder);
}

This is skipped if you use __rte_atomic_thread_fence() directly.
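A rough sketch of the difference, assuming the generic
__rte_atomic_thread_fence() is just a thin alias for the compiler fence:

static inline void
fence_seq_cst_example(void)
{
    /* takes the rte_smp_mb() branch above on x86 */
    rte_atomic_thread_fence(rte_memory_order_seq_cst);

    /* bypasses that branch and emits only the compiler fence */
    __rte_atomic_thread_fence(rte_memory_order_seq_cst);
}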




* Re: [PATCH 0/5] use rte atomic thread fence
  2023-11-08 17:04 ` Thomas Monjalon
@ 2023-11-08 18:49   ` Tyler Retzlaff
  2024-02-14 22:40     ` Thomas Monjalon
  0 siblings, 1 reply; 19+ messages in thread
From: Tyler Retzlaff @ 2023-11-08 18:49 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Bruce Richardson, David Hunt, Honnappa Nagarahalli,
	Jerin Jacob, Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang

On Wed, Nov 08, 2023 at 06:04:47PM +0100, Thomas Monjalon wrote:
> 02/11/2023 04:04, Tyler Retzlaff:
> > Replace use of __atomic_thread_fence with __rte_atomic_thread_fence.
> > 
> > It may be appropriate to use rte_atomic_thread_fence instead but it
> > will be up to maintainers to evaluate and make the change if appropriate.
> 
> I don't understand the use of __rte_atomic_thread_fence
> which is supposed to be EAL-internal only, isn't it?
> 
> On x86, we have this:
> static __rte_always_inline void
> rte_atomic_thread_fence(rte_memory_order memorder)
> {
>     if (memorder == rte_memory_order_seq_cst)
>         rte_smp_mb();
>     else
>         __rte_atomic_thread_fence(memorder);
> }
> 
> This is skipped if you use __rte_atomic_thread_fence() directly.

correct. that is on purpose because the original code was already skipping
the condition by using __atomic_thread_fence directly.

this series intends no functional change, which is why the replacements
use __rte_atomic_thread_fence instead of rte_atomic_thread_fence.

i would encourage the maintainers to evaluate whether the code can use
rte_atomic_thread_fence directly without functional regression.
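a rough sketch of why this is expected to be a no-op for the call sites in
this series (they all use acquire or release ordering, and per the x86
definition you quoted only the seq_cst case diverges; this assumes
__rte_atomic_thread_fence is a thin alias for the compiler built-in):

static inline void
fence_release_example(void)
{
    /* original, this series, and the public API are expected to emit
     * the same fence for non-seq_cst orders
     */
    __atomic_thread_fence(rte_memory_order_release);
    __rte_atomic_thread_fence(rte_memory_order_release);
    rte_atomic_thread_fence(rte_memory_order_release);
}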

ty


* Re: [PATCH 0/5] use rte atomic thread fence
  2023-11-08 18:49   ` Tyler Retzlaff
@ 2024-02-14 22:40     ` Thomas Monjalon
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Monjalon @ 2024-02-14 22:40 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: dev, Bruce Richardson, David Hunt, Honnappa Nagarahalli,
	Jerin Jacob, Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, david.marchand,
	Morten Brørup

08/11/2023 19:49, Tyler Retzlaff:
> On Wed, Nov 08, 2023 at 06:04:47PM +0100, Thomas Monjalon wrote:
> > 02/11/2023 04:04, Tyler Retzlaff:
> > > Replace use of __atomic_thread_fence with __rte_atomic_thread_fence.
> > > 
> > > It may be appropriate to use rte_atomic_thread_fence instead but it
> > > will be up to maintainers to evaluate and make the change if appropriate.
> > 
> > I don't understand the use of __rte_atomic_thread_fence
> > which is supposed to be EAL-internal only, isn't it?
> > 
> > On x86, we have this:
> > static __rte_always_inline void
> > rte_atomic_thread_fence(rte_memory_order memorder)
> > {
> >     if (memorder == rte_memory_order_seq_cst)
> >         rte_smp_mb();
> >     else
> >         __rte_atomic_thread_fence(memorder);
> > }
> > 
> > This is skipped if you use __rte_atomic_thread_fence() directly.
> 
> correct. that is on purpose because the original code was already skipping
> the condition by using __atomic_thread_fence directly.

There is a chance that it was not skipping on purpose.

> this series intends no functional change, which is why the replacements
> use __rte_atomic_thread_fence instead of rte_atomic_thread_fence.

We should take this opportunity to simplify the code;
I mean we should avoid having three functions for the same thing.

> i would encourage the maintainers to evaluate whether the code can use
> rte_atomic_thread_fence directly without functional regression.

Let's make the change so the maintainers can review what has changed.
Note it should have no impact if not using rte_memory_order_seq_cst.
If we discover later that something is broken, we have time to fix it.

Note: lib/eal/include/rte_mcslock.h should not use __rte_atomic_thread_fence()
and lib/lpm/rte_lpm.c should not use __atomic_thread_fence().

Can we replace and discourage all occurrences of
__atomic_thread_fence() and __rte_atomic_thread_fence()?
It would be simpler to recommend only rte_atomic_thread_fence().

Also in another patch, it would be nice to replace __ATOMIC_*
with rte_memory_order_*, at least when used in rte_atomic_thread_fence().
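For lib/lpm/rte_lpm.c that would look roughly like this (sketch only):

static inline void
lpm_fence_example(void)
{
    /* current */
    __atomic_thread_fence(__ATOMIC_RELEASE);

    /* suggested replacement */
    rte_atomic_thread_fence(rte_memory_order_release);
}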




* [PATCH v2 0/6] use rte atomic thread fence
  2023-11-02  3:04 [PATCH 0/5] use rte atomic thread fence Tyler Retzlaff
                   ` (6 preceding siblings ...)
  2023-11-08 17:04 ` Thomas Monjalon
@ 2024-02-15  6:50 ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 1/6] distributor: " Tyler Retzlaff
                     ` (6 more replies)
  7 siblings, 7 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Replace use of __atomic_thread_fence with rte_atomic_thread_fence.

Notes:

  The rest of lib/lpm will be converted to rte_atomic in a separate
  series (to be submitted soon).

  There are existing checkpatches checks that catch use of both
  __atomic_thread_fence and __rte_atomic_thread_fence in new
  submissions.

v2:
    * change series to use rte_atomic_thread_fence instead of
      __rte_atomic_thread_fence (internal)
    * also change __atomic_thread_fence in lib/lpm

Tyler Retzlaff (6):
  distributor: use rte atomic thread fence
  eal: use rte atomic thread fence
  hash: use rte atomic thread fence
  ring: use rte atomic thread fence
  stack: use rte atomic thread fence
  lpm: use rte atomic thread fence

 lib/distributor/rte_distributor.c |  2 +-
 lib/eal/common/eal_common_trace.c |  2 +-
 lib/eal/include/rte_mcslock.h     |  4 ++--
 lib/hash/rte_cuckoo_hash.c        | 10 +++++-----
 lib/lpm/rte_lpm.c                 |  4 ++--
 lib/ring/rte_ring_c11_pvt.h       |  4 ++--
 lib/stack/rte_stack_lf_c11.h      |  2 +-
 7 files changed, 14 insertions(+), 14 deletions(-)

-- 
1.8.3.1



* [PATCH v2 1/6] distributor: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 2/6] eal: " Tyler Retzlaff
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/distributor/rte_distributor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/distributor/rte_distributor.c b/lib/distributor/rte_distributor.c
index 2ecb95c..e842dc9 100644
--- a/lib/distributor/rte_distributor.c
+++ b/lib/distributor/rte_distributor.c
@@ -187,7 +187,7 @@
 	}
 
 	/* Sync with distributor to acquire retptrs */
-	__atomic_thread_fence(rte_memory_order_acquire);
+	rte_atomic_thread_fence(rte_memory_order_acquire);
 	for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
 		/* Switch off the return bit first */
 		buf->retptr64[i] = 0;
-- 
1.8.3.1



* [PATCH v2 2/6] eal: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 1/6] distributor: " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 3/6] hash: " Tyler Retzlaff
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Update rte_mcslock.h to use rte_atomic_thread_fence instead of
directly using the internal __rte_atomic_thread_fence.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/eal/common/eal_common_trace.c | 2 +-
 lib/eal/include/rte_mcslock.h     | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index 6ad87fc..918f49b 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -526,7 +526,7 @@ rte_trace_mode rte_trace_mode_get(void)
 
 	/* Add the trace point at tail */
 	STAILQ_INSERT_TAIL(&tp_list, tp, next);
-	__atomic_thread_fence(rte_memory_order_release);
+	rte_atomic_thread_fence(rte_memory_order_release);
 
 	/* All Good !!! */
 	return 0;
diff --git a/lib/eal/include/rte_mcslock.h b/lib/eal/include/rte_mcslock.h
index 2ca967f..0aeb1a0 100644
--- a/lib/eal/include/rte_mcslock.h
+++ b/lib/eal/include/rte_mcslock.h
@@ -83,7 +83,7 @@
 	 * store to prev->next. Otherwise it will cause a deadlock. Need a
 	 * store-load barrier.
 	 */
-	__rte_atomic_thread_fence(rte_memory_order_acq_rel);
+	rte_atomic_thread_fence(rte_memory_order_acq_rel);
 	/* If the lock has already been acquired, it first atomically
 	 * places the node at the end of the queue and then proceeds
 	 * to spin on me->locked until the previous lock holder resets
@@ -117,7 +117,7 @@
 		 * while-loop first. This has the potential to cause a
 		 * deadlock. Need a load barrier.
 		 */
-		__rte_atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* More nodes added to the queue by other CPUs.
 		 * Wait until the next pointer is set.
 		 */
-- 
1.8.3.1



* [PATCH v2 3/6] hash: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 1/6] distributor: " Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 2/6] eal: " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 4/6] ring: " Tyler Retzlaff
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/hash/rte_cuckoo_hash.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index 7045675..9cf9464 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -878,7 +878,7 @@ struct rte_hash *
 			/* The store to sig_current should not
 			 * move above the store to tbl_chng_cnt.
 			 */
-			__atomic_thread_fence(rte_memory_order_release);
+			rte_atomic_thread_fence(rte_memory_order_release);
 		}
 
 		/* Need to swap current/alt sig to allow later
@@ -910,7 +910,7 @@ struct rte_hash *
 		/* The store to sig_current should not
 		 * move above the store to tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_release);
+		rte_atomic_thread_fence(rte_memory_order_release);
 	}
 
 	curr_bkt->sig_current[curr_slot] = sig;
@@ -1403,7 +1403,7 @@ struct rte_hash *
 		/* The loads of sig_current in search_one_bucket
 		 * should not move below the load from tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* Re-read the table change counter to check if the
 		 * table has changed during search. If yes, re-do
 		 * the search.
@@ -1632,7 +1632,7 @@ struct rte_hash *
 				/* The store to sig_current should
 				 * not move above the store to tbl_chng_cnt.
 				 */
-				__atomic_thread_fence(rte_memory_order_release);
+				rte_atomic_thread_fence(rte_memory_order_release);
 			}
 			last_bkt->sig_current[i] = NULL_SIGNATURE;
 			rte_atomic_store_explicit(&last_bkt->key_idx[i],
@@ -2223,7 +2223,7 @@ struct rte_hash *
 		/* The loads of sig_current in compare_signatures
 		 * should not move below the load from tbl_chng_cnt.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 		/* Re-read the table change counter to check if the
 		 * table has changed during search. If yes, re-do
 		 * the search.
-- 
1.8.3.1



* [PATCH v2 4/6] ring: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
                     ` (2 preceding siblings ...)
  2024-02-15  6:50   ` [PATCH v2 3/6] hash: " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 5/6] stack: " Tyler Retzlaff
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/ring/rte_ring_c11_pvt.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/ring/rte_ring_c11_pvt.h b/lib/ring/rte_ring_c11_pvt.h
index 5c10ad8..629b2d9 100644
--- a/lib/ring/rte_ring_c11_pvt.h
+++ b/lib/ring/rte_ring_c11_pvt.h
@@ -68,7 +68,7 @@
 		n = max;
 
 		/* Ensure the head is read before tail */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		/* load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
@@ -145,7 +145,7 @@
 		n = max;
 
 		/* Ensure the head is read before tail */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		/* this load-acquire synchronize with store-release of ht->tail
 		 * in update_tail.
-- 
1.8.3.1



* [PATCH v2 5/6] stack: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
                     ` (3 preceding siblings ...)
  2024-02-15  6:50   ` [PATCH v2 4/6] ring: " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-15  6:50   ` [PATCH v2 6/6] lpm: " Tyler Retzlaff
  2024-02-18  3:23   ` [PATCH v2 0/6] " fengchengwen
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/stack/rte_stack_lf_c11.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/stack/rte_stack_lf_c11.h b/lib/stack/rte_stack_lf_c11.h
index 9cb6998..60d46e9 100644
--- a/lib/stack/rte_stack_lf_c11.h
+++ b/lib/stack/rte_stack_lf_c11.h
@@ -110,7 +110,7 @@
 		 * elements are properly ordered with respect to the head
 		 * pointer read.
 		 */
-		__atomic_thread_fence(rte_memory_order_acquire);
+		rte_atomic_thread_fence(rte_memory_order_acquire);
 
 		rte_prefetch0(old_head.top);
 
-- 
1.8.3.1



* [PATCH v2 6/6] lpm: use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
                     ` (4 preceding siblings ...)
  2024-02-15  6:50   ` [PATCH v2 5/6] stack: " Tyler Retzlaff
@ 2024-02-15  6:50   ` Tyler Retzlaff
  2024-02-18  3:23   ` [PATCH v2 0/6] " fengchengwen
  6 siblings, 0 replies; 19+ messages in thread
From: Tyler Retzlaff @ 2024-02-15  6:50 UTC (permalink / raw)
  To: dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas, Tyler Retzlaff

Use rte_atomic_thread_fence instead of directly using the
__atomic_thread_fence GCC built-in intrinsic.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
 lib/lpm/rte_lpm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/lpm/rte_lpm.c b/lib/lpm/rte_lpm.c
index 363058e..9633d63 100644
--- a/lib/lpm/rte_lpm.c
+++ b/lib/lpm/rte_lpm.c
@@ -1116,7 +1116,7 @@ struct rte_lpm *
 		 * Prevent the free of the tbl8 group from hoisting.
 		 */
 		i_lpm->lpm.tbl24[tbl24_index].valid = 0;
-		__atomic_thread_fence(__ATOMIC_RELEASE);
+		rte_atomic_thread_fence(rte_memory_order_release);
 		status = tbl8_free(i_lpm, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
@@ -1132,7 +1132,7 @@ struct rte_lpm *
 		 */
 		__atomic_store(&i_lpm->lpm.tbl24[tbl24_index], &new_tbl24_entry,
 				__ATOMIC_RELAXED);
-		__atomic_thread_fence(__ATOMIC_RELEASE);
+		rte_atomic_thread_fence(rte_memory_order_release);
 		status = tbl8_free(i_lpm, tbl8_group_start);
 	}
 #undef group_idx
-- 
1.8.3.1



* Re: [PATCH v2 0/6] use rte atomic thread fence
  2024-02-15  6:50 ` [PATCH v2 0/6] " Tyler Retzlaff
                     ` (5 preceding siblings ...)
  2024-02-15  6:50   ` [PATCH v2 6/6] lpm: " Tyler Retzlaff
@ 2024-02-18  3:23   ` fengchengwen
  2024-02-18 12:18     ` Thomas Monjalon
  6 siblings, 1 reply; 19+ messages in thread
From: fengchengwen @ 2024-02-18  3:23 UTC (permalink / raw)
  To: Tyler Retzlaff, dev
  Cc: Bruce Richardson, David Hunt, Honnappa Nagarahalli, Jerin Jacob,
	Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, thomas

Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>

On 2024/2/15 14:50, Tyler Retzlaff wrote:
> Replace use of __atomic_thread_fence with rte_atomic_thread_fence.
> 
> Notes:
> 
>   The rest of lib/lpm will be converted to rte_atomic in a separate
>   series (to be submitted soon).
> 
>   There are existing checkpatches checks that catch use of both
>   __atomic_thread_fence and __rte_atomic_thread_fence in new
>   submissions.
> 
> v2:
>     * change series to use rte_atomic_thread_fence instead of
>       __rte_atomic_thread_fence (internal)
>     * also change __atomic_thread_fence in lib/lpm
> 
> Tyler Retzlaff (6):
>   distributor: use rte atomic thread fence
>   eal: use rte atomic thread fence
>   hash: use rte atomic thread fence
>   ring: use rte atomic thread fence
>   stack: use rte atomic thread fence
>   lpm: use rte atomic thread fence
> 
>  lib/distributor/rte_distributor.c |  2 +-
>  lib/eal/common/eal_common_trace.c |  2 +-
>  lib/eal/include/rte_mcslock.h     |  4 ++--
>  lib/hash/rte_cuckoo_hash.c        | 10 +++++-----
>  lib/lpm/rte_lpm.c                 |  4 ++--
>  lib/ring/rte_ring_c11_pvt.h       |  4 ++--
>  lib/stack/rte_stack_lf_c11.h      |  2 +-
>  7 files changed, 14 insertions(+), 14 deletions(-)
> 


* Re: [PATCH v2 0/6] use rte atomic thread fence
  2024-02-18  3:23   ` [PATCH v2 0/6] " fengchengwen
@ 2024-02-18 12:18     ` Thomas Monjalon
  0 siblings, 0 replies; 19+ messages in thread
From: Thomas Monjalon @ 2024-02-18 12:18 UTC (permalink / raw)
  To: Tyler Retzlaff
  Cc: dev, Bruce Richardson, David Hunt, Honnappa Nagarahalli,
	Jerin Jacob, Konstantin Ananyev, Sameh Gobriel, Sunil Kumar Kori,
	Vladimir Medvedkin, Yipeng Wang, mb, fengchengwen

> > Replace use of __atomic_thread_fence with rte_atomic_thread_fence.
> > 
> > Notes:
> > 
> >   The rest of lib/lpm will be converted to rte_atomic in a separate
> >   series (to be submitted soon).
> > 
> >   There are existing checkpatches checks that catch use of both
> >   __atomic_thread_fence and __rte_atomic_thread_fence in new
> >   submissions.
> > 
> > v2:
> >     * change series to use rte_atomic_thread_fence instead of
> >       __rte_atomic_thread_fence (internal)
> >     * also change __atomic_thread_fence in lib/lpm
> > 
> > Tyler Retzlaff (6):
> >   distributor: use rte atomic thread fence
> >   eal: use rte atomic thread fence
> >   hash: use rte atomic thread fence
> >   ring: use rte atomic thread fence
> >   stack: use rte atomic thread fence
> >   lpm: use rte atomic thread fence
> 
> Series-acked-by: Chengwen Feng <fengchengwen@huawei.com>

Acked-by: Thomas Monjalon <thomas@monjalon.net>

Squashed and applied, thanks.



