patches for DPDK stable branches
* [dpdk-stable] [PATCH v2 5/5] eal: fix clang compilation error on x86
       [not found]   ` <20181220174229.5834-1-gavin.hu@arm.com>
@ 2018-12-20 17:42     ` Gavin Hu
  0 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2018-12-20 17:42 UTC (permalink / raw)
  To: dev
  Cc: thomas, bruce.richardson, jerinj, hemant.agrawal, ferruh.yigit,
	Honnappa.Nagarahalli, nd, Gavin Hu, stable

When CONFIG_RTE_FORCE_INTRINSICS is enabled for x86, the clang
compilation error was:
	include/generic/rte_atomic.h:215:9: error:
		implicit declaration of function '__atomic_exchange_2'
		is invalid in C99
	include/generic/rte_atomic.h:494:9: error:
		implicit declaration of function '__atomic_exchange_4'
		is invalid in C99
	include/generic/rte_atomic.h:772:9: error:
		implicit declaration of function '__atomic_exchange_8'
		is invalid in C99

Use __atomic_exchange_n instead of __atomic_exchange_(2/4/8).
For more information, please refer to:
http://mails.dpdk.org/archives/dev/2018-April/096776.html

Fixes: 7bdccb93078e ("eal: fix ARM build with clang")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
---
 lib/librte_eal/common/include/generic/rte_atomic.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index b99ba4688..ed5b125b3 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -212,7 +212,7 @@ rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val);
 static inline uint16_t
 rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
@@ -495,7 +495,7 @@ rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val);
 static inline uint32_t
 rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
@@ -777,7 +777,7 @@ rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val);
 static inline uint64_t
 rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
-- 
2.11.0

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-stable] [PATCH v4 1/4] eal: fix clang compilation error on x86
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
       [not found]   ` <20181220174229.5834-1-gavin.hu@arm.com>
@ 2019-01-15  7:54   ` gavin hu
  2019-01-15 10:32   ` [dpdk-stable] [PATCH v5 " gavin hu
                     ` (9 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: gavin hu @ 2019-01-15  7:54 UTC (permalink / raw)
  To: dev
  Cc: thomas, jerinj, hemant.agrawal, stephen, Honnappa.Nagarahalli,
	gavin.hu, nd, stable

From: Gavin Hu <gavin.hu@arm.com>

When CONFIG_RTE_FORCE_INTRINSICS is enabled for x86, the clang
compilation error was:
	include/generic/rte_atomic.h:215:9: error:
		implicit declaration of function '__atomic_exchange_2'
		is invalid in C99
	include/generic/rte_atomic.h:494:9: error:
		implicit declaration of function '__atomic_exchange_4'
		is invalid in C99
	include/generic/rte_atomic.h:772:9: error:
		implicit declaration of function '__atomic_exchange_8'
		is invalid in C99

Use __atomic_exchange_n instead of __atomic_exchange_(2/4/8).
For more information, please refer to:
http://mails.dpdk.org/archives/dev/2018-April/096776.html

Fixes: 7bdccb93078e ("eal: fix ARM build with clang")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 lib/librte_eal/common/include/generic/rte_atomic.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index b99ba46..ed5b125 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -212,7 +212,7 @@ rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val);
 static inline uint16_t
 rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
@@ -495,7 +495,7 @@ rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val);
 static inline uint32_t
 rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
@@ -777,7 +777,7 @@ rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val);
 static inline uint64_t
 rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
-- 
2.7.4

* [dpdk-stable] [PATCH v5 1/4] eal: fix clang compilation error on x86
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
       [not found]   ` <20181220174229.5834-1-gavin.hu@arm.com>
  2019-01-15  7:54   ` [dpdk-stable] [PATCH v4 1/4] " gavin hu
@ 2019-01-15 10:32   ` gavin hu
  2019-01-15 17:42     ` Honnappa Nagarahalli
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 1/3] test/spinlock: delay 1 us to create contention Gavin Hu
                     ` (8 subsequent siblings)
  11 siblings, 1 reply; 17+ messages in thread
From: gavin hu @ 2019-01-15 10:32 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, Honnappa.Nagarahalli,
	gavin.hu, olivier.matz, bruce.richardson, stable

From: Gavin Hu <gavin.hu@arm.com>

When CONFIG_RTE_FORCE_INTRINSICS is enabled for x86, the clang
compilation error was:
	include/generic/rte_atomic.h:215:9: error:
		implicit declaration of function '__atomic_exchange_2'
		is invalid in C99
	include/generic/rte_atomic.h:494:9: error:
		implicit declaration of function '__atomic_exchange_4'
		is invalid in C99
	include/generic/rte_atomic.h:772:9: error:
		implicit declaration of function '__atomic_exchange_8'
		is invalid in C99

Use __atomic_exchange_n instead of __atomic_exchange_(2/4/8).
For more information, please refer to:
http://mails.dpdk.org/archives/dev/2018-April/096776.html

Fixes: 7bdccb93078e ("eal: fix ARM build with clang")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 lib/librte_eal/common/include/generic/rte_atomic.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index b99ba46..ed5b125 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -212,7 +212,7 @@ rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val);
 static inline uint16_t
 rte_atomic16_exchange(volatile uint16_t *dst, uint16_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST);
@@ -495,7 +495,7 @@ rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val);
 static inline uint32_t
 rte_atomic32_exchange(volatile uint32_t *dst, uint32_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST);
@@ -777,7 +777,7 @@ rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val);
 static inline uint64_t
 rte_atomic64_exchange(volatile uint64_t *dst, uint64_t val)
 {
-#if defined(RTE_ARCH_ARM64) && defined(RTE_TOOLCHAIN_CLANG)
+#if defined(RTE_TOOLCHAIN_CLANG)
 	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);
 #else
 	return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
-- 
2.7.4

* Re: [dpdk-stable] [PATCH v5 1/4] eal: fix clang compilation error on x86
  2019-01-15 10:32   ` [dpdk-stable] [PATCH v5 " gavin hu
@ 2019-01-15 17:42     ` Honnappa Nagarahalli
  0 siblings, 0 replies; 17+ messages in thread
From: Honnappa Nagarahalli @ 2019-01-15 17:42 UTC (permalink / raw)
  To: Gavin Hu (Arm Technology China), dev
  Cc: nd, thomas, jerinj, hemant.agrawal,
	Gavin Hu (Arm Technology China),
	olivier.matz, bruce.richardson, stable, nd

> 
> From: Gavin Hu <gavin.hu@arm.com>
> 
> When CONFIG_RTE_FORCE_INTRINSICS is enabled for x86, the clang
> compilation error was:
> 	include/generic/rte_atomic.h:215:9: error:
> 		implicit declaration of function '__atomic_exchange_2'
> 		is invalid in C99
> 	include/generic/rte_atomic.h:494:9: error:
> 		implicit declaration of function '__atomic_exchange_4'
> 		is invalid in C99
> 	include/generic/rte_atomic.h:772:9: error:
> 		implicit declaration of function '__atomic_exchange_8'
> 		is invalid in C99
> 
> Use __atomic_exchange_n instead of __atomic_exchange_(2/4/8).
> For more information, please refer to:
> http://mails.dpdk.org/archives/dev/2018-April/096776.html
> 
> Fixes: 7bdccb93078e ("eal: fix ARM build with clang")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Gavin Hu <gavin.hu@arm.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>  lib/librte_eal/common/include/generic/rte_atomic.h | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h
> b/lib/librte_eal/common/include/generic/rte_atomic.h
> index b99ba46..ed5b125 100644
> --- a/lib/librte_eal/common/include/generic/rte_atomic.h
> +++ b/lib/librte_eal/common/include/generic/rte_atomic.h
> @@ -212,7 +212,7 @@ rte_atomic16_exchange(volatile uint16_t *dst,
> uint16_t val);  static inline uint16_t  rte_atomic16_exchange(volatile
> uint16_t *dst, uint16_t val)  { -#if defined(RTE_ARCH_ARM64) &&
> defined(RTE_TOOLCHAIN_CLANG)
> +#if defined(RTE_TOOLCHAIN_CLANG)
Please check http://mails.dpdk.org/archives/dev/2019-January/123331.html
This needs to be changed to (__clang__). The same applies to the other similar changes here.

>  	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);  #else
>  	return __atomic_exchange_2(dst, val, __ATOMIC_SEQ_CST); @@ -
> 495,7 +495,7 @@ rte_atomic32_exchange(volatile uint32_t *dst, uint32_t
> val);  static inline uint32_t  rte_atomic32_exchange(volatile uint32_t *dst,
> uint32_t val)  { -#if defined(RTE_ARCH_ARM64) &&
> defined(RTE_TOOLCHAIN_CLANG)
> +#if defined(RTE_TOOLCHAIN_CLANG)
>  	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);  #else
>  	return __atomic_exchange_4(dst, val, __ATOMIC_SEQ_CST); @@ -
> 777,7 +777,7 @@ rte_atomic64_exchange(volatile uint64_t *dst, uint64_t
> val);  static inline uint64_t  rte_atomic64_exchange(volatile uint64_t *dst,
> uint64_t val)  { -#if defined(RTE_ARCH_ARM64) &&
> defined(RTE_TOOLCHAIN_CLANG)
> +#if defined(RTE_TOOLCHAIN_CLANG)
>  	return __atomic_exchange_n(dst, val, __ATOMIC_SEQ_CST);  #else
>  	return __atomic_exchange_8(dst, val, __ATOMIC_SEQ_CST);
> --
> 2.7.4

* [dpdk-stable] [PATCH v6 1/3] test/spinlock: delay 1 us to create contention
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (2 preceding siblings ...)
  2019-01-15 10:32   ` [dpdk-stable] [PATCH v5 " gavin hu
@ 2019-03-08  7:16   ` Gavin Hu
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
                     ` (7 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:16 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

Quickly taking and releasing the spinlock cannot produce contention;
delaying 1 us creates contention stress, which helps to show the real
performance of the spinlock implementation.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <Ruifeng.Wang@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 app/test/test_spinlock.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 73bff12..6795195 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -120,8 +120,6 @@ load_loop_fn(void *func_param)
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		/* delay to make lock duty cycle slighlty realistic */
-		rte_delay_us(1);
 		time_diff = rte_get_timer_cycles() - begin;
 	}
 	lock_count[lcore] = lcount;
-- 
2.7.4

* [dpdk-stable] [PATCH v6 2/3] test/spinlock: amortize the cost of getting time
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (3 preceding siblings ...)
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 1/3] test/spinlock: delay 1 us to create contention Gavin Hu
@ 2019-03-08  7:16   ` Gavin Hu
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
                     ` (6 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:16 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

Instead of getting timestamps per iteration, amortizing their overhead
over the whole loop helps produce more precise benchmarking results.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: ruifeng wang <ruifeng.wang@arm.com>
Reviewed-by: honnappa nagarahalli <honnappa.nagarahalli@arm.com>
---
 app/test/test_spinlock.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 6795195..6ac7495 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -96,16 +96,16 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
 }
 
 static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
-static uint64_t lock_count[RTE_MAX_LCORE] = {0};
+static uint64_t time_count[RTE_MAX_LCORE] = {0};
 
-#define TIME_MS 100
+#define MAX_LOOP 10000
 
 static int
 load_loop_fn(void *func_param)
 {
 	uint64_t time_diff = 0, begin;
 	uint64_t hz = rte_get_timer_hz();
-	uint64_t lcount = 0;
+	volatile uint64_t lcount = 0;
 	const int use_lock = *(int*)func_param;
 	const unsigned lcore = rte_lcore_id();
 
@@ -114,15 +114,15 @@ load_loop_fn(void *func_param)
 		while (rte_atomic32_read(&synchro) == 0);
 
 	begin = rte_get_timer_cycles();
-	while (time_diff < hz * TIME_MS / 1000) {
+	while (lcount < MAX_LOOP) {
 		if (use_lock)
 			rte_spinlock_lock(&lk);
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		time_diff = rte_get_timer_cycles() - begin;
 	}
-	lock_count[lcore] = lcount;
+	time_diff = rte_get_timer_cycles() - begin;
+	time_count[lcore] = time_diff * 1000000 / hz;
 	return 0;
 }
 
@@ -136,14 +136,16 @@ test_spinlock_perf(void)
 
 	printf("\nTest with no lock on single core...\n");
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on single core...\n");
 	lock = 1;
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on %u cores...\n", rte_lcore_count());
 
@@ -158,11 +160,12 @@ test_spinlock_perf(void)
 	rte_eal_mp_wait_lcore();
 
 	RTE_LCORE_FOREACH(i) {
-		printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]);
-		total += lock_count[i];
+		printf("Core [%u] Cost Time = %"PRIu64" us\n", i,
+						time_count[i]);
+		total += time_count[i];
 	}
 
-	printf("Total count = %"PRIu64"\n", total);
+	printf("Total Cost Time = %"PRIu64" us\n", total);
 
 	return 0;
 }
-- 
2.7.4

* [dpdk-stable] [PATCH v6 3/3] spinlock: reimplement with atomic one-way barrier builtins
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (4 preceding siblings ...)
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
@ 2019-03-08  7:16   ` Gavin Hu
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
                     ` (5 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:16 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

The __sync builtin based implementation generates full memory barriers
('dmb ish') on Arm platforms. Use C11 atomic builtins to generate
one-way barriers instead.

Here is the assembly code of __sync_compare_and_swap builtin.
__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
   0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
   0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
   0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
   0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
   0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
   0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
<rte_atomic16_cmpset+52>  // b.any
   0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
   0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
<rte_atomic16_cmpset+32>
   0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
   0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none

The benchmarking results showed consistent improvements on all available
platforms:
1. Cavium ThunderX2: 126% performance;
2. Hisilicon 1616: 30%;
3. Qualcomm Falkor: 13%;
4. Marvell ARMADA 8040 with A72 cores on macchiatobin: 3.7%

Here is the example test result on TX2:
$sudo ./build/app/test -l 16-27 -- i
RTE>>spinlock_autotest

*** spinlock_autotest without this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 53886 us
Core [17] Cost Time = 53605 us
Core [18] Cost Time = 53163 us
Core [19] Cost Time = 49419 us
Core [20] Cost Time = 34317 us
Core [21] Cost Time = 53408 us
Core [22] Cost Time = 53970 us
Core [23] Cost Time = 53930 us
Core [24] Cost Time = 53283 us
Core [25] Cost Time = 51504 us
Core [26] Cost Time = 50718 us
Core [27] Cost Time = 51730 us
Total Cost Time = 612933 us

*** spinlock_autotest with this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 18808 us
Core [17] Cost Time = 29497 us
Core [18] Cost Time = 29132 us
Core [19] Cost Time = 26150 us
Core [20] Cost Time = 21892 us
Core [21] Cost Time = 24377 us
Core [22] Cost Time = 27211 us
Core [23] Cost Time = 11070 us
Core [24] Cost Time = 29802 us
Core [25] Cost Time = 15793 us
Core [26] Cost Time = 7474 us
Core [27] Cost Time = 29550 us
Total Cost Time = 270756 us

In the tests on ThunderX2, with more cores contending, the performance gain
was even higher, indicating the __atomic implementation scales better than
__sync.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Reviewed-by: Steve Capper <Steve.Capper@arm.com>
---
 lib/librte_eal/common/include/generic/rte_spinlock.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c4c3fc3..87ae7a4 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
+	int exp = 0;
+
+	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
 			rte_pause();
+		exp = 0;
+	}
 }
 #endif
 
@@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-	__sync_lock_release(&sl->locked);
+	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
 #endif
 
@@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-	return __sync_lock_test_and_set(&sl->locked,1) == 0;
+	int exp = 0;
+	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
+				0, /* disallow spurious failure */
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 }
 #endif
 
@@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
  */
 static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
 {
-	return sl->locked;
+	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
 }
 
 /**
-- 
2.7.4

* [dpdk-stable] [PATCH v7 1/3] test/spinlock: remove 1us delay for correct benchmarking
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (5 preceding siblings ...)
  2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
@ 2019-03-08  7:37   ` Gavin Hu
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
                     ` (4 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:37 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

The test benchmarks spinlock performance by counting the number of
spinlock acquire and release operations within the specified time.
A typical pair of lock and unlock operations costs tens or hundreds of
nanoseconds; compared to this, delaying 1 us outside of the locked
region is far too long, compromising the goal of benchmarking the lock
and unlock performance.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Change-Id: I7cc025e76082bb84de3d7cd5002e850f89b30eae
Jira: ENTNET-1047
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <Ruifeng.Wang@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 app/test/test_spinlock.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 73bff12..6795195 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -120,8 +120,6 @@ load_loop_fn(void *func_param)
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		/* delay to make lock duty cycle slighlty realistic */
-		rte_delay_us(1);
 		time_diff = rte_get_timer_cycles() - begin;
 	}
 	lock_count[lcore] = lcount;
-- 
2.7.4

* [dpdk-stable] [PATCH v7 2/3] test/spinlock: amortize the cost of getting time
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (6 preceding siblings ...)
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
@ 2019-03-08  7:37   ` Gavin Hu
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
                     ` (3 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:37 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

Instead of getting timestamps per iteration, amortizing their overhead
over the whole loop helps produce more precise benchmarking results.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Change-Id: I5460f585937f65772c2eabe9ebc3d23a682e8af2
Jira: ENTNET-1047
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: ruifeng wang <ruifeng.wang@arm.com>
Reviewed-by: honnappa nagarahalli <honnappa.nagarahalli@arm.com>
---
 app/test/test_spinlock.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 6795195..6ac7495 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -96,16 +96,16 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
 }
 
 static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
-static uint64_t lock_count[RTE_MAX_LCORE] = {0};
+static uint64_t time_count[RTE_MAX_LCORE] = {0};
 
-#define TIME_MS 100
+#define MAX_LOOP 10000
 
 static int
 load_loop_fn(void *func_param)
 {
 	uint64_t time_diff = 0, begin;
 	uint64_t hz = rte_get_timer_hz();
-	uint64_t lcount = 0;
+	volatile uint64_t lcount = 0;
 	const int use_lock = *(int*)func_param;
 	const unsigned lcore = rte_lcore_id();
 
@@ -114,15 +114,15 @@ load_loop_fn(void *func_param)
 		while (rte_atomic32_read(&synchro) == 0);
 
 	begin = rte_get_timer_cycles();
-	while (time_diff < hz * TIME_MS / 1000) {
+	while (lcount < MAX_LOOP) {
 		if (use_lock)
 			rte_spinlock_lock(&lk);
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		time_diff = rte_get_timer_cycles() - begin;
 	}
-	lock_count[lcore] = lcount;
+	time_diff = rte_get_timer_cycles() - begin;
+	time_count[lcore] = time_diff * 1000000 / hz;
 	return 0;
 }
 
@@ -136,14 +136,16 @@ test_spinlock_perf(void)
 
 	printf("\nTest with no lock on single core...\n");
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on single core...\n");
 	lock = 1;
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on %u cores...\n", rte_lcore_count());
 
@@ -158,11 +160,12 @@ test_spinlock_perf(void)
 	rte_eal_mp_wait_lcore();
 
 	RTE_LCORE_FOREACH(i) {
-		printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]);
-		total += lock_count[i];
+		printf("Core [%u] Cost Time = %"PRIu64" us\n", i,
+						time_count[i]);
+		total += time_count[i];
 	}
 
-	printf("Total count = %"PRIu64"\n", total);
+	printf("Total Cost Time = %"PRIu64" us\n", total);
 
 	return 0;
 }
-- 
2.7.4

* [dpdk-stable] [PATCH v7 3/3] spinlock: reimplement with atomic one-way barrier builtins
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (7 preceding siblings ...)
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
@ 2019-03-08  7:37   ` Gavin Hu
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
                     ` (2 subsequent siblings)
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:37 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

The __sync builtin based implementation generates full memory barriers
('dmb ish') on Arm platforms. Use C11 atomic builtins to generate
one-way barriers instead.

Here is the assembly code of __sync_compare_and_swap builtin.
__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
   0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
   0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
   0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
   0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
   0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
   0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
<rte_atomic16_cmpset+52>  // b.any
   0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
   0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
<rte_atomic16_cmpset+32>
   0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
   0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none

The benchmarking results showed consistent improvements on all available
platforms:
1. Cavium ThunderX2: 126% performance;
2. Hisilicon 1616: 30%;
3. Qualcomm Falkor: 13%;
4. Marvell ARMADA 8040 with A72 cores on macchiatobin: 3.7%

Here is the example test result on TX2:
$sudo ./build/app/test -l 16-27 -- i
RTE>>spinlock_autotest

*** spinlock_autotest without this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 53886 us
Core [17] Cost Time = 53605 us
Core [18] Cost Time = 53163 us
Core [19] Cost Time = 49419 us
Core [20] Cost Time = 34317 us
Core [21] Cost Time = 53408 us
Core [22] Cost Time = 53970 us
Core [23] Cost Time = 53930 us
Core [24] Cost Time = 53283 us
Core [25] Cost Time = 51504 us
Core [26] Cost Time = 50718 us
Core [27] Cost Time = 51730 us
Total Cost Time = 612933 us

*** spinlock_autotest with this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 18808 us
Core [17] Cost Time = 29497 us
Core [18] Cost Time = 29132 us
Core [19] Cost Time = 26150 us
Core [20] Cost Time = 21892 us
Core [21] Cost Time = 24377 us
Core [22] Cost Time = 27211 us
Core [23] Cost Time = 11070 us
Core [24] Cost Time = 29802 us
Core [25] Cost Time = 15793 us
Core [26] Cost Time = 7474 us
Core [27] Cost Time = 29550 us
Total Cost Time = 270756 us

In the tests on ThunderX2, with more cores contending, the performance gain
was even higher, indicating the __atomic implementation scales better than
__sync.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Change-Id: Ibe82c1fa53a8409584aa84c1a2b4499aae8c5b4d
Jira: ENTNET-1047
Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Reviewed-by: Steve Capper <Steve.Capper@arm.com>
---
 lib/librte_eal/common/include/generic/rte_spinlock.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c4c3fc3..87ae7a4 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
+	int exp = 0;
+
+	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
 			rte_pause();
+		exp = 0;
+	}
 }
 #endif
 
@@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-	__sync_lock_release(&sl->locked);
+	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
 #endif
 
@@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-	return __sync_lock_test_and_set(&sl->locked,1) == 0;
+	int exp = 0;
+	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
+				0, /* disallow spurious failure */
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 }
 #endif
 
@@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
  */
 static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
 {
-	return sl->locked;
+	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
 }
 
 /**
-- 
2.7.4

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [dpdk-stable] [PATCH v8 1/3] test/spinlock: remove 1us delay for correct benchmarking
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (8 preceding siblings ...)
  2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
@ 2019-03-08  7:56   ` Gavin Hu
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:56 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

The test benchmarks spinlock performance by counting the number of
spinlock acquire and release operations completed within a specified
time.
A typical pair of lock and unlock operations costs tens to hundreds of
nanoseconds; in comparison, delaying 1 us outside of the locked region
is far too long, compromising the goal of benchmarking lock and unlock
performance.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang <Ruifeng.Wang@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
 app/test/test_spinlock.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 73bff12..6795195 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -120,8 +120,6 @@ load_loop_fn(void *func_param)
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		/* delay to make lock duty cycle slighlty realistic */
-		rte_delay_us(1);
 		time_diff = rte_get_timer_cycles() - begin;
 	}
 	lock_count[lcore] = lcount;
-- 
2.7.4


* [dpdk-stable] [PATCH v8 2/3] test/spinlock: amortize the cost of getting time
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (9 preceding siblings ...)
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
@ 2019-03-08  7:56   ` Gavin Hu
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
  11 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:56 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

Amortizing the overhead of getting timestamps, instead of reading them
on every iteration, helps produce more precise benchmarking results.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Joyce Kong <Joyce.Kong@arm.com>
Reviewed-by: ruifeng wang <ruifeng.wang@arm.com>
Reviewed-by: honnappa nagarahalli <honnappa.nagarahalli@arm.com>
---
 app/test/test_spinlock.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/app/test/test_spinlock.c b/app/test/test_spinlock.c
index 6795195..6ac7495 100644
--- a/app/test/test_spinlock.c
+++ b/app/test/test_spinlock.c
@@ -96,16 +96,16 @@ test_spinlock_recursive_per_core(__attribute__((unused)) void *arg)
 }
 
 static rte_spinlock_t lk = RTE_SPINLOCK_INITIALIZER;
-static uint64_t lock_count[RTE_MAX_LCORE] = {0};
+static uint64_t time_count[RTE_MAX_LCORE] = {0};
 
-#define TIME_MS 100
+#define MAX_LOOP 10000
 
 static int
 load_loop_fn(void *func_param)
 {
 	uint64_t time_diff = 0, begin;
 	uint64_t hz = rte_get_timer_hz();
-	uint64_t lcount = 0;
+	volatile uint64_t lcount = 0;
 	const int use_lock = *(int*)func_param;
 	const unsigned lcore = rte_lcore_id();
 
@@ -114,15 +114,15 @@ load_loop_fn(void *func_param)
 		while (rte_atomic32_read(&synchro) == 0);
 
 	begin = rte_get_timer_cycles();
-	while (time_diff < hz * TIME_MS / 1000) {
+	while (lcount < MAX_LOOP) {
 		if (use_lock)
 			rte_spinlock_lock(&lk);
 		lcount++;
 		if (use_lock)
 			rte_spinlock_unlock(&lk);
-		time_diff = rte_get_timer_cycles() - begin;
 	}
-	lock_count[lcore] = lcount;
+	time_diff = rte_get_timer_cycles() - begin;
+	time_count[lcore] = time_diff * 1000000 / hz;
 	return 0;
 }
 
@@ -136,14 +136,16 @@ test_spinlock_perf(void)
 
 	printf("\nTest with no lock on single core...\n");
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on single core...\n");
 	lock = 1;
 	load_loop_fn(&lock);
-	printf("Core [%u] count = %"PRIu64"\n", lcore, lock_count[lcore]);
-	memset(lock_count, 0, sizeof(lock_count));
+	printf("Core [%u] Cost Time = %"PRIu64" us\n", lcore,
+						time_count[lcore]);
+	memset(time_count, 0, sizeof(time_count));
 
 	printf("\nTest with lock on %u cores...\n", rte_lcore_count());
 
@@ -158,11 +160,12 @@ test_spinlock_perf(void)
 	rte_eal_mp_wait_lcore();
 
 	RTE_LCORE_FOREACH(i) {
-		printf("Core [%u] count = %"PRIu64"\n", i, lock_count[i]);
-		total += lock_count[i];
+		printf("Core [%u] Cost Time = %"PRIu64" us\n", i,
+						time_count[i]);
+		total += time_count[i];
 	}
 
-	printf("Total count = %"PRIu64"\n", total);
+	printf("Total Cost Time = %"PRIu64" us\n", total);
 
 	return 0;
 }
-- 
2.7.4


* [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins
       [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
                     ` (10 preceding siblings ...)
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
@ 2019-03-08  7:56   ` Gavin Hu
  2019-03-12 14:53     ` [dpdk-stable] [EXT] " Jerin Jacob Kollanukkaran
  2019-03-14 14:22     ` Jerin Jacob Kollanukkaran
  11 siblings, 2 replies; 17+ messages in thread
From: Gavin Hu @ 2019-03-08  7:56 UTC (permalink / raw)
  To: dev
  Cc: nd, thomas, jerinj, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, gavin.hu, i.maximets, chaozhu, stable

The __sync builtin based implementation generates full memory barriers
('dmb ish') on Arm platforms. Use C11 atomic builtins instead to generate
one-way barriers.

Here is the assembly code of the __sync_bool_compare_and_swap builtin:
__sync_bool_compare_and_swap(dst, exp, src);
   0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
   0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
   0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
   0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
   0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
   0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
   0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
<rte_atomic16_cmpset+52>  // b.any
   0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
   0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
<rte_atomic16_cmpset+32>
   0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
   0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none

The benchmarking results showed consistent improvements on all available
platforms:
1. Cavium ThunderX2: 126% performance;
2. Hisilicon 1616: 30%;
3. Qualcomm Falkor: 13%;
4. Marvell ARMADA 8040 with A72 cores on macchiatobin: 3.7%

Here is the example test result on TX2:
$ sudo ./build/app/test -l 16-27 -- -i
RTE>>spinlock_autotest

*** spinlock_autotest without this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 53886 us
Core [17] Cost Time = 53605 us
Core [18] Cost Time = 53163 us
Core [19] Cost Time = 49419 us
Core [20] Cost Time = 34317 us
Core [21] Cost Time = 53408 us
Core [22] Cost Time = 53970 us
Core [23] Cost Time = 53930 us
Core [24] Cost Time = 53283 us
Core [25] Cost Time = 51504 us
Core [26] Cost Time = 50718 us
Core [27] Cost Time = 51730 us
Total Cost Time = 612933 us

*** spinlock_autotest with this patch ***
Test with lock on 12 cores...
Core [16] Cost Time = 18808 us
Core [17] Cost Time = 29497 us
Core [18] Cost Time = 29132 us
Core [19] Cost Time = 26150 us
Core [20] Cost Time = 21892 us
Core [21] Cost Time = 24377 us
Core [22] Cost Time = 27211 us
Core [23] Cost Time = 11070 us
Core [24] Cost Time = 29802 us
Core [25] Cost Time = 15793 us
Core [26] Cost Time = 7474 us
Core [27] Cost Time = 29550 us
Total Cost Time = 270756 us

In the tests on ThunderX2, with more cores contending, the performance gain
was even higher, indicating the __atomic implementation scales up better
than __sync.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
Reviewed-by: Steve Capper <Steve.Capper@arm.com>
---
 lib/librte_eal/common/include/generic/rte_spinlock.h | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h b/lib/librte_eal/common/include/generic/rte_spinlock.h
index c4c3fc3..87ae7a4 100644
--- a/lib/librte_eal/common/include/generic/rte_spinlock.h
+++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
@@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
 static inline void
 rte_spinlock_lock(rte_spinlock_t *sl)
 {
-	while (__sync_lock_test_and_set(&sl->locked, 1))
-		while(sl->locked)
+	int exp = 0;
+
+	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
 			rte_pause();
+		exp = 0;
+	}
 }
 #endif
 
@@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
 static inline void
 rte_spinlock_unlock (rte_spinlock_t *sl)
 {
-	__sync_lock_release(&sl->locked);
+	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
 }
 #endif
 
@@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
 static inline int
 rte_spinlock_trylock (rte_spinlock_t *sl)
 {
-	return __sync_lock_test_and_set(&sl->locked,1) == 0;
+	int exp = 0;
+	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
+				0, /* disallow spurious failure */
+				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
 }
 #endif
 
@@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
  */
 static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
 {
-	return sl->locked;
+	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
 }
 
 /**
-- 
2.7.4


* Re: [dpdk-stable] [EXT] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
@ 2019-03-12 14:53     ` Jerin Jacob Kollanukkaran
  2019-03-14  0:31       ` Honnappa Nagarahalli
  2019-03-14 14:22     ` Jerin Jacob Kollanukkaran
  1 sibling, 1 reply; 17+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-03-12 14:53 UTC (permalink / raw)
  To: gavin.hu, dev
  Cc: i.maximets, chaozhu, nd, nipun.gupta, thomas, hemant.agrawal,
	stable, Honnappa.Nagarahalli

On Fri, 2019-03-08 at 15:56 +0800, Gavin Hu wrote:
> -------------------------------------------------------------------
> ---
> The __sync builtin based implementation generates full memory
> barriers
> ('dmb ish') on Arm platforms. Using C11 atomic builtins to generate
> one way
> barriers.
> 
> 
>  lib/librte_eal/common/include/generic/rte_spinlock.h | 18
> +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h
> b/lib/librte_eal/common/include/generic/rte_spinlock.h
> index c4c3fc3..87ae7a4 100644
> --- a/lib/librte_eal/common/include/generic/rte_spinlock.h
> +++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
> @@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);
>  static inline void
>  rte_spinlock_lock(rte_spinlock_t *sl)
>  {
> -	while (__sync_lock_test_and_set(&sl->locked, 1))
> -		while(sl->locked)
> +	int exp = 0;
> +
> +	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
> +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {

Would it be cleaner to use __atomic_test_and_set() to avoid the explicit
exp = 0?


> +		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
>  			rte_pause();
> +		exp = 0;
> +	}
>  }
>  #endif
>  
> @@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);
>  static inline void
>  rte_spinlock_unlock (rte_spinlock_t *sl)
>  {
> -	__sync_lock_release(&sl->locked);
> +	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);

__atomic_clear(.., __ATOMIC_RELEASE) looks cleaner to me.

>  }
>  #endif
>  
> @@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);
>  static inline int
>  rte_spinlock_trylock (rte_spinlock_t *sl)
>  {
> -	return __sync_lock_test_and_set(&sl->locked,1) == 0;
> +	int exp = 0;
> +	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
> +				0, /* disallow spurious failure */
> +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);

return (__atomic_test_and_set(.., __ATOMIC_ACQUIRE) == 0) would be a
cleaner version.

>  }
>  #endif
>  
> @@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
>   */
>  static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)
>  {
> -	return sl->locked;
> +	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);

Would __ATOMIC_RELAXED be sufficient?


>  }
>  
>  /**


* Re: [dpdk-stable] [EXT] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins
  2019-03-12 14:53     ` [dpdk-stable] [EXT] " Jerin Jacob Kollanukkaran
@ 2019-03-14  0:31       ` Honnappa Nagarahalli
  2019-03-14  2:36         ` Gavin Hu (Arm Technology China)
  0 siblings, 1 reply; 17+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-14  0:31 UTC (permalink / raw)
  To: jerinj, Gavin Hu (Arm Technology China), dev
  Cc: i.maximets, chaozhu, nd, Nipun.gupta@nxp.com, thomas,
	hemant.agrawal, stable, nd

> > -------------------------------------------------------------------
> > ---
> > The __sync builtin based implementation generates full memory barriers
> > ('dmb ish') on Arm platforms. Using C11 atomic builtins to generate
> > one way barriers.
> >
> >
> >  lib/librte_eal/common/include/generic/rte_spinlock.h | 18
> > +++++++++++++-----
> >  1 file changed, 13 insertions(+), 5 deletions(-)
> >
> > diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h
> > b/lib/librte_eal/common/include/generic/rte_spinlock.h
> > index c4c3fc3..87ae7a4 100644
> > --- a/lib/librte_eal/common/include/generic/rte_spinlock.h
> > +++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
> > @@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);  static
> > inline void  rte_spinlock_lock(rte_spinlock_t *sl)  {
> > -	while (__sync_lock_test_and_set(&sl->locked, 1))
> > -		while(sl->locked)
> > +	int exp = 0;
> > +
> > +	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
> > +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
> {
> 
> Would it be clean to use __atomic_test_and_set() to avoid explicit exp = 0.
We addressed it here: http://mails.dpdk.org/archives/dev/2019-January/122363.html

> 
> 
> > +		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
> >  			rte_pause();
> > +		exp = 0;
> > +	}
> >  }
> >  #endif
> >
> > @@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);  static
> > inline void  rte_spinlock_unlock (rte_spinlock_t *sl)  {
> > -	__sync_lock_release(&sl->locked);
> > +	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
> 
> __atomic_clear(.., __ATOMIC_RELEASE) looks more clean to me.
This needs the operand to be of type bool.

> 
> >  }
> >  #endif
> >
> > @@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);  static
> > inline int  rte_spinlock_trylock (rte_spinlock_t *sl)  {
> > -	return __sync_lock_test_and_set(&sl->locked,1) == 0;
> > +	int exp = 0;
> > +	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
> > +				0, /* disallow spurious failure */
> > +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
> 
> return  (__atomic_test_and_set(.., __ATOMIC_ACQUIRE) == 0) will be more
> clean version.
> 
> >  }
> >  #endif
> >
> > @@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
> >   */
> >  static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)  {
> > -	return sl->locked;
> > +	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
> 
> Does __ATOMIC_RELAXED will be sufficient?
This is also addressed here: http://mails.dpdk.org/archives/dev/2019-January/122363.html

I think you approved the patch here: http://mails.dpdk.org/archives/dev/2019-January/123238.html
I think this patch just needs your reviewed-by tag :)
 
> 
> 
> >  }
> >
> >  /**


* Re: [dpdk-stable] [EXT] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins
  2019-03-14  0:31       ` Honnappa Nagarahalli
@ 2019-03-14  2:36         ` Gavin Hu (Arm Technology China)
  0 siblings, 0 replies; 17+ messages in thread
From: Gavin Hu (Arm Technology China) @ 2019-03-14  2:36 UTC (permalink / raw)
  To: Honnappa Nagarahalli, jerinj, dev
  Cc: i.maximets, chaozhu, nd, Nipun.gupta@nxp.com, thomas,
	hemant.agrawal, stable, Gavin Hu (Arm Technology China)



> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: Thursday, March 14, 2019 8:31 AM
> To: jerinj@marvell.com; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; dev@dpdk.org
> Cc: i.maximets@samsung.com; chaozhu@linux.vnet.ibm.com; nd
> <nd@arm.com>; Nipun.gupta@nxp.com; thomas@monjalon.net;
> hemant.agrawal@nxp.com; stable@dpdk.org; nd <nd@arm.com>
> Subject: RE: [EXT] [PATCH v8 3/3] spinlock: reimplement with atomic one-
> way barrier builtins
> 
> > > -------------------------------------------------------------------
> > > ---
> > > The __sync builtin based implementation generates full memory barriers
> > > ('dmb ish') on Arm platforms. Using C11 atomic builtins to generate
> > > one way barriers.
> > >
> > >
> > >  lib/librte_eal/common/include/generic/rte_spinlock.h | 18
> > > +++++++++++++-----
> > >  1 file changed, 13 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/lib/librte_eal/common/include/generic/rte_spinlock.h
> > > b/lib/librte_eal/common/include/generic/rte_spinlock.h
> > > index c4c3fc3..87ae7a4 100644
> > > --- a/lib/librte_eal/common/include/generic/rte_spinlock.h
> > > +++ b/lib/librte_eal/common/include/generic/rte_spinlock.h
> > > @@ -61,9 +61,14 @@ rte_spinlock_lock(rte_spinlock_t *sl);  static
> > > inline void  rte_spinlock_lock(rte_spinlock_t *sl)  {
> > > -	while (__sync_lock_test_and_set(&sl->locked, 1))
> > > -		while(sl->locked)
> > > +	int exp = 0;
> > > +
> > > +	while (!__atomic_compare_exchange_n(&sl->locked, &exp, 1, 0,
> > > +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
> > {
> >
> > Would it be clean to use __atomic_test_and_set() to avoid explicit exp = 0.
> We addressed it here: http://mails.dpdk.org/archives/dev/2019-
> January/122363.html
__atomic_test_and_set causes a 10x performance degradation in our
micro-benchmarking on ThunderX2. The reason is explained here:
http://mails.dpdk.org/archives/dev/2019-January/123340.html
> 
> >
> >
> > > +		while (__atomic_load_n(&sl->locked, __ATOMIC_RELAXED))
> > >  			rte_pause();
> > > +		exp = 0;
> > > +	}
> > >  }
> > >  #endif
> > >
> > > @@ -80,7 +85,7 @@ rte_spinlock_unlock (rte_spinlock_t *sl);  static
> > > inline void  rte_spinlock_unlock (rte_spinlock_t *sl)  {
> > > -	__sync_lock_release(&sl->locked);
> > > +	__atomic_store_n(&sl->locked, 0, __ATOMIC_RELEASE);
> >
> > __atomic_clear(.., __ATOMIC_RELEASE) looks more clean to me.
> This needs the operand to be of type bool.
> 
> >
> > >  }
> > >  #endif
> > >
> > > @@ -99,7 +104,10 @@ rte_spinlock_trylock (rte_spinlock_t *sl);  static
> > > inline int  rte_spinlock_trylock (rte_spinlock_t *sl)  {
> > > -	return __sync_lock_test_and_set(&sl->locked,1) == 0;
> > > +	int exp = 0;
> > > +	return __atomic_compare_exchange_n(&sl->locked, &exp, 1,
> > > +				0, /* disallow spurious failure */
> > > +				__ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
> >
> > return  (__atomic_test_and_set(.., __ATOMIC_ACQUIRE) == 0) will be
> more
> > clean version.
> >
> > >  }
> > >  #endif
> > >
> > > @@ -113,7 +121,7 @@ rte_spinlock_trylock (rte_spinlock_t *sl)
> > >   */
> > >  static inline int rte_spinlock_is_locked (rte_spinlock_t *sl)  {
> > > -	return sl->locked;
> > > +	return __atomic_load_n(&sl->locked, __ATOMIC_ACQUIRE);
> >
> > Does __ATOMIC_RELAXED will be sufficient?
> This is also addressed here: http://mails.dpdk.org/archives/dev/2019-
> January/122363.html
> 
> I think you approved the patch here:
> http://mails.dpdk.org/archives/dev/2019-January/123238.html
> I think this patch just needs your reviewed-by tag :)
> 
> >
> >
> > >  }
> > >
> > >  /**


* Re: [dpdk-stable] [EXT] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins
  2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
  2019-03-12 14:53     ` [dpdk-stable] [EXT] " Jerin Jacob Kollanukkaran
@ 2019-03-14 14:22     ` Jerin Jacob Kollanukkaran
  1 sibling, 0 replies; 17+ messages in thread
From: Jerin Jacob Kollanukkaran @ 2019-03-14 14:22 UTC (permalink / raw)
  To: Gavin Hu
  Cc: dev, nd, thomas, hemant.agrawal, nipun.gupta,
	Honnappa.Nagarahalli, i.maximets, chaozhu, stable

On Fri, Mar 08, 2019 at 03:56:37PM +0800, Gavin Hu wrote:
> External Email
> 
> ----------------------------------------------------------------------
> The __sync builtin based implementation generates full memory barriers
> ('dmb ish') on Arm platforms. Using C11 atomic builtins to generate one way
> barriers.
> 
> Here is the assembly code of __sync_compare_and_swap builtin.
> __sync_bool_compare_and_swap(dst, exp, src);
>    0x000000000090f1b0 <+16>:    e0 07 40 f9 ldr x0, [sp, #8]
>    0x000000000090f1b4 <+20>:    e1 0f 40 79 ldrh    w1, [sp, #6]
>    0x000000000090f1b8 <+24>:    e2 0b 40 79 ldrh    w2, [sp, #4]
>    0x000000000090f1bc <+28>:    21 3c 00 12 and w1, w1, #0xffff
>    0x000000000090f1c0 <+32>:    03 7c 5f 48 ldxrh   w3, [x0]
>    0x000000000090f1c4 <+36>:    7f 00 01 6b cmp w3, w1
>    0x000000000090f1c8 <+40>:    61 00 00 54 b.ne    0x90f1d4
> <rte_atomic16_cmpset+52>  // b.any
>    0x000000000090f1cc <+44>:    02 fc 04 48 stlxrh  w4, w2, [x0]
>    0x000000000090f1d0 <+48>:    84 ff ff 35 cbnz    w4, 0x90f1c0
> <rte_atomic16_cmpset+32>
>    0x000000000090f1d4 <+52>:    bf 3b 03 d5 dmb ish
>    0x000000000090f1d8 <+56>:    e0 17 9f 1a cset    w0, eq  // eq = none
> 
> The benchmarking results showed constant improvements on all available
> platforms:
> 1. Cavium ThunderX2: 126% performance;
> 2. Hisilicon 1616: 30%;
> 3. Qualcomm Falkor: 13%;
> 4. Marvell ARMADA 8040 with A72 cores on macchiatobin: 3.7%
> 
> Here is the example test result on TX2:
> $sudo ./build/app/test -l 16-27 -- i
> RTE>>spinlock_autotest
> 
> *** spinlock_autotest without this patch ***
> Test with lock on 12 cores...
> Core [16] Cost Time = 53886 us
> Core [17] Cost Time = 53605 us
> Core [18] Cost Time = 53163 us
> Core [19] Cost Time = 49419 us
> Core [20] Cost Time = 34317 us
> Core [21] Cost Time = 53408 us
> Core [22] Cost Time = 53970 us
> Core [23] Cost Time = 53930 us
> Core [24] Cost Time = 53283 us
> Core [25] Cost Time = 51504 us
> Core [26] Cost Time = 50718 us
> Core [27] Cost Time = 51730 us
> Total Cost Time = 612933 us
> 
> *** spinlock_autotest with this patch ***
> Test with lock on 12 cores...
> Core [16] Cost Time = 18808 us
> Core [17] Cost Time = 29497 us
> Core [18] Cost Time = 29132 us
> Core [19] Cost Time = 26150 us
> Core [20] Cost Time = 21892 us
> Core [21] Cost Time = 24377 us
> Core [22] Cost Time = 27211 us
> Core [23] Cost Time = 11070 us
> Core [24] Cost Time = 29802 us
> Core [25] Cost Time = 15793 us
> Core [26] Cost Time = 7474 us
> Core [27] Cost Time = 29550 us
> Total Cost Time = 270756 us
> 
> In the tests on ThunderX2, with more cores contending, the performance gain
> was even higher, indicating the __atomic implementation scales up better
> than __sync.
> 
> Fixes: af75078fece3 ("first public release")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Phil Yang <phil.yang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Reviewed-by: Ola Liljedahl <Ola.Liljedahl@arm.com>
> Reviewed-by: Steve Capper <Steve.Capper@arm.com>

Reviewed-by: Jerin Jacob <jerinj@marvell.com>




end of thread, other threads:[~2019-03-14 14:22 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <1547538849-10996-1-git-send-email-gavin.hu@arm.com>
     [not found] ` <20181220104246.5590-1-gavin.hu@arm.com>
     [not found]   ` <20181220174229.5834-1-gavin.hu@arm.com>
2018-12-20 17:42     ` [dpdk-stable] [PATCH v2 5/5] eal: fix clang compilation error on x86 Gavin Hu
2019-01-15  7:54   ` [dpdk-stable] [PATCH v4 1/4] " gavin hu
2019-01-15 10:32   ` [dpdk-stable] [PATCH v5 " gavin hu
2019-01-15 17:42     ` Honnappa Nagarahalli
2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 1/3] test/spinlock: dealy 1 us to create contention Gavin Hu
2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
2019-03-08  7:16   ` [dpdk-stable] [PATCH v6 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
2019-03-08  7:37   ` [dpdk-stable] [PATCH v7 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 1/3] test/spinlock: remove 1us delay for correct benchmarking Gavin Hu
2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 2/3] test/spinlock: amortize the cost of getting time Gavin Hu
2019-03-08  7:56   ` [dpdk-stable] [PATCH v8 3/3] spinlock: reimplement with atomic one-way barrier builtins Gavin Hu
2019-03-12 14:53     ` [dpdk-stable] [EXT] " Jerin Jacob Kollanukkaran
2019-03-14  0:31       ` Honnappa Nagarahalli
2019-03-14  2:36         ` Gavin Hu (Arm Technology China)
2019-03-14 14:22     ` Jerin Jacob Kollanukkaran
