From mboxrd@z Thu Jan  1 00:00:00 1970
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: nd@arm.com, konstantin.ananyev@intel.com, Gavin Hu <gavin.hu@arm.com>,
 Thomas Monjalon, John McNamara, Marko Kovacevic, Jerin Jacob,
 Jan Viktorin
Date: Thu, 7 Nov 2019 22:35:25 +0100
Message-Id: <1573162528-16230-3-git-send-email-david.marchand@redhat.com>
In-Reply-To: <1573162528-16230-1-git-send-email-david.marchand@redhat.com>
References: <1561911676-37718-1-git-send-email-gavin.hu@arm.com>
 <1573162528-16230-1-git-send-email-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v13 2/5] eal: add the APIs to wait until equal

From: Gavin Hu <gavin.hu@arm.com>

The rte_wait_until_equal_xx APIs abstract the functionality of
'polling for a memory location to become equal to a given value'.

Add the RTE_ARM_USE_WFE configuration entry for aarch64, disabled by
default. When it is enabled, the above APIs use the WFE instruction to
save CPU cycles and power.

When called from a VM on aarch64, these APIs may trap in and out to
release vCPUs, which causes high exit latency. Since kernel 4.18.20,
an adaptive trapping mechanism has been introduced to balance latency
against workload.

Signed-off-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Ruifeng Wang
Reviewed-by: Steve Capper
Reviewed-by: Ola Liljedahl
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Phil Yang
Acked-by: Pavan Nikhilesh
Acked-by: Jerin Jacob
Acked-by: Konstantin Ananyev
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Changelog since v12:
- added release notes update,
- fixed function prototypes indent,
- reimplemented the arm implementation without exposing internal inline
  functions,
- added asserts in generic implementation,
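For reviewers, a quick usage sketch (the token variable and the two
helper functions below are hypothetical, not part of this patch): one
control thread publishes data and then releases waiting workers by
storing a value that the workers poll on with the new API.

    #include <rte_pause.h>

    static volatile uint32_t start_token;

    /* Worker side: block (WFE on aarch64 with RTE_ARM_USE_WFE enabled,
     * an rte_pause() spin otherwise) until the token is written.
     * __ATOMIC_ACQUIRE orders the reads that follow the wait. */
    static void
    worker_wait(void)
    {
            rte_wait_until_equal_32(&start_token, 1, __ATOMIC_ACQUIRE);
            /* data published before the releasing store is visible here */
    }

    /* Control side: publish data first, then release the workers. */
    static void
    control_release(void)
    {
            __atomic_store_n(&start_token, 1, __ATOMIC_RELEASE);
    }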
---
 config/arm/meson.build                             |   1 +
 config/common_base                                 |   5 +
 doc/guides/rel_notes/release_19_11.rst             |   5 +
 .../common/include/arch/arm/rte_pause_64.h         | 133 ++++++++++++++++++++
 lib/librte_eal/common/include/generic/rte_pause.h  | 105 +++++++++++++++
 5 files changed, 249 insertions(+)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 46dff3a..ea47425 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -26,6 +26,7 @@ flags_common_default = [
 	['RTE_LIBRTE_AVP_PMD', false],
 
 	['RTE_SCHED_VECTOR', false],
+	['RTE_ARM_USE_WFE', false],
 ]
 
 flags_generic = [
diff --git a/config/common_base b/config/common_base
index 1858598..bb1b1ed 100644
--- a/config/common_base
+++ b/config/common_base
@@ -110,6 +110,11 @@ CONFIG_RTE_MAX_VFIO_CONTAINERS=64
 CONFIG_RTE_MALLOC_DEBUG=n
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
 CONFIG_RTE_USE_LIBBSD=n
+# Use WFE instructions to implement the rte_wait_until_equal_xxx APIs.
+# Calling these APIs puts the cores in a low power state while waiting
+# for the memory address to become equal to the expected value.
+# This is supported only on aarch64.
+CONFIG_RTE_ARM_USE_WFE=n
 
 #
 # Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index fe11b4b..af5f2c5 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -65,6 +65,11 @@ New Features
 
   The lock-free stack implementation is enabled for aarch64 platforms.
 
+* **Added Wait Until Equal API.**
+
+  A new API has been added to wait for a memory location to be updated with a
+  16-bit, 32-bit or 64-bit value.
+
 * **Changed mempool allocation behaviour.**
 
   Objects are no longer across pages by default.
diff --git a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
index 93895d3..e87d10b 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2017 Cavium, Inc
+ * Copyright(c) 2019 Arm Limited
  */
 
 #ifndef _RTE_PAUSE_ARM64_H_
@@ -10,6 +11,11 @@ extern "C" {
 #endif
 
 #include <rte_common.h>
+
+#ifdef RTE_ARM_USE_WFE
+#define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
+#endif
+
 #include "generic/rte_pause.h"
 
 static inline void rte_pause(void)
@@ -17,6 +23,133 @@ static inline void rte_pause(void)
 	asm volatile("yield" ::: "memory");
 }
 
+#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
+
+/* Send an event to quit WFE. */
+#define __SEVL() { asm volatile("sevl" : : : "memory"); }
+
+/* Put processor into low power WFE (Wait For Event) state. */
+#define __WFE() { asm volatile("wfe" : : : "memory"); }
+
+static __rte_always_inline void
+rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
+		int memorder)
+{
+	uint16_t value;
+
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	/*
+	 * Atomic exclusive load from addr; it returns the 16-bit content of
+	 * *addr while making it 'monitored'. When it is written by someone
+	 * else, the 'monitored' state is cleared and an event is generated
+	 * implicitly to exit WFE.
+	 */
+#define __LOAD_EXC_16(src, dst, memorder) { \
+	if (memorder == __ATOMIC_RELAXED) { \
+		asm volatile("ldxrh %w[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} else { \
+		asm volatile("ldaxrh %w[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} }
+
+	__LOAD_EXC_16(addr, value, memorder)
+	if (value != expected) {
+		__SEVL()
+		do {
+			__WFE()
+			__LOAD_EXC_16(addr, value, memorder)
+		} while (value != expected);
+	}
+#undef __LOAD_EXC_16
+}
+
+static __rte_always_inline void
+rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
+		int memorder)
+{
+	uint32_t value;
+
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	/*
+	 * Atomic exclusive load from addr; it returns the 32-bit content of
+	 * *addr while making it 'monitored'. When it is written by someone
+	 * else, the 'monitored' state is cleared and an event is generated
+	 * implicitly to exit WFE.
+	 */
+#define __LOAD_EXC_32(src, dst, memorder) { \
+	if (memorder == __ATOMIC_RELAXED) { \
+		asm volatile("ldxr %w[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} else { \
+		asm volatile("ldaxr %w[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} }
+
+	__LOAD_EXC_32(addr, value, memorder)
+	if (value != expected) {
+		__SEVL()
+		do {
+			__WFE()
+			__LOAD_EXC_32(addr, value, memorder)
+		} while (value != expected);
+	}
+#undef __LOAD_EXC_32
+}
+
+static __rte_always_inline void
+rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
+		int memorder)
+{
+	uint64_t value;
+
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	/*
+	 * Atomic exclusive load from addr; it returns the 64-bit content of
+	 * *addr while making it 'monitored'. When it is written by someone
+	 * else, the 'monitored' state is cleared and an event is generated
+	 * implicitly to exit WFE.
+	 */
+#define __LOAD_EXC_64(src, dst, memorder) { \
+	if (memorder == __ATOMIC_RELAXED) { \
+		asm volatile("ldxr %x[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} else { \
+		asm volatile("ldaxr %x[tmp], [%x[addr]]" \
+			: [tmp] "=&r" (dst) \
+			: [addr] "r"(src) \
+			: "memory"); \
+	} }
+
+	__LOAD_EXC_64(addr, value, memorder)
+	if (value != expected) {
+		__SEVL()
+		do {
+			__WFE()
+			__LOAD_EXC_64(addr, value, memorder)
+		} while (value != expected);
+	}
+}
+#undef __LOAD_EXC_64
+
+#undef __SEVL
+#undef __WFE
+
+#endif
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/common/include/generic/rte_pause.h b/lib/librte_eal/common/include/generic/rte_pause.h
index 52bd4db..7422785 100644
--- a/lib/librte_eal/common/include/generic/rte_pause.h
+++ b/lib/librte_eal/common/include/generic/rte_pause.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2017 Cavium, Inc
+ * Copyright(c) 2019 Arm Limited
  */
 
 #ifndef _RTE_PAUSE_H_
@@ -12,6 +13,12 @@
  *
  */
 
+#include <stdint.h>
+#include <assert.h>
+#include <rte_common.h>
+#include <rte_atomic.h>
+#include <rte_compat.h>
+
 /**
  * Pause CPU execution for a short while
  *
@@ -20,4 +27,102 @@
  */
 static inline void rte_pause(void);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Wait for *addr to be updated with a 16-bit expected value, with a relaxed
+ * memory ordering model meaning the loads around this API can be reordered.
+ *
+ * @param addr
+ *  A pointer to the memory location.
+ * @param expected
+ *  A 16-bit expected value to be in the memory location.
+ * @param memorder
+ *  Two different memory orders that can be specified:
+ *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ *  C++11 memory orders with the same names, see the C++11 standard or
+ *  the GCC wiki on atomic synchronization for detailed definition.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
+		int memorder);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Wait for *addr to be updated with a 32-bit expected value, with a relaxed
+ * memory ordering model meaning the loads around this API can be reordered.
+ *
+ * @param addr
+ *  A pointer to the memory location.
+ * @param expected
+ *  A 32-bit expected value to be in the memory location.
+ * @param memorder
+ *  Two different memory orders that can be specified:
+ *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ *  C++11 memory orders with the same names, see the C++11 standard or
+ *  the GCC wiki on atomic synchronization for detailed definition.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
+		int memorder);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
+ *
+ * Wait for *addr to be updated with a 64-bit expected value, with a relaxed
+ * memory ordering model meaning the loads around this API can be reordered.
+ *
+ * @param addr
+ *  A pointer to the memory location.
+ * @param expected
+ *  A 64-bit expected value to be in the memory location.
+ * @param memorder
+ *  Two different memory orders that can be specified:
+ *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
+ *  C++11 memory orders with the same names, see the C++11 standard or
+ *  the GCC wiki on atomic synchronization for detailed definition.
+ */
+__rte_experimental
+static __rte_always_inline void
+rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
+		int memorder);
+
+#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
+static __rte_always_inline void
+rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
+		int memorder)
+{
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	while (__atomic_load_n(addr, memorder) != expected)
+		rte_pause();
+}
+
+static __rte_always_inline void
+rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
+		int memorder)
+{
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	while (__atomic_load_n(addr, memorder) != expected)
+		rte_pause();
+}
+
+static __rte_always_inline void
+rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
+		int memorder)
+{
+	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
+
+	while (__atomic_load_n(addr, memorder) != expected)
+		rte_pause();
+}
+#endif
+
 #endif /* _RTE_PAUSE_H_ */
-- 
1.8.3.1
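For readers who want to try the semantics without a DPDK build, the
generic fallback above is behaviourally an acquire-load spin; below is
a standalone C11 sketch of the same contract (file and variable names
are hypothetical, nothing here is part of the patch). Compile with
cc -std=c11 -pthread wait_demo.c.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    static _Atomic uint32_t flag;   /* the 'addr' being waited on */
    static uint32_t payload;        /* data published before the release */

    static void *waiter(void *arg)
    {
            (void)arg;
            /* Equivalent of rte_wait_until_equal_32(&flag, 1,
             * __ATOMIC_ACQUIRE): spin on an acquire load until the
             * expected value appears. */
            while (atomic_load_explicit(&flag, memory_order_acquire) != 1)
                    ; /* rte_pause()/WFE would go here */
            /* The acquire load pairs with the release store in main(),
             * so the payload write is guaranteed to be visible. */
            printf("payload = %u\n", (unsigned)payload);
            return NULL;
    }

    int main(void)
    {
            pthread_t t;
            if (pthread_create(&t, NULL, waiter, NULL) != 0)
                    return 1;
            payload = 42;
            atomic_store_explicit(&flag, 1, memory_order_release);
            pthread_join(t, NULL);
            return 0;
    }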