From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <dev-bounces@dpdk.org>
Received: from dpdk.org (dpdk.org [92.243.14.124])
	by inbox.dpdk.org (Postfix) with ESMTP id 8E3A2A00B8;
	Sun, 27 Oct 2019 23:19:58 +0100 (CET)
Received: from [92.243.14.124] (localhost [127.0.0.1])
	by dpdk.org (Postfix) with ESMTP id C1A8B1BEFD;
	Sun, 27 Oct 2019 23:19:56 +0100 (CET)
Received: from mga14.intel.com (mga14.intel.com [192.55.52.115])
 by dpdk.org (Postfix) with ESMTP id EFD311BEFA
 for <dev@dpdk.org>; Sun, 27 Oct 2019 23:19:54 +0100 (CET)
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga103.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 27 Oct 2019 15:19:53 -0700
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.68,237,1569308400"; d="scan'208";a="193092916"
Received: from irsmsx102.ger.corp.intel.com ([163.33.3.155])
 by orsmga008.jf.intel.com with ESMTP; 27 Oct 2019 15:19:50 -0700
Received: from irsmsx104.ger.corp.intel.com ([169.254.5.252]) by
 IRSMSX102.ger.corp.intel.com ([169.254.2.40]) with mapi id 14.03.0439.000;
 Sun, 27 Oct 2019 22:19:49 +0000
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: Gavin Hu <gavin.hu@arm.com>, "dev@dpdk.org" <dev@dpdk.org>
CC: "nd@arm.com" <nd@arm.com>, "david.marchand@redhat.com"
 <david.marchand@redhat.com>, "thomas@monjalon.net" <thomas@monjalon.net>,
 "stephen@networkplumber.org" <stephen@networkplumber.org>,
 "hemant.agrawal@nxp.com" <hemant.agrawal@nxp.com>, "jerinj@marvell.com"
 <jerinj@marvell.com>, "pbhagavatula@marvell.com" <pbhagavatula@marvell.com>,
 "Honnappa.Nagarahalli@arm.com" <Honnappa.Nagarahalli@arm.com>,
 "ruifeng.wang@arm.com" <ruifeng.wang@arm.com>, "phil.yang@arm.com"
 <phil.yang@arm.com>, "steve.capper@arm.com" <steve.capper@arm.com>
Thread-Topic: [PATCH v11 2/5] eal: add the APIs to wait until equal
Thread-Index: AQHVjMV+BPVyy2JJWEig8/ZUZjwBRKdvD2BA
Date: Sun, 27 Oct 2019 22:19:49 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725801A8C710F4@IRSMSX104.ger.corp.intel.com>
References: <1561911676-37718-1-git-send-email-gavin.hu@arm.com>
 <1572180765-49767-3-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1572180765-49767-3-git-send-email-gavin.hu@arm.com>
Accept-Language: en-IE, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-titus-metadata-40: eyJDYXRlZ29yeUxhYmVscyI6IiIsIk1ldGFkYXRhIjp7Im5zIjoiaHR0cDpcL1wvd3d3LnRpdHVzLmNvbVwvbnNcL0ludGVsMyIsImlkIjoiOTlhMGE1OTEtZTlkMC00NzE1LWE0ZWQtN2Y3NDZhMjQxMDdkIiwicHJvcHMiOlt7Im4iOiJDVFBDbGFzc2lmaWNhdGlvbiIsInZhbHMiOlt7InZhbHVlIjoiQ1RQX05UIn1dfV19LCJTdWJqZWN0TGFiZWxzIjpbXSwiVE1DVmVyc2lvbiI6IjE3LjEwLjE4MDQuNDkiLCJUcnVzdGVkTGFiZWxIYXNoIjoiakFFOWFjaVllaUFON2ZjQ094OFRMZ1B6Mk5McGhRdlNIK1M0b3p6TXMxd2ZkckhxbG1sTTRPazVZV0JsdzFHOCJ9
x-ctpclassification: CTP_NT
dlp-product: dlpe-windows
dlp-version: 11.2.0.6
dlp-reaction: no-action
x-originating-ip: [163.33.239.182]
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v11 2/5] eal: add the APIs to wait until equal
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>


> The rte_wait_until_equal_xx APIs abstract the functionality of
> 'polling for a memory location to become equal to a given value'.
>
> Add the RTE_ARM_USE_WFE configuration entry for aarch64, disabled
> by default. When it is enabled, the above APIs will use the WFE
> instruction to save CPU cycles and power.
>
> In a VM, calling this API on aarch64 may trap in and out to release
> vCPUs, causing high exit latency. Since kernel 4.18.20, an adaptive
> trapping mechanism has been introduced to balance the latency and
> workload.
>
> Signed-off-by: Gavin Hu <gavin.hu@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Ola Liljedahl <ola.liljedahl@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Phil Yang <phil.yang@arm.com>
> Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
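
For readers following the thread, a minimal usage sketch of the new API
(the 'tail' flag and TAIL_READY value below are illustrative, not part
of the patch):

	#include <stdint.h>
	#include <rte_pause.h>

	#define TAIL_READY 1

	static volatile uint32_t tail;

	static void
	consumer_wait(void)
	{
		/* Spin (or sleep in WFE, when RTE_ARM_USE_WFE is enabled)
		 * until the producer stores TAIL_READY to 'tail'; ACQUIRE
		 * keeps later loads from being reordered before the wait
		 * completes. */
		rte_wait_until_equal_32(&tail, TAIL_READY,
				__ATOMIC_ACQUIRE);
	}
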
> ---
>  config/arm/meson.build                             |   1 +
>  config/common_base                                 |   5 +
>  .../common/include/arch/arm/rte_pause_64.h         | 188 +++++++++++++++++++++
>  lib/librte_eal/common/include/generic/rte_pause.h  |  99 +++++++++++
>  4 files changed, 293 insertions(+)
>
> diff --git a/config/arm/meson.build b/config/arm/meson.build
> index 979018e..b4b4cac 100644
> --- a/config/arm/meson.build
> +++ b/config/arm/meson.build
> @@ -26,6 +26,7 @@ flags_common_default = [
>  	['RTE_LIBRTE_AVP_PMD', false],
>
>  	['RTE_SCHED_VECTOR', false],
> +	['RTE_ARM_USE_WFE', false],
>  ]
>
>  flags_generic = [
> diff --git a/config/common_base b/config/common_base
> index e843a21..c812156 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -111,6 +111,11 @@ CONFIG_RTE_MAX_VFIO_CONTAINERS=64
>  CONFIG_RTE_MALLOC_DEBUG=n
>  CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
>  CONFIG_RTE_USE_LIBBSD=n
> +# Use WFE instructions to implement the rte_wait_until_equal_xxx APIs;
> +# calling these APIs puts the cores in a low power state while waiting
> +# for the memory address to become equal to the expected value.
> +# This is supported only by aarch64.
> +CONFIG_RTE_ARM_USE_WFE=n
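
(Side note for anyone trying this out: with the make-based build the
entry above would be overridden in the target config, e.g.

	CONFIG_RTE_ARM_USE_WFE=y

in config/defconfig_arm64-armv8a-linuxapp-gcc or a local config; with
meson, the ['RTE_ARM_USE_WFE', false] entry in config/arm/meson.build
would be flipped to true. File names here are from my reading of the
tree, so double-check against your checkout.)
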
>
>  #
>  # Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> index 93895d3..1680d7a 100644
> --- a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> +++ b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2017 Cavium, Inc
> + * Copyright(c) 2019 Arm Limited
>   */
>
>  #ifndef _RTE_PAUSE_ARM64_H_
> @@ -10,6 +11,11 @@ extern "C" {
>  #endif
>
>  #include <rte_common.h>
> +
> +#ifdef RTE_ARM_USE_WFE
> +#define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +#endif
> +
>  #include "generic/rte_pause.h"
>
>  static inline void rte_pause(void)
> @@ -17,6 +23,188 @@ static inline void rte_pause(void)
>  	asm volatile("yield" ::: "memory");
>  }
>
> +/**
> + * Send an event to quit WFE.
> + */
> +static inline void rte_sevl(void);
> +
> +/**
> + * Put the processor into the low power WFE (Wait For Event) state.
> + */
> +static inline void rte_wfe(void);
> +
> +#ifdef RTE_ARM_USE_WFE
> +static inline void rte_sevl(void)
> +{
> +	asm volatile("sevl" : : : "memory");
> +}
> +
> +static inline void rte_wfe(void)
> +{
> +	asm volatile("wfe" : : : "memory");
> +}
> +#else
> +static inline void rte_sevl(void)
> +{
> +}
> +static inline void rte_wfe(void)
> +{
> +	rte_pause();
> +}
> +#endif
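
A note for readers on the sevl/wfe pairing used below: SEVL sets the
local event register, so the first WFE of a polling loop falls straight
through; the load-exclusive then arms the monitor, and a store to the
monitored location by another core clears it and generates the event
that wakes the next WFE. The canonical idiom, mirroring the _16 variant
further down ('flag' and 'expected' are placeholders):

	rte_sevl();			/* arm the local event register */
	do {
		rte_wfe();		/* first pass falls through */
	} while (rte_atomic_load_ex_32(&flag, __ATOMIC_ACQUIRE)
			!= expected);	/* ldaxr re-arms the monitor */
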
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr. It returns the 16-bit content of *addr
> + * while making it 'monitored'; when it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint16_t
> +rte_atomic_load_ex_16(volatile uint16_t *addr, int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr. It returns the 32-bit content of *addr
> + * while making it 'monitored'; when it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint32_t
> +rte_atomic_load_ex_32(volatile uint32_t *addr, int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr. It returns the 64-bit content of *addr
> + * while making it 'monitored'; when it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint64_t
> +rte_atomic_load_ex_64(volatile uint64_t *addr, int memorder);
> +
> +static __rte_always_inline uint16_t
> +rte_atomic_load_ex_16(volatile uint16_t *addr, int memorder)
> +{
> +	uint16_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +			|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint32_t
> +rte_atomic_load_ex_32(volatile uint32_t *addr, int memorder)
> +{
> +	uint32_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +			|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint64_t
> +rte_atomic_load_ex_64(volatile uint64_t *addr, int memorder)
> +{
> +	uint64_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +			|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (rte_atomic_load_ex_16(addr, memorder) != expected);
> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_n(addr, memorder) != expected);

Here and in _64, shouldn't it be:
rte_atomic_load_ex_..
?
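
I.e. something like this, so the exclusive load re-arms the monitor
before each WFE (a sketch only):

	if (__atomic_load_n(addr, memorder) != expected) {
		rte_sevl();
		do {
			rte_wfe();
		} while (rte_atomic_load_ex_32(addr, memorder)
				!= expected);
	}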

> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}
> +}
> +#endif
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eal/common/include/generic/rte_pause.h b/lib/librte_eal/common/include/generic/rte_pause.h
> index 52bd4db..9d42e32 100644
> --- a/lib/librte_eal/common/include/generic/rte_pause.h
> +++ b/lib/librte_eal/common/include/generic/rte_pause.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2017 Cavium, Inc
> + * Copyright(c) 2019 Arm Limited
>   */
>
>  #ifndef _RTE_PAUSE_H_
> @@ -12,6 +13,12 @@
>   *
>   */
>
> +#include <stdint.h>
> +#include <rte_common.h>
> +#include <rte_atomic.h>
> +#include <rte_compat.h>
> +#include <assert.h>
> +
>  /**
>   * Pause CPU execution for a short while
>   *
> @@ -20,4 +27,96 @@
>   */
>  static inline void rte_pause(void);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 16-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 16-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 32-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 32-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 64-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 64-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder);
> +
> +#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder)
> +{
> +	while (__atomic_load_n(addr, memorder) != expected)
> +		rte_pause();
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder)
> +{
> +	while (__atomic_load_n(addr, memorder) != expected)
> +		rte_pause();
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder)
> +{
> +	while (__atomic_load_n(addr, memorder) != expected)
> +		rte_pause();
> +}
> +#endif
> +
>  #endif /* _RTE_PAUSE_H_ */
> --
> 2.7.4