From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Gavin Hu (Arm Technology China)"
To: "Ananyev, Konstantin", "dev@dpdk.org"
CC: nd, "david.marchand@redhat.com", "thomas@monjalon.net",
	"stephen@networkplumber.org", "hemant.agrawal@nxp.com",
	"jerinj@marvell.com", "pbhagavatula@marvell.com",
	Honnappa Nagarahalli, "Ruifeng Wang (Arm Technology China)",
	"Phil Yang (Arm Technology China)", nd
Date: Thu, 24 Oct 2019 17:00:21 +0000
References: <1561911676-37718-1-git-send-email-gavin.hu@arm.com>
	<1571913748-51735-3-git-send-email-gavin.hu@arm.com>
	<2601191342CEEE43887BDE71AB97725801A8C6F8D0@IRSMSX104.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725801A8C6F8D0@IRSMSX104.ger.corp.intel.com>
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Subject: Re: [dpdk-dev] [PATCH v9 2/5] eal: add the APIs to wait until equal
List-Id: DPDK patches and discussions

Hi Konstantin,

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Thursday, October 24, 2019 9:52 PM
> To: Gavin Hu (Arm Technology China); dev@dpdk.org
> Cc: nd; david.marchand@redhat.com; thomas@monjalon.net;
> stephen@networkplumber.org; hemant.agrawal@nxp.com; jerinj@marvell.com;
> pbhagavatula@marvell.com; Honnappa Nagarahalli;
> Ruifeng Wang (Arm Technology China); Phil Yang (Arm Technology China);
> Steve Capper
> Subject: RE: [PATCH v9 2/5] eal: add the APIs to wait until equal
>
> Hi Gavin,
>
> > The rte_wait_until_equal_xx APIs abstract the functionality of
> > 'polling for a memory location to become equal to a given value'.
> >
> > Add the RTE_ARM_USE_WFE configuration entry for aarch64, disabled
> > by default. When it is enabled, the above APIs will call the WFE instruction
> > to save CPU cycles and power.
> >
> > From a VM, calling this API on aarch64 may trap in and out to
> > release vCPUs, which causes high exit latency. Since kernel 4.18.20 an
> > adaptive trapping mechanism has been introduced to balance the latency
> > and workload.
> >
> > Signed-off-by: Gavin Hu
> > Reviewed-by: Ruifeng Wang
> > Reviewed-by: Steve Capper
> > Reviewed-by: Ola Liljedahl
> > Reviewed-by: Honnappa Nagarahalli
> > Reviewed-by: Phil Yang
> > Acked-by: Pavan Nikhilesh
> > Acked-by: Jerin Jacob
> > ---
> >  config/arm/meson.build                             |   1 +
> >  config/common_base                                 |   5 +
> >  .../common/include/arch/arm/rte_pause_64.h         |  70 +++++++
> >  lib/librte_eal/common/include/generic/rte_pause.h  | 217 +++++++++++++++++++++
> >  4 files changed, 293 insertions(+)
> >
> > diff --git a/config/arm/meson.build b/config/arm/meson.build
> > index 979018e..b4b4cac 100644
> > --- a/config/arm/meson.build
> > +++ b/config/arm/meson.build
> > @@ -26,6 +26,7 @@ flags_common_default = [
> >  	['RTE_LIBRTE_AVP_PMD', false],
> >
> >  	['RTE_SCHED_VECTOR', false],
> > +	['RTE_ARM_USE_WFE', false],
> >  ]
> >
> >  flags_generic = [
> > diff --git a/config/common_base b/config/common_base
> > index e843a21..c812156 100644
> > --- a/config/common_base
> > +++ b/config/common_base
> > @@ -111,6 +111,11 @@ CONFIG_RTE_MAX_VFIO_CONTAINERS=64
> >  CONFIG_RTE_MALLOC_DEBUG=n
> >  CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
> >  CONFIG_RTE_USE_LIBBSD=n
> > +# Use WFE instructions to implement the rte_wait_until_equal_xxx APIs;
> > +# calling these APIs puts the cores in a low power state while waiting
> > +# for the memory address to become equal to the expected value.
> > +# This is supported only on aarch64.
> > +CONFIG_RTE_ARM_USE_WFE=n
> >
> >  #
> >  # Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
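
For anyone following the thread, here is a minimal usage sketch of the new
API from a caller's point of view. It is hypothetical example code, not part
of the patch; the names my_wait_for_unlock, lock and MY_UNLOCKED are made up
for illustration only:

    #include <stdint.h>
    #include <rte_pause.h>

    #define MY_UNLOCKED 0 /* hypothetical value stored by the releasing lcore */

    /* Block until another lcore writes MY_UNLOCKED into *lock. The acquire
     * order keeps the caller's later loads from being reordered before the
     * wait. With CONFIG_RTE_ARM_USE_WFE=y on aarch64 the core waits in WFE
     * between checks; otherwise the generic version spins with rte_pause().
     */
    static inline void
    my_wait_for_unlock(volatile uint32_t *lock)
    {
            rte_wait_until_equal_32(lock, MY_UNLOCKED, __ATOMIC_ACQUIRE);
    }

/Gavin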
> > diff --git a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> > index 93895d3..7bc8efb 100644
> > --- a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> > +++ b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> > @@ -1,5 +1,6 @@
> >  /* SPDX-License-Identifier: BSD-3-Clause
> >   * Copyright(c) 2017 Cavium, Inc
> > + * Copyright(c) 2019 Arm Limited
> >   */
> >
> >  #ifndef _RTE_PAUSE_ARM64_H_
> > @@ -17,6 +18,75 @@ static inline void rte_pause(void)
> >  	asm volatile("yield" ::: "memory");
> >  }
> >
> > +#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> > +static inline void rte_sevl(void)
> > +{
> > +	asm volatile("sevl" : : : "memory");
> > +}
> > +
> > +static inline void rte_wfe(void)
> > +{
> > +	asm volatile("wfe" : : : "memory");
> > +}
> > +
> > +static __rte_always_inline uint16_t
> > +__atomic_load_ex_16(volatile uint16_t *addr, int memorder)
> > +{
> > +	uint16_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	if (memorder == __ATOMIC_ACQUIRE)
> > +		asm volatile("ldaxrh %w[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	else if (memorder == __ATOMIC_RELAXED)
> > +		asm volatile("ldxrh %w[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	return tmp;
> > +}
> > +
> > +static __rte_always_inline uint32_t
> > +__atomic_load_ex_32(volatile uint32_t *addr, int memorder)
> > +{
> > +	uint32_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	if (memorder == __ATOMIC_ACQUIRE)
> > +		asm volatile("ldaxr %w[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	else if (memorder == __ATOMIC_RELAXED)
> > +		asm volatile("ldxr %w[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	return tmp;
> > +}
> > +
> > +static __rte_always_inline uint64_t
> > +__atomic_load_ex_64(volatile uint64_t *addr, int memorder)
> > +{
> > +	uint64_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	if (memorder == __ATOMIC_ACQUIRE)
> > +		asm volatile("ldaxr %x[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	else if (memorder == __ATOMIC_RELAXED)
> > +		asm volatile("ldxr %x[tmp], [%x[addr]]"
> > +			: [tmp] "=&r" (tmp)
> > +			: [addr] "r"(addr)
> > +			: "memory");
> > +	return tmp;
> > +}
> > +#endif
> > +
>
> The functions themselves seem good to me...
> But I think there was some misunderstanding about code layout/placement.
> I think arm-specific functions and defines need to be defined in arm-specific
> headers only.
> But we can still have one instance of rte_wait_until_equal_* for arm.

I will move that part to the arm-specific headers.
/Gavin

>
> To be more specific, I am talking about something like that here:
>
> lib/librte_eal/common/include/generic/rte_pause.h:
> ...
> #ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> static __rte_always_inline void
> rte_wait_until_equal_32(volatile type *addr, type expected, int memorder)
> {
> 	while (__atomic_load_n(addr, memorder) != expected)
> 		rte_pause();
> }
> ....
> #endif
> ...
>
> lib/librte_eal/common/include/arch/arm/rte_pause_64.h:
>
> ...
> #ifdef RTE_ARM_USE_WFE
> #define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> #endif
> #include "generic/rte_pause.h"
>
> ...
> #ifdef RTE_ARM_USE_WFE
> static inline void rte_sevl(void)
> {
> 	asm volatile("sevl" : : : "memory");
> }
> static inline void rte_wfe(void)
> {
> 	asm volatile("wfe" : : : "memory");
> }
> #else
> static inline void rte_sevl(void)
> {
> }
> static inline void rte_wfe(void)
> {
> 	rte_pause();
> }

Should these arm-specific APIs, including the load-exclusive APIs, be given
doxygen comments? They are arm specific and not intended to be exposed, but
they do sit in public files (are the arm-specific headers considered public?).
/Gavin

> ...
>
> static __rte_always_inline void
> rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> 		int memorder)
> {
> 	if (__atomic_load_ex_32(addr, memorder) != expected) {
> 		rte_sevl();
> 		do {
> 			rte_wfe();
> 		} while (__atomic_load_ex_32(addr, memorder) != expected);
> 	}
> }
>
> #endif
>
>
> >  #ifdef __cplusplus
> >  }
> >  #endif
> > diff --git a/lib/librte_eal/common/include/generic/rte_pause.h b/lib/librte_eal/common/include/generic/rte_pause.h
> > index 52bd4db..4db44f9 100644
> > --- a/lib/librte_eal/common/include/generic/rte_pause.h
> > +++ b/lib/librte_eal/common/include/generic/rte_pause.h
> > @@ -1,5 +1,6 @@
> >  /* SPDX-License-Identifier: BSD-3-Clause
> >   * Copyright(c) 2017 Cavium, Inc
> > + * Copyright(c) 2019 Arm Limited
> >   */
> >
> >  #ifndef _RTE_PAUSE_H_
> > @@ -12,6 +13,12 @@
> >   *
> >   */
> >
> > +#include
> > +#include
> > +#include
> > +#include
> > +#include
> > +
> >  /**
> >   * Pause CPU execution for a short while
> >   *
> > @@ -20,4 +27,214 @@
> >   */
> >  static inline void rte_pause(void);
> >
> > +static inline void rte_sevl(void);
> > +static inline void rte_wfe(void);
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Atomic load from addr, it returns the 16-bit content of *addr.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param memorder
> > + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> > + *  These map to C++11 memory orders with the same names, see the C++11 standard
> > + *  or the GCC wiki on atomic synchronization for detailed definitions.
> > + */
> > +static __rte_always_inline uint16_t
> > +__atomic_load_ex_16(volatile uint16_t *addr, int memorder);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Atomic load from addr, it returns the 32-bit content of *addr.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param memorder
> > + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> > + *  These map to C++11 memory orders with the same names, see the C++11 standard
> > + *  or the GCC wiki on atomic synchronization for detailed definitions.
> > + */
> > +static __rte_always_inline uint32_t
> > +__atomic_load_ex_32(volatile uint32_t *addr, int memorder);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Atomic load from addr, it returns the 64-bit content of *addr.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param memorder
> > + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> > + *  These map to C++11 memory orders with the same names, see the C++11 standard
> > + *  or the GCC wiki on atomic synchronization for detailed definitions.
> > + */
> > +static __rte_always_inline uint64_t
> > +__atomic_load_ex_64(volatile uint64_t *addr, int memorder);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Wait for *addr to be updated with a 16-bit expected value, with a relaxed
> > + * memory ordering model meaning the loads around this API can be reordered.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param expected
> > + *  A 16-bit expected value to be in the memory location.
> > + * @param memorder
> > + *  Two different memory orders that can be specified:
> > + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> > + *  C++11 memory orders with the same names, see the C++11 standard or
> > + *  the GCC wiki on atomic synchronization for detailed definition.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> > +int memorder);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Wait for *addr to be updated with a 32-bit expected value, with a relaxed
> > + * memory ordering model meaning the loads around this API can be reordered.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param expected
> > + *  A 32-bit expected value to be in the memory location.
> > + * @param memorder
> > + *  Two different memory orders that can be specified:
> > + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> > + *  C++11 memory orders with the same names, see the C++11 standard or
> > + *  the GCC wiki on atomic synchronization for detailed definition.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> > +int memorder);
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Wait for *addr to be updated with a 64-bit expected value, with a relaxed
> > + * memory ordering model meaning the loads around this API can be reordered.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param expected
> > + *  A 64-bit expected value to be in the memory location.
> > + * @param memorder
> > + *  Two different memory orders that can be specified:
> > + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> > + *  C++11 memory orders with the same names, see the C++11 standard or
> > + *  the GCC wiki on atomic synchronization for detailed definition.
> > + */
> > +__rte_experimental
> > +static __rte_always_inline void
> > +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> > +int memorder);
> > +
> > +#ifdef RTE_ARM_USE_WFE
> > +#define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> > +#endif
> > +
> > +#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> > +static inline void rte_sevl(void)
> > +{
> > +}
> > +
> > +static inline void rte_wfe(void)
> > +{
> > +	rte_pause();
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> > + *
> > + * Atomic load from addr, it returns the 16-bit content of *addr.
> > + *
> > + * @param addr
> > + *  A pointer to the memory location.
> > + * @param memorder
> > + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> > + *  These map to C++11 memory orders with the same names, see the C++11 standard
> > + *  or the GCC wiki on atomic synchronization for detailed definitions.
> > + */
> > +static __rte_always_inline uint16_t
> > +__atomic_load_ex_16(volatile uint16_t *addr, int memorder)
> > +{
> > +	uint16_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	tmp = __atomic_load_n(addr, memorder);
> > +	return tmp;
> > +}
> > +
> > +static __rte_always_inline uint32_t
> > +__atomic_load_ex_32(volatile uint32_t *addr, int memorder)
> > +{
> > +	uint32_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	tmp = __atomic_load_n(addr, memorder);
> > +	return tmp;
> > +}
> > +
> > +static __rte_always_inline uint64_t
> > +__atomic_load_ex_64(volatile uint64_t *addr, int memorder)
> > +{
> > +	uint64_t tmp;
> > +	assert((memorder == __ATOMIC_ACQUIRE)
> > +		|| (memorder == __ATOMIC_RELAXED));
> > +	tmp = __atomic_load_n(addr, memorder);
> > +	return tmp;
> > +}
> > +
> > +static __rte_always_inline void
> > +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> > +int memorder)
> > +{
> > +	if (__atomic_load_n(addr, memorder) != expected) {
> > +		rte_sevl();
> > +		do {
> > +			rte_wfe();
> > +		} while (__atomic_load_ex_16(addr, memorder) != expected);
> > +	}
> > +}
> > +
> > +static __rte_always_inline void
> > +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> > +int memorder)
> > +{
> > +	if (__atomic_load_ex_32(addr, memorder) != expected) {
> > +		rte_sevl();
> > +		do {
> > +			rte_wfe();
> > +		} while (__atomic_load_ex_32(addr, memorder) != expected);
> > +	}
> > +}
> > +
> > +static __rte_always_inline void
> > +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> > +int memorder)
> > +{
> > +	if (__atomic_load_ex_64(addr, memorder) != expected) {
> > +		rte_sevl();
> > +		do {
> > +			rte_wfe();
> > +		} while (__atomic_load_ex_64(addr, memorder) != expected);
> > +	}
> > +}
> > +#endif
> > +
> >  #endif /* _RTE_PAUSE_H_ */
> > --
> > 2.7.4
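
To recap the plan for the next revision: following your suggested layout, the
WFE-based wait would live only in the arm header. Roughly like this, just a
sketch combining the pieces above and shown for the 32-bit variant only:

    /* lib/librte_eal/common/include/arch/arm/rte_pause_64.h */
    #ifdef RTE_ARM_USE_WFE
    #define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
    #endif
    #include "generic/rte_pause.h"

    #ifdef RTE_ARM_USE_WFE
    static __rte_always_inline void
    rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
                    int memorder)
    {
            /* Arm the event with SEVL, then sleep in WFE between exclusive
             * loads; a write to the monitored address wakes the core up.
             */
            if (__atomic_load_ex_32(addr, memorder) != expected) {
                    rte_sevl();
                    do {
                            rte_wfe();
                    } while (__atomic_load_ex_32(addr, memorder) != expected);
            }
    }
    #endif

The generic rte_pause.h would keep the plain __atomic_load_n()/rte_pause()
loop under #ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED as the fallback for all
other architectures.

/Gavin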