From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Feifei Wang, Ruifeng Wang
CC: "dev@dpdk.org", "nd@arm.com"
Subject: Re: [dpdk-dev] [RFC PATCH v3 1/5] eal: add new definitions for wait scheme
Date: Thu, 7 Oct 2021 16:18:52 +0000
References: <20210902053253.3017858-1-feifei.wang2@arm.com>
 <20210926063302.1541193-1-feifei.wang2@arm.com>
 <20210926063302.1541193-2-feifei.wang2@arm.com>
In-Reply-To: <20210926063302.1541193-2-feifei.wang2@arm.com>
Content-Type: text/plain; charset="us-ascii"
List-Id: DPDK patches and discussions

> Introduce macros as a generic interface for address monitoring.
>
> Signed-off-by: Feifei Wang
> Reviewed-by: Ruifeng Wang
> ---
>  lib/eal/arm/include/rte_pause_64.h  | 151 ++++++++++++++++++----------
>  lib/eal/include/generic/rte_pause.h |  78 ++++++++++++++
>  2 files changed, 175 insertions(+), 54 deletions(-)
>
> diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h
> index e87d10b8cc..205510e044 100644
> --- a/lib/eal/arm/include/rte_pause_64.h
> +++ b/lib/eal/arm/include/rte_pause_64.h
> @@ -31,20 +31,12 @@ static inline void rte_pause(void)
>  /* Put processor into low power WFE (Wait For Event) state. */
>  #define __WFE() { asm volatile("wfe" : : : "memory"); }
>
> -static __rte_always_inline void
> -rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> -		int memorder)
> -{
> -	uint16_t value;
> -
> -	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
> -
> -	/*
> -	 * Atomic exclusive load from addr, it returns the 16-bit content of
> -	 * *addr while making it 'monitored'; when it is written by someone
> -	 * else, the 'monitored' state is cleared and an event is generated
> -	 * implicitly to exit WFE.
> -	 */
> +/*
> + * Atomic exclusive load from addr, it returns the 16-bit content of
> + * *addr while making it 'monitored'; when it is written by someone
> + * else, the 'monitored' state is cleared and an event is generated
> + * implicitly to exit WFE.
> + */
>  #define __LOAD_EXC_16(src, dst, memorder) { \
>  	if (memorder == __ATOMIC_RELAXED) { \
>  		asm volatile("ldxrh %w[tmp], [%x[addr]]" \
> @@ -58,6 +50,52 @@ rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
>  		: "memory"); \
>  	} }
>
> +/*
> + * Atomic exclusive load from addr, it returns the 32-bit content of
> + * *addr while making it 'monitored'; when it is written by someone
> + * else, the 'monitored' state is cleared and an event is generated
> + * implicitly to exit WFE.
> + */
> +#define __LOAD_EXC_32(src, dst, memorder) { \
> +	if (memorder == __ATOMIC_RELAXED) { \
> +		asm volatile("ldxr %w[tmp], [%x[addr]]" \
> +			: [tmp] "=&r" (dst) \
> +			: [addr] "r"(src) \
> +			: "memory"); \
> +	} else { \
> +		asm volatile("ldaxr %w[tmp], [%x[addr]]" \
> +			: [tmp] "=&r" (dst) \
> +			: [addr] "r"(src) \
> +			: "memory"); \
> +	} }
> +
> +/*
> + * Atomic exclusive load from addr, it returns the 64-bit content of
> + * *addr while making it 'monitored'; when it is written by someone
> + * else, the 'monitored' state is cleared and an event is generated
> + * implicitly to exit WFE.
> + */
> +#define __LOAD_EXC_64(src, dst, memorder) { \
> +	if (memorder == __ATOMIC_RELAXED) { \
> +		asm volatile("ldxr %x[tmp], [%x[addr]]" \
> +			: [tmp] "=&r" (dst) \
> +			: [addr] "r"(src) \
> +			: "memory"); \
> +	} else { \
> +		asm volatile("ldaxr %x[tmp], [%x[addr]]" \
> +			: [tmp] "=&r" (dst) \
> +			: [addr] "r"(src) \
> +			: "memory"); \
> +	} }
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +		int memorder)
> +{
> +	uint16_t value;
> +
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
> +
>  	__LOAD_EXC_16(addr, value, memorder)
>  	if (value != expected) {
>  		__SEVL()
> @@ -66,7 +104,6 @@ rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
>  		__LOAD_EXC_16(addr, value, memorder)
>  	} while (value != expected);
>  	}
> -#undef __LOAD_EXC_16
>  }
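
(An aside for readers following the refactor: after preprocessing, the 16-bit
wait above with __ATOMIC_ACQUIRE boils down to roughly the code below. This is
a hand-expanded sketch of the macros, not compiler output:

	uint16_t value;

	/* exclusive load marks *addr 'monitored'; ldaxrh adds acquire ordering */
	asm volatile("ldaxrh %w[tmp], [%x[addr]]"
		: [tmp] "=&r" (value) : [addr] "r"(addr) : "memory");
	if (value != expected) {
		/* prime a local event so the first wfe falls straight through */
		asm volatile("sevl" : : : "memory");
		do {
			/* sleep until a store to *addr clears the monitor */
			asm volatile("wfe" : : : "memory");
			asm volatile("ldaxrh %w[tmp], [%x[addr]]"
				: [tmp] "=&r" (value)
				: [addr] "r"(addr) : "memory");
		} while (value != expected);
	}

The exclusive load starts the monitoring, and a store to the address by another
agent generates the event that wakes WFE.)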
>
>  static __rte_always_inline void
> @@ -77,25 +114,6 @@ rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
>
>  	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
>
> -	/*
> -	 * Atomic exclusive load from addr, it returns the 32-bit content of
> -	 * *addr while making it 'monitored'; when it is written by someone
> -	 * else, the 'monitored' state is cleared and an event is generated
> -	 * implicitly to exit WFE.
> -	 */
> -#define __LOAD_EXC_32(src, dst, memorder) { \
> -	if (memorder == __ATOMIC_RELAXED) { \
> -		asm volatile("ldxr %w[tmp], [%x[addr]]" \
> -			: [tmp] "=&r" (dst) \
> -			: [addr] "r"(src) \
> -			: "memory"); \
> -	} else { \
> -		asm volatile("ldaxr %w[tmp], [%x[addr]]" \
> -			: [tmp] "=&r" (dst) \
> -			: [addr] "r"(src) \
> -			: "memory"); \
> -	} }
> -
>  	__LOAD_EXC_32(addr, value, memorder)
>  	if (value != expected) {
>  		__SEVL()
> @@ -104,7 +122,6 @@ rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
>  		__LOAD_EXC_32(addr, value, memorder)
>  	} while (value != expected);
>  	}
> -#undef __LOAD_EXC_32
>  }
>
>  static __rte_always_inline void
> @@ -115,25 +132,6 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
>
>  	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);
>
> -	/*
> -	 * Atomic exclusive load from addr, it returns the 64-bit content of
> -	 * *addr while making it 'monitored'; when it is written by someone
> -	 * else, the 'monitored' state is cleared and an event is generated
> -	 * implicitly to exit WFE.
> -	 */
> -#define __LOAD_EXC_64(src, dst, memorder) { \
> -	if (memorder == __ATOMIC_RELAXED) { \
> -		asm volatile("ldxr %x[tmp], [%x[addr]]" \
> -			: [tmp] "=&r" (dst) \
> -			: [addr] "r"(src) \
> -			: "memory"); \
> -	} else { \
> -		asm volatile("ldaxr %x[tmp], [%x[addr]]" \
> -			: [tmp] "=&r" (dst) \
> -			: [addr] "r"(src) \
> -			: "memory"); \
> -	} }
> -
>  	__LOAD_EXC_64(addr, value, memorder)
>  	if (value != expected) {
>  		__SEVL()
> @@ -143,6 +141,51 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
>  	} while (value != expected);
>  	}
>  }
> +
> +#define rte_wait_event_16(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	uint16_t value;							\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \
> +	__LOAD_EXC_16(addr, value, memorder)				\
> +	if ((value & mask) cond expected) {				\
> +		__SEVL()						\
> +		do {							\
> +			__WFE()						\
> +			__LOAD_EXC_16(addr, value, memorder)		\
> +		} while ((value & mask) cond expected);			\
> +	}								\
> +} while (0)
> +
> +#define rte_wait_event_32(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	uint32_t value;							\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \
> +	__LOAD_EXC_32(addr, value, memorder)				\
> +	if ((value & mask) cond expected) {				\
> +		__SEVL()						\
> +		do {							\
> +			__WFE()						\
> +			__LOAD_EXC_32(addr, value, memorder)		\
> +		} while ((value & mask) cond expected);			\
> +	}								\
> +} while (0)
> +
> +#define rte_wait_event_64(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	uint64_t value;							\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \
> +	__LOAD_EXC_64(addr, value, memorder)				\
> +	if ((value & mask) cond expected) {				\
> +		__SEVL()						\
> +		do {							\
> +			__WFE()						\
> +			__LOAD_EXC_64(addr, value, memorder)		\
> +		} while ((value & mask) cond expected);			\
> +	}								\
> +} while (0)
> +
> +#undef __LOAD_EXC_16
> +#undef __LOAD_EXC_32
>  #undef __LOAD_EXC_64
>
>  #undef __SEVL
> diff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h
> index 668ee4a184..4e32107eca 100644
> --- a/lib/eal/include/generic/rte_pause.h
> +++ b/lib/eal/include/generic/rte_pause.h
> @@ -111,6 +111,84 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
>  	while (__atomic_load_n(addr, memorder) != expected)
>  		rte_pause();
>  }
> +
> +/*
> + * Wait until a 16-bit *addr breaks the condition, with a relaxed memory
> + * ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param mask
> + *  A mask of the value bits of interest.
> + * @param expected
> + *  A 16-bit expected value to be in the memory location.
> + * @param cond
> + *  A symbol representing the condition (==, !=).
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */

Hmm, so now we have 2 APIs doing a similar thing: rte_wait_until_equal_n()
and rte_wait_event_n(). Can we perhaps unite them somehow?
At least make rte_wait_until_equal_n() use rte_wait_event_n() underneath.

> +#define rte_wait_event_16(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \

And why is the user not allowed to use __ATOMIC_SEQ_CST here?
BTW, if we expect memorder to always be a constant, might a BUILD_BUG_ON()
be better?

> +									\
> +	while ((__atomic_load_n(addr, memorder) & mask) cond expected)	\
> +		rte_pause();						\
> +} while (0)

Two thoughts on these macros:
1. It is good practice to put () around macro parameters in the macro body;
it will save you from a lot of unexpected trouble.
2. I think these 3 macros can be united into one. Something like:

#define rte_wait_event(addr, mask, expected, cond, memorder) do {\
	typeof (*(addr)) val = __atomic_load_n((addr), (memorder)); \
	if ((val & (typeof(val))(mask)) cond (typeof(val))(expected)) \
		break; \
	rte_pause(); \
} while (1);
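
For instance, a minimal, untested sketch of rte_wait_until_equal_32() sitting
on top of such a generic macro could look like the code below. Note that this
sketch exits when the condition holds, while the patch's rte_wait_event_n()
macros spin while the condition holds, so the comparison here is ==, not !=:

static __rte_always_inline void
rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
		int memorder)
{
	/* the whole value is of interest, so the mask covers all bits */
	rte_wait_event(addr, UINT32_MAX, expected, ==, memorder);
}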
> +
> +/*
> + * Wait until a 32-bit *addr breaks the condition, with a relaxed memory
> + * ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param mask
> + *  A mask of the value bits of interest.
> + * @param expected
> + *  A 32-bit expected value to be in the memory location.
> + * @param cond
> + *  A symbol representing the condition (==, !=).
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +#define rte_wait_event_32(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \
> +									\
> +	while ((__atomic_load_n(addr, memorder) & mask) cond expected)	\
> +		rte_pause();						\
> +} while (0)
> +
> +/*
> + * Wait until a 64-bit *addr breaks the condition, with a relaxed memory
> + * ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param mask
> + *  A mask of the value bits of interest.
> + * @param expected
> + *  A 64-bit expected value to be in the memory location.
> + * @param cond
> + *  A symbol representing the condition (==, !=).
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +#define rte_wait_event_64(addr, mask, expected, cond, memorder)	\
> +do {									\
> +	assert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED); \
> +									\
> +	while ((__atomic_load_n(addr, memorder) & mask) cond expected)	\
> +		rte_pause();						\
> +} while (0)
>  #endif
>
>  #endif /* _RTE_PAUSE_H_ */
> --
> 2.25.1
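
As a usage illustration (the names here are made up, not part of the patch):
a caller waiting for a hypothetical busy bit to clear with the proposed
32-bit macro would write something like:

/* hypothetical status flag, for illustration only */
#define RING_F_BUSY (UINT32_C(1) << 0)

static inline void
wait_ring_idle(volatile uint32_t *status)
{
	/* spin while the busy bit is still set in *status */
	rte_wait_event_32(status, RING_F_BUSY, RING_F_BUSY, ==,
			__ATOMIC_ACQUIRE);
}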