From: Honnappa Nagarahalli
To: Stephen Hemminger
CC: "dev@dpdk.org", nd, Honnappa Nagarahalli, nd
Thread-Topic: [PATCH v4] pflock: add phase-fair reader writer locks
Date: Wed, 31 Mar 2021 04:19:14 +0000
References: <20210212013838.312623-1-sthemmin@microsoft.com> <20210330050047.34175-1-stephen@networkplumber.org>
In-Reply-To: <20210330050047.34175-1-stephen@networkplumber.org>
Subject: Re: [dpdk-dev] [PATCH v4] pflock: add phase-fair reader writer locks
List-Id: DPDK patches and discussions

Few minor comments; overall this looks good. Tested on a few Arm platforms.

>
> This is a new type of reader-writer lock that provides better fairness
> guarantees, which is better suited for typical DPDK applications.
> A pflock has two ticket pools, one for readers and one for writers.
>
> Phase-fair reader-writer locks ensure that neither readers nor writers
> will be starved. Neither readers nor writers are preferred; they execute
> in alternating phases. All operations of the same type (reader or writer)
> that acquire the lock are handled in FIFO order. Write operations are
> exclusive, and multiple read operations can run together (until a write
> arrives).
>
> A similar implementation is in the Concurrency Kit package in FreeBSD.
> For more information see:
> "Reader-Writer Synchronization for Shared-Memory Multiprocessor
> Real-Time Systems",
> http://www.cs.unc.edu/~anderson/papers/ecrts09b.pdf
>
> Signed-off-by: Stephen Hemminger
> ---
>  app/test/meson.build                        |   2 +
>  app/test/test_pflock.c                      | 193 +++++++++++++++++++
>  lib/librte_eal/arm/include/meson.build      |   1 +
>  lib/librte_eal/arm/include/rte_pflock.h     |  18 ++
>  lib/librte_eal/include/generic/rte_pflock.h | 202 ++++++++++++++++++++
>  lib/librte_eal/ppc/include/meson.build      |   1 +
>  lib/librte_eal/ppc/include/rte_pflock.h     |  17 ++
>  lib/librte_eal/x86/include/meson.build      |   1 +
>  lib/librte_eal/x86/include/rte_pflock.h     |  18 ++
>  9 files changed, 453 insertions(+)
>  create mode 100644 app/test/test_pflock.c
>  create mode 100644 lib/librte_eal/arm/include/rte_pflock.h
>  create mode 100644 lib/librte_eal/include/generic/rte_pflock.h
>  create mode 100644 lib/librte_eal/ppc/include/rte_pflock.h
>  create mode 100644 lib/librte_eal/x86/include/rte_pflock.h
>
> diff --git a/app/test/meson.build b/app/test/meson.build
> index 76eaaea45746..bd50818f82b0 100644
> --- a/app/test/meson.build
> +++ b/app/test/meson.build
> @@ -90,6 +90,7 @@ test_sources = files('commands.c',
>         'test_mcslock.c',
>         'test_mp_secondary.c',
>         'test_per_lcore.c',
> +       'test_pflock.c',
>         'test_pmd_perf.c',
>         'test_power.c',
>         'test_power_cpufreq.c',
> @@ -228,6 +229,7 @@ fast_tests = [
>         ['meter_autotest', true],
>         ['multiprocess_autotest', false],
>         ['per_lcore_autotest', true],
> +       ['pflock_autotest', true],
>         ['prefetch_autotest', true],
>         ['rcu_qsbr_autotest', true],
>         ['red_autotest', true],
> diff --git a/app/test/test_pflock.c b/app/test/test_pflock.c
> new file mode 100644
> index 000000000000..5e3c05767767
> --- /dev/null
> +++ b/app/test/test_pflock.c
> @@ -0,0 +1,193 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Microsoft Corporation
> + */
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +#include
> +
> +#include "test.h"
> +
> +/*
> + * phase fair lock test
> + * ===========
> + * Provides UT for phase fair lock API.
> + * Main concern is on functional testing, but also provides some
> + * performance measurements.
> + * Obviously for proper testing need to be executed with more than one
> + * lcore.
> + */
> +
> +#define ITER_NUM 0x80
> +
> +#define TEST_SEC 5

The above two #defines are not used; you can remove them.

> +
> +static rte_pflock_t sl;
> +static rte_pflock_t sl_tab[RTE_MAX_LCORE];
> +static uint32_t synchro;
> +
> +enum {
> +       LC_TYPE_RDLOCK,
> +       LC_TYPE_WRLOCK,
> +};

This enum is not used; you can remove it.

> +
> +static int
> +test_pflock_per_core(__rte_unused void *arg)
> +{
> +       rte_pflock_write_lock(&sl);
> +       printf("Global write lock taken on core %u\n", rte_lcore_id());
> +       rte_pflock_write_unlock(&sl);
> +
> +       rte_pflock_write_lock(&sl_tab[rte_lcore_id()]);
> +       printf("Hello from core %u !\n", rte_lcore_id());
> +       rte_pflock_write_unlock(&sl_tab[rte_lcore_id()]);
> +
> +       rte_pflock_read_lock(&sl);
> +       printf("Global read lock taken on core %u\n", rte_lcore_id());
> +       rte_delay_ms(100);
> +       printf("Release global read lock on core %u\n", rte_lcore_id());
> +       rte_pflock_read_unlock(&sl);
> +
> +       return 0;
> +}
> +
> +static rte_pflock_t lk = RTE_PFLOCK_INITIALIZER;
> +static volatile uint64_t pflock_data;
> +static uint64_t time_count[RTE_MAX_LCORE] = {0};
> +
> +#define MAX_LOOP 10000
> +#define TEST_PFLOCK_DEBUG 0
> +
> +static int
> +load_loop_fn(__rte_unused void *arg)
> +{
> +       uint64_t time_diff = 0, begin;
> +       uint64_t hz = rte_get_timer_hz();
> +       uint64_t lcount = 0;
> +       const unsigned int lcore = rte_lcore_id();
> +
> +       /* wait synchro for workers */
> +       if (lcore != rte_get_main_lcore())
> +               rte_wait_until_equal_32(&synchro, 1, __ATOMIC_RELAXED);
> +
> +       begin = rte_rdtsc_precise();
> +       while (lcount < MAX_LOOP) {
> +               rte_pflock_write_lock(&lk);
> +               ++pflock_data;

This should be an atomic increment; better to use an atomic fetch-add.

> +               rte_pflock_write_unlock(&lk);
> +
> +               rte_pflock_read_lock(&lk);
> +               if (TEST_PFLOCK_DEBUG && !(lcount % 100))
> +                       printf("Core [%u] pflock_data = %"PRIu64"\n",
> +                               lcore, pflock_data);
> +               rte_pflock_read_unlock(&lk);
> +
> +               lcount++;
> +               /* delay to make lock duty cycle slightly realistic */
> +               rte_pause();
> +       }
> +
> +       time_diff = rte_rdtsc_precise() - begin;
> +       time_count[lcore] = time_diff * 1000000 / hz;
> +       return 0;
> +}
> +
> +static int
> +test_pflock_perf(void)
> +{
> +       unsigned int i;
> +       uint64_t total = 0;
> +
> +       printf("\nPhase fair test on %u cores...\n", rte_lcore_count());
> +
> +       /* clear synchro and start workers */
> +       synchro = 0;
> +       if (rte_eal_mp_remote_launch(load_loop_fn, NULL, SKIP_MAIN) < 0)
> +               return -1;
> +
> +       /* start synchro and launch test on main */
> +       __atomic_store_n(&synchro, 1, __ATOMIC_RELAXED);
> +       load_loop_fn(NULL);
> +
> +       rte_eal_mp_wait_lcore();
> +
> +       RTE_LCORE_FOREACH(i) {
> +               printf("Core [%u] cost time = %"PRIu64" us\n",
> +                       i, time_count[i]);
> +               total += time_count[i];
> +       }
> +
> +       printf("Total cost time = %"PRIu64" us\n", total);
> +       memset(time_count, 0, sizeof(time_count));
> +
> +       return 0;
> +}
> +
> +/*
> + * - There is a global pflock and a table of pflocks (one per lcore).
> + *
> + * - The test function takes all of these locks and launches the
> + *   ``test_pflock_per_core()`` function on each core (except the main).
> + *
> + *   - The function takes the global write lock, displays something,
> + *     then releases the global lock.
> + *   - Then, it takes the per-lcore write lock, displays something, and
> + *     releases the per-core lock.
> + *   - Finally, a read lock is taken during 100 ms, then released.
> + *
> + * - The main function unlocks the per-lcore locks sequentially and
> + *   waits between each lock. This triggers the display of a message
> + *   for each core, in the correct order.
> + *
> + *   Then, it tries to take the global write lock and display the last
> + *   message. The autotest script checks that the message order is correct.
> + */
> +static int
> +test_pflock(void)
> +{
> +       int i;
> +
> +       rte_pflock_init(&sl);
> +       for (i = 0; i < RTE_MAX_LCORE; i++)
> +               rte_pflock_init(&sl_tab[i]);
> +
> +       rte_pflock_write_lock(&sl);
> +
> +       RTE_LCORE_FOREACH_WORKER(i) {
> +               rte_pflock_write_lock(&sl_tab[i]);
> +               rte_eal_remote_launch(test_pflock_per_core, NULL, i);
> +       }
> +
> +       rte_pflock_write_unlock(&sl);
> +
> +       RTE_LCORE_FOREACH_WORKER(i) {
> +               rte_pflock_write_unlock(&sl_tab[i]);
> +               rte_delay_ms(100);
> +       }
> +
> +       rte_pflock_write_lock(&sl);
> +       /* this message should be the last message of test */
> +       printf("Global write lock taken on main core %u\n", rte_lcore_id());
> +       rte_pflock_write_unlock(&sl);
> +
> +       rte_eal_mp_wait_lcore();
> +
> +       if (test_pflock_perf() < 0)

Suggest separating out the performance test so that it is not run on the cloud CI platforms (which have issues with performance tests timing out). I think autotest_data.py needs to be modified.

> +               return -1;
> +
> +       return 0;
> +}
> +
> +REGISTER_TEST_COMMAND(pflock_autotest, test_pflock);
> diff --git a/lib/librte_eal/arm/include/meson.build
> b/lib/librte_eal/arm/include/meson.build
> index 770766de1a34..2c3cff61bed6 100644
> --- a/lib/librte_eal/arm/include/meson.build
> +++ b/lib/librte_eal/arm/include/meson.build
> @@ -21,6 +21,7 @@ arch_headers = files(
>         'rte_pause_32.h',
>         'rte_pause_64.h',
>         'rte_pause.h',
> +       'rte_pflock.h',
>         'rte_power_intrinsics.h',
>         'rte_prefetch_32.h',
>         'rte_prefetch_64.h',
> diff --git a/lib/librte_eal/arm/include/rte_pflock.h
> b/lib/librte_eal/arm/include/rte_pflock.h
> new file mode 100644
> index 000000000000..bb9934eec469
> --- /dev/null
> +++ b/lib/librte_eal/arm/include/rte_pflock.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_PFLOCK_ARM_H_
> +#define _RTE_PFLOCK_ARM_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_pflock.h"
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_PFLOCK_ARM_H_ */
> diff --git a/lib/librte_eal/include/generic/rte_pflock.h
> b/lib/librte_eal/include/generic/rte_pflock.h
> new file mode 100644
> index 000000000000..7c183633df60
> --- /dev/null
> +++ b/lib/librte_eal/include/generic/rte_pflock.h
> @@ -0,0 +1,202 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Microsoft Corp.
> + * All rights reserved.
> + *
> + * Derived from Concurrency Kit
> + * Copyright 2011-2015 Samy Al Bahra.
> + */
> +
> +#ifndef _RTE_PFLOCK_H_
> +#define _RTE_PFLOCK_H_
> +
> +/**
> + * @file
> + *
> + * Phase-fair locks
> + *
> + * This file defines an API for Phase Fair reader writer locks,
> + * which is a variant of typical reader-writer locks that prevent
> + * starvation. In this type of lock, readers and writers alternate.
> + * This significantly reduces the worst-case blocking for readers and writers.
> + *
> + * This is an implementation derived from FreeBSD
> + * based on the work described in:
> + *    Brandenburg, B. and Anderson, J. 2010. Spin-Based
> + *    Reader-Writer Synchronization for Multiprocessor Real-Time Systems
> + *
> + * All locks must be initialised before use, and only initialised once.
> + */
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include
> +#include
> +
> +/**
> + * The rte_pflock_t type.
> + */
> +struct rte_pflock {
> +       struct {
> +               uint16_t in;
> +               uint16_t out;
> +       } rd, wr;
> +};
> +typedef struct rte_pflock rte_pflock_t;
> +
> +/*
> + * Allocation of bits to reader
> + *
> + * 16                 8 7    2 1 0

Typo: this numbering should be 15 4 3 2 1 0.

> + * +-------------------+------+-+-+
> + * | rin: reads issued |unused| | |
> + * +-------------------+------+-+-+
> + *                             ^ ^
> + *                             | |
> + * PRES: writer present ----+ |
> + * PHID: writer phase id ----+
> + *
> + * 16                2 7    0

Here, it should be 15 4 3 0.

> + * +------------------+------+
> + * |rout:read complete|unused|
> + * +------------------+------+
> + *
> + * The maximum number of readers is 4095
> + */
> +
> +/* Constants used to map the bits in reader counter */
> +#define RTE_PFLOCK_WBITS 0x3    /* Writer bits in reader. */
> +#define RTE_PFLOCK_PRES  0x2    /* Writer present bit. */
> +#define RTE_PFLOCK_PHID  0x1    /* Phase ID bit. */
> +#define RTE_PFLOCK_LSB   0xFFF0 /* reader bits. */
> +#define RTE_PFLOCK_RINC  0x10   /* Reader increment. */
> +
> +/**
> + * A static pflock initializer.
> + */
> +#define RTE_PFLOCK_INITIALIZER {  }
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Initialize the pflock to an unlocked state.
> + *
> + * @param pf
> + *   A pointer to the pflock.
> + */
> +__rte_experimental
> +static inline void

Minor: this API does not need to be inline.

> +rte_pflock_init(struct rte_pflock *pf)
> +{
> +       memset(pf, 0, sizeof(*pf));
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Take a pflock for read.
> + *
> + * @param pf
> + *   A pointer to a pflock structure.
> + */
> +__rte_experimental
> +static inline void
> +rte_pflock_read_lock(rte_pflock_t *pf)
> +{
> +       uint16_t w;
> +
> +       /*
> +        * If no writer is present, then the operation has completed
> +        * successfully.
> +        */
> +       w = __atomic_fetch_add(&pf->rd.in, RTE_PFLOCK_RINC, __ATOMIC_ACQUIRE)
> +               & RTE_PFLOCK_WBITS;
> +       if (w == 0)
> +               return;
> +
> +       /* Wait for current write phase to complete. */
> +       while ((__atomic_load_n(&pf->rd.in, __ATOMIC_ACQUIRE) & RTE_PFLOCK_WBITS) == w)
> +               rte_pause();
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Release a pflock locked for reading.
> + *
> + * @param pf
> + *   A pointer to the pflock structure.
> + */
> +__rte_experimental
> +static inline void
> +rte_pflock_read_unlock(rte_pflock_t *pf)
> +{
> +       __atomic_fetch_add(&pf->rd.out, RTE_PFLOCK_RINC, __ATOMIC_RELEASE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Take the pflock for write.
> + *
> + * @param pf
> + *   A pointer to the ticketlock.

Typo: "ticketlock" should be "pflock".

> + */
> +__rte_experimental
> +static inline void
> +rte_pflock_write_lock(rte_pflock_t *pf)
> +{
> +       uint16_t ticket, w;
> +
> +       /* Acquire ownership of write-phase.
> +        * This is same as rte_tickelock_lock().
> +        */
> +       ticket = __atomic_fetch_add(&pf->wr.in, 1, __ATOMIC_RELAXED);
> +       rte_wait_until_equal_16(&pf->wr.out, ticket, __ATOMIC_ACQUIRE);
> +
> +       /*
> +        * Acquire ticket on read-side in order to allow them
> +        * to flush. Indicates to any incoming reader that a
> +        * write-phase is pending.
> +        *
> +        * The load of rd.out in wait loop could be executed
> +        * speculatively.
> +        */
> +       w = RTE_PFLOCK_PRES | (ticket & RTE_PFLOCK_PHID);
> +       ticket = __atomic_fetch_add(&pf->rd.in, w, __ATOMIC_RELAXED);
> +
> +       /* Wait for any pending readers to flush. */
> +       rte_wait_until_equal_16(&pf->rd.out, ticket, __ATOMIC_ACQUIRE);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Release a pflock held for writing.
> + *
> + * @param pf
> + *   A pointer to a pflock structure.
> + */
> +__rte_experimental
> +static inline void
> +rte_pflock_write_unlock(rte_pflock_t *pf)
> +{
> +       /* Migrate from write phase to read phase. */
> +       __atomic_fetch_and(&pf->rd.in, RTE_PFLOCK_LSB, __ATOMIC_RELEASE);
> +
> +       /* Allow other writers to continue. */
> +       __atomic_fetch_add(&pf->wr.out, 1, __ATOMIC_RELEASE);
> +}
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* RTE_PFLOCK_H */
> diff --git a/lib/librte_eal/ppc/include/meson.build
> b/lib/librte_eal/ppc/include/meson.build
> index dae40ede546e..7692a531ccba 100644
> --- a/lib/librte_eal/ppc/include/meson.build
> +++ b/lib/librte_eal/ppc/include/meson.build
> @@ -11,6 +11,7 @@ arch_headers = files(
>         'rte_mcslock.h',
>         'rte_memcpy.h',
>         'rte_pause.h',
> +       'rte_pflock.h',
>         'rte_power_intrinsics.h',
>         'rte_prefetch.h',
>         'rte_rwlock.h',
> diff --git a/lib/librte_eal/ppc/include/rte_pflock.h
> b/lib/librte_eal/ppc/include/rte_pflock.h
> new file mode 100644
> index 000000000000..27c201b11d05
> --- /dev/null
> +++ b/lib/librte_eal/ppc/include/rte_pflock.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_PFLOCK_PPC_64_H_
> +#define _RTE_PFLOCK_PPC_64_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_pflock.h"
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_PFLOCK_PPC_64_H_ */
> diff --git a/lib/librte_eal/x86/include/meson.build
> b/lib/librte_eal/x86/include/meson.build
> index 1a6ad0b17342..f43645c20899 100644
> --- a/lib/librte_eal/x86/include/meson.build
> +++ b/lib/librte_eal/x86/include/meson.build
> @@ -10,6 +10,7 @@ arch_headers = files(
>         'rte_mcslock.h',
>         'rte_memcpy.h',
>         'rte_pause.h',
> +       'rte_pflock.h',
>         'rte_power_intrinsics.h',
>         'rte_prefetch.h',
>         'rte_rtm.h',
> diff --git a/lib/librte_eal/x86/include/rte_pflock.h
> b/lib/librte_eal/x86/include/rte_pflock.h
> new file mode 100644
> index 000000000000..c2d876062c08
> --- /dev/null
> +++ b/lib/librte_eal/x86/include/rte_pflock.h
> @@ -0,0 +1,18 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 Microsoft Corporation
> + */
> +
> +#ifndef _RTE_PFLOCK_X86_64_H_
> +#define _RTE_PFLOCK_X86_64_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include "generic/rte_pflock.h"
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_PFLOCK_X86_64_H_ */
> --
> 2.30.2