From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <Gavin.Hu@arm.com>
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 (mail-eopbgr10062.outbound.protection.outlook.com [40.107.1.62])
 by dpdk.org (Postfix) with ESMTP id E911C1B674
 for <dev@dpdk.org>; Wed, 19 Dec 2018 09:28:40 +0100 (CET)
Received: from VI1PR08MB3167.eurprd08.prod.outlook.com (52.133.15.142) by
 VI1PR08MB3245.eurprd08.prod.outlook.com (52.134.30.139) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.1446.17; Wed, 19 Dec 2018 08:28:39 +0000
Received: from VI1PR08MB3167.eurprd08.prod.outlook.com
 ([fe80::b5a5:e179:34f1:7d21]) by VI1PR08MB3167.eurprd08.prod.outlook.com
 ([fe80::b5a5:e179:34f1:7d21%5]) with mapi id 15.20.1446.018; Wed, 19 Dec 2018
 08:28:39 +0000
From: "Gavin Hu (Arm Technology China)" <Gavin.Hu@arm.com>
To: Konstantin Ananyev <konstantin.ananyev@intel.com>, "dev@dpdk.org"
 <dev@dpdk.org>
Thread-Topic: [dpdk-dev] [PATCH 2/2] test: add new test-cases for rwlock
 autotest
Thread-Index: AQHUe3Y9SlDv3oCYpUugHgmH3mHwTKWF8WYg
Date: Wed, 19 Dec 2018 08:28:39 +0000
Message-ID: <VI1PR08MB31671A8C05C358ED8D52616A8FBE0@VI1PR08MB3167.eurprd08.prod.outlook.com>
References: <1542130061-3702-1-git-send-email-konstantin.ananyev@intel.com>
 <1542130061-3702-3-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1542130061-3702-3-git-send-email-konstantin.ananyev@intel.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: arm.com
Subject: Re: [dpdk-dev] [PATCH 2/2] test: add new test-cases for rwlock
 autotest
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions <dev.dpdk.org>
List-Unsubscribe: <https://mails.dpdk.org/options/dev>,
 <mailto:dev-request@dpdk.org?subject=unsubscribe>
List-Archive: <http://mails.dpdk.org/archives/dev/>
List-Post: <mailto:dev@dpdk.org>
List-Help: <mailto:dev-request@dpdk.org?subject=help>
List-Subscribe: <https://mails.dpdk.org/listinfo/dev>,
 <mailto:dev-request@dpdk.org?subject=subscribe>
X-List-Received-Date: Wed, 19 Dec 2018 08:28:41 -0000



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Konstantin Ananyev
> Sent: Wednesday, November 14, 2018 1:28 AM
> To: dev@dpdk.org
> Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
> Subject: [dpdk-dev] [PATCH 2/2] test: add new test-cases for rwlock autotest
>
> This patch targets the 19.02 release.
>
> Add a few functional and performance tests
> for rte_rwlock_read_trylock() and rte_rwlock_write_trylock().
>
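For other readers' context: as I understand patch 1/2, both trylock flavours
return 0 when the lock is acquired and a negative value (-EBUSY) when it is
contended, which is what the success/fail counters below rely on. A minimal
usage sketch (the helper try_peek() and the shared variable are my own
illustration, not part of this patch):

#include <stdint.h>
#include <rte_rwlock.h>

static rte_rwlock_t lk = RTE_RWLOCK_INITIALIZER;
static uint32_t shared_val;

/* non-blocking read: give up immediately if a writer holds the lock */
static int
try_peek(uint32_t *out)
{
	int rc;

	rc = rte_rwlock_read_trylock(&lk);
	if (rc != 0)
		return rc;	/* typically -EBUSY: contended, caller may retry later */
	*out = shared_val;
	rte_rwlock_read_unlock(&lk);
	return 0;
}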
> Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> ---
>  test/test/test_rwlock.c | 405
> ++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 386 insertions(+), 19 deletions(-)
>
> diff --git a/test/test/test_rwlock.c b/test/test/test_rwlock.c index
> 29171c422..47caac9fb 100644
> --- a/test/test/test_rwlock.c
> +++ b/test/test/test_rwlock.c
> @@ -4,8 +4,10 @@
>
>  #include <stdio.h>
>  #include <stdint.h>
> +#include <inttypes.h>
>  #include <unistd.h>
>  #include <sys/queue.h>
> +#include <string.h>
>
>  #include <rte_common.h>
>  #include <rte_memory.h>
> @@ -22,29 +24,41 @@
>  /*
>   * rwlock test
>   * ===========
> - *
> - * - There is a global rwlock and a table of rwlocks (one per lcore).
> - *
> - * - The test function takes all of these locks and launches the
> - *   ``test_rwlock_per_core()`` function on each core (except the master).
> - *
> - *   - The function takes the global write lock, display something,
> - *     then releases the global lock.
> - *   - Then, it takes the per-lcore write lock, display something, and
> - *     releases the per-core lock.
> - *   - Finally, a read lock is taken during 100 ms, then released.
> - *
> - * - The main function unlocks the per-lcore locks sequentially and
> - *   waits between each lock. This triggers the display of a message
> - *   for each core, in the correct order.
> - *
> - *   Then, it tries to take the global write lock and display the last
> - *   message. The autotest script checks that the message order is correct.
>   */
Since the file-level description was moved down to rwlock_test1(), a new
general description covering all the test-cases is still needed here; a
possible sketch is below.
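Something along these lines could work (the wording is only my suggestion):

/*
 * rwlock test
 * ===========
 * Two groups of test-cases:
 * - rwlock_test1: the original functional test, see the comment above
 *   rwlock_test1() for the details.
 * - try_rwlock_test_*: functional and performance tests for
 *   rte_rwlock_read_trylock()/rte_rwlock_write_trylock(), run for
 *   TEST_SEC seconds with different mixes of reader and writer lcores.
 */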
>
> +#define ITER_NUM	0x80
> +
> +#define TEST_SEC	5
> +
>  static rte_rwlock_t sl;
>  static rte_rwlock_t sl_tab[RTE_MAX_LCORE];
>
> +enum {
> +	LC_TYPE_RDLOCK,
> +	LC_TYPE_WRLOCK,
> +};
> +
> +static struct {
> +	rte_rwlock_t lock;
> +	uint64_t tick;
> +	volatile union {
> +		uint8_t u8[RTE_CACHE_LINE_SIZE];
> +		uint64_t u64[RTE_CACHE_LINE_SIZE / sizeof(uint64_t)];
> +	} data;
> +} __rte_cache_aligned try_rwlock_data;
> +
> +struct try_rwlock_lcore {
> +	int32_t rc;
> +	int32_t type;
> +	struct {
> +		uint64_t tick;
> +		uint64_t fail;
> +		uint64_t success;
> +	} stat;
> +} __rte_cache_aligned;
> +
> +static struct try_rwlock_lcore try_lcore_data[RTE_MAX_LCORE];
> +
>  static int
>  test_rwlock_per_core(__attribute__((unused)) void *arg)
>  {
> @@ -65,8 +79,27 @@ test_rwlock_per_core(__attribute__((unused)) void *arg)
>  	return 0;
>  }
>
> +/*
> + * - There is a global rwlock and a table of rwlocks (one per lcore).
> + *
> + * - The test function takes all of these locks and launches the
> + *   ``test_rwlock_per_core()`` function on each core (except the master).
> + *
> + *   - The function takes the global write lock, display something,
> + *     then releases the global lock.
> + *   - Then, it takes the per-lcore write lock, display something, and
> + *     releases the per-core lock.
> + *   - Finally, a read lock is taken during 100 ms, then released.
> + *
> + * - The main function unlocks the per-lcore locks sequentially and
> + *   waits between each lock. This triggers the display of a message
> + *   for each core, in the correct order.
> + *
> + *   Then, it tries to take the global write lock and display the last
> + *   message. The autotest script checks that the message order is correct.
> + */
>  static int
> -test_rwlock(void)
> +rwlock_test1(void)
>  {
>  	int i;
>
> @@ -98,4 +131,338 @@ test_rwlock(void)
>  	return 0;
>  }
>
> +static int
> +try_read(uint32_t lc)
> +{
> +	int32_t rc;
> +	uint32_t i;
> +
> +	rc = rte_rwlock_read_trylock(&try_rwlock_data.lock);
> +	if (rc != 0)
> +		return rc;
> +
> +	for (i = 0; i != RTE_DIM(try_rwlock_data.data.u64); i++) {
> +
> +		/* race condition occurred, lock doesn't work properly */
> +		if (try_rwlock_data.data.u64[i] != 0) {
> +			printf("%s(%u) error: unexpected data pattern\n",
> +				__func__, lc);
> +			rte_memdump(stdout, NULL,
> +				(void *)(uintptr_t)&try_rwlock_data.data,
> +				sizeof(try_rwlock_data.data));
> +			rc = -EFAULT;
> +			break;
> +		}
> +	}
> +
> +	rte_rwlock_read_unlock(&try_rwlock_data.lock);
> +	return rc;
> +}
> +
> +static int
> +try_write(uint32_t lc)
> +{
> +	int32_t rc;
> +	uint32_t i, v;
> +
> +	v = RTE_MAX(lc % UINT8_MAX, 1U);
> +
> +	rc = rte_rwlock_write_trylock(&try_rwlock_data.lock);
> +	if (rc != 0)
> +		return rc;
> +
> +	/* update by bytes in reverse order */
> +	for (i = RTE_DIM(try_rwlock_data.data.u8); i-- != 0; ) {
> +
> +		/* race condition occurred, lock doesn't work properly */
> +		if (try_rwlock_data.data.u8[i] != 0) {
> +			printf("%s:%d(%u) error: unexpected data pattern\n",
> +				__func__, __LINE__, lc);
> +			rte_memdump(stdout, NULL,
> +				(void *)(uintptr_t)&try_rwlock_data.data,
> +				sizeof(try_rwlock_data.data));
> +			rc = -EFAULT;
> +			break;
> +		}
> +
> +		try_rwlock_data.data.u8[i] = v;
> +	}
> +
> +	/* restore by bytes in reverse order */
> +	for (i = RTE_DIM(try_rwlock_data.data.u8); i-- != 0; ) {
> +
> +		/* race condition occurred, lock doesn't work properly */
> +		if (try_rwlock_data.data.u8[i] != v) {
> +			printf("%s:%d(%u) error: unexpected data pattern\n",
> +				__func__, __LINE__, lc);
> +			rte_memdump(stdout, NULL,
> +				(void *)(uintptr_t)&try_rwlock_data.data,
> +				sizeof(try_rwlock_data.data));
> +			rc = -EFAULT;
> +			break;
> +		}
> +
> +		try_rwlock_data.data.u8[i] = 0;
> +	}
> +
> +	rte_rwlock_write_unlock(&try_rwlock_data.lock);
> +	return rc;
> +}
> +
> +static int
> +try_read_lcore(__rte_unused void *data)
> +{
> +	int32_t rc;
> +	uint32_t i, lc;
> +	uint64_t ftm, stm, tm;
> +	struct try_rwlock_lcore *lcd;
> +
> +	lc = rte_lcore_id();
> +	lcd = try_lcore_data + lc;
> +	lcd->type = LC_TYPE_RDLOCK;
> +
> +	ftm = try_rwlock_data.tick;
> +	stm = rte_get_timer_cycles();
> +
> +	do {
> +		for (i = 0; i != ITER_NUM; i++) {
> +			rc = try_read(lc);
> +			if (rc == 0)
> +				lcd->stat.success++;
> +			else if (rc == -EBUSY)
> +				lcd->stat.fail++;
> +			else
> +				break;
> +			rc = 0;
> +		}
> +		tm = rte_get_timer_cycles() - stm;
> +	} while (tm < ftm && rc == 0);
> +
> +	lcd->rc = rc;
> +	lcd->stat.tick = tm;
> +	return rc;
> +}
> +
> +static int
> +try_write_lcore(__rte_unused void *data)
> +{
> +	int32_t rc;
> +	uint32_t i, lc;
> +	uint64_t ftm, stm, tm;
> +	struct try_rwlock_lcore *lcd;
> +
> +	lc = rte_lcore_id();
> +	lcd = try_lcore_data + lc;
> +	lcd->type = LC_TYPE_WRLOCK;
> +
> +	ftm = try_rwlock_data.tick;
> +	stm = rte_get_timer_cycles();
> +
> +	do {
> +		for (i = 0; i != ITER_NUM; i++) {
> +			rc = try_write(lc);
> +			if (rc == 0)
> +				lcd->stat.success++;
> +			else if (rc == -EBUSY)
> +				lcd->stat.fail++;
> +			else
> +				break;
> +			rc = 0;
> +		}
> +		tm = rte_get_timer_cycles() - stm;
> +	} while (tm < ftm && rc == 0);
> +
> +	lcd->rc = rc;
> +	lcd->stat.tick = tm;
> +	return rc;
> +}
> +
> +static void
> +print_try_lcore_stats(const struct try_rwlock_lcore *tlc, uint32_t lc)
> +{
> +	uint64_t f, s;
> +
> +	f = RTE_MAX(tlc->stat.fail, 1ULL);
> +	s = RTE_MAX(tlc->stat.success, 1ULL);
> +
> +	printf("try_lcore_data[%u]={\n"
> +		"\trc=%d,\n"
> +		"\ttype=%s,\n"
> +		"\tfail=%" PRIu64 ",\n"
> +		"\tsuccess=%" PRIu64 ",\n"
> +		"\tcycles=%" PRIu64 ",\n"
> +		"\tcycles/op=%#Lf,\n"
> +		"\tcycles/success=%#Lf,\n"
> +		"\tsuccess/fail=%#Lf,\n"
> +		"};\n",
> +		lc,
> +		tlc->rc,
> +		tlc->type == LC_TYPE_RDLOCK ? "RDLOCK" : "WRLOCK",
> +		tlc->stat.fail,
> +		tlc->stat.success,
> +		tlc->stat.tick,
> +		(long double)tlc->stat.tick /
> +		(tlc->stat.fail + tlc->stat.success),
> +		(long double)tlc->stat.tick / s,
> +		(long double)tlc->stat.success / f);
> +}
> +
> +static void
> +collect_try_lcore_stats(struct try_rwlock_lcore *tlc,
> +	const struct try_rwlock_lcore *lc)
> +{
> +	tlc->stat.tick += lc->stat.tick;
> +	tlc->stat.fail += lc->stat.fail;
> +	tlc->stat.success += lc->stat.success;
> +}
> +
> +/*
> + * Process collected results:
> + *  - check status
> + *  - collect and print statistics
> + */
> +static int
> +process_try_lcore_stats(void)
> +{
> +	int32_t rc;
> +	uint32_t lc, rd, wr;
> +	struct try_rwlock_lcore rlc, wlc;
> +
> +	memset(&rlc, 0, sizeof(rlc));
> +	memset(&wlc, 0, sizeof(wlc));
> +
> +	rlc.type = LC_TYPE_RDLOCK;
> +	wlc.type = LC_TYPE_WRLOCK;
> +	rd = 0;
> +	wr = 0;
> +
> +	rc = 0;
> +	RTE_LCORE_FOREACH(lc) {
> +		rc |= try_lcore_data[lc].rc;
> +		if (try_lcore_data[lc].type == LC_TYPE_RDLOCK) {
> +			collect_try_lcore_stats(&rlc, try_lcore_data + lc);
> +			rd++;
> +		} else {
> +			collect_try_lcore_stats(&wlc, try_lcore_data + lc);
> +			wr++;
> +		}
> +	}
> +
> +	if (rc == 0) {
> +		RTE_LCORE_FOREACH(lc)
> +			print_try_lcore_stats(try_lcore_data + lc, lc);
> +
> +		if (rd != 0) {
> +			printf("aggregated stats for %u RDLOCK cores:\n", rd);
> +			print_try_lcore_stats(&rlc, rd);
> +		}
> +
> +		if (wr != 0) {
> +			printf("aggregated stats for %u WRLOCK cores:\n", wr);
> +			print_try_lcore_stats(&wlc, wr);
> +		}
> +	}
> +
> +	return rc;
> +}
> +
> +static void
> +try_test_reset(void)
> +{
> +	memset(&try_lcore_data, 0, sizeof(try_lcore_data));
> +	memset(&try_rwlock_data, 0, sizeof(try_rwlock_data));
> +	try_rwlock_data.tick = TEST_SEC * rte_get_tsc_hz();
> +}
> +
> +/* all lcores grab RDLOCK */
> +static int
> +try_rwlock_test_rda(void)
> +{
> +	try_test_reset();
> +
> +	/* start read test on all available lcores */
> +	rte_eal_mp_remote_launch(try_read_lcore, NULL, CALL_MASTER);
> +	rte_eal_mp_wait_lcore();
> +
> +	return process_try_lcore_stats();
> +}
> +
> +/* all slave lcores grab RDLOCK, master one grabs WRLOCK */
> +static int
> +try_rwlock_test_rds_wrm(void)
> +{
> +	try_test_reset();
> +
> +	rte_eal_mp_remote_launch(try_read_lcore, NULL, SKIP_MASTER);
> +	try_write_lcore(NULL);
> +	rte_eal_mp_wait_lcore();
> +
> +	return process_try_lcore_stats();
> +}
> +
> +/* master and even slave lcores grab RDLOCK, odd lcores grab WRLOCK */
> +static int
> +try_rwlock_test_rde_wro(void)
> +{
> +	uint32_t lc, mlc;
> +
> +	try_test_reset();
> +
> +	mlc = rte_get_master_lcore();
> +
> +	RTE_LCORE_FOREACH(lc) {
> +		if (lc != mlc) {
> +			if ((lc & 1) == 0)
> +				rte_eal_remote_launch(try_read_lcore,
> +					NULL, lc);
> +			else
> +				rte_eal_remote_launch(try_write_lcore,
> +					NULL, lc);
> +		}
> +	}
> +	try_read_lcore(NULL);
> +	rte_eal_mp_wait_lcore();
> +
> +	return process_try_lcore_stats();
> +}
> +
> +static int
> +test_rwlock(void)
> +{
> +	uint32_t i;
> +	int32_t rc, ret;
> +
> +	static const struct {
> +		const char *name;
> +		int (*ftst)(void);
> +	} test[] = {
> +		{
> +			.name = "rwlock_test1",
> +			.ftst = rwlock_test1,
> +		},
> +		{
> +			.name = "try_rwlock_test_rda",
> +			.ftst = try_rwlock_test_rda,
> +		},
> +		{
> +			.name = "try_rwlock_test_rds_wrm",
> +			.ftst = try_rwlock_test_rds_wrm,
> +		},
> +		{
> +			.name = "try_rwlock_test_rde_wro",
> +			.ftst = try_rwlock_test_rde_wro,
> +		},
> +	};
> +
> +	ret = 0;
> +	for (i = 0; i != RTE_DIM(test); i++) {
> +		printf("starting test %s;\n", test[i].name);
> +		rc = test[i].ftst();
> +		printf("test %s completed with status %d\n",
> +			test[i].name, rc);
> +		ret |= rc;
> +	}
> +
> +	return ret;
> +}
> +
>  REGISTER_TEST_COMMAND(rwlock_autotest, test_rwlock);
> --
> 2.17.1
Other than the minor comment,
Reviewed-by: Gavin Hu <gavin.hu@arm.com>