From mboxrd@z Thu Jan 1 00:00:00 1970
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: "Ruifeng Wang (Arm Technology China)", "vladimir.medvedkin@intel.com", "bruce.richardson@intel.com"
CC: "dev@dpdk.org", "Gavin Hu (Arm Technology China)", nd, "Ruifeng Wang (Arm Technology China)", Honnappa Nagarahalli, nd
Date: Mon, 8 Jul 2019 04:56:58 +0000
References: <20190703054441.30162-1-ruifeng.wang@arm.com> <20190703054441.30162-3-ruifeng.wang@arm.com>
In-Reply-To: <20190703054441.30162-3-ruifeng.wang@arm.com>
Subject: Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update

>
> Compiler could generate non-atomic stores for whole table entry updating.
> This may cause incorrect nexthop to be returned, if the byte with valid flag is
> updated prior to the byte with next hot is updated.
                                 ^^^^^^^ Should be nexthop
>
> Changed to use atomic store to update whole table entry.
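The partial-update hazard described in the commit message can be sketched as below. The entry layout, field names, and the set_entry() helper are hypothetical stand-ins for the real rte_lpm tbl8 entry, not the library's actual definitions; the point is that a plain struct assignment may be compiled as several narrow stores, while __atomic_store of the whole 4-byte entry is a single write.

```c
#include <stdint.h>

/* Hypothetical 4-byte entry, loosely modeled on an LPM tbl8 entry. */
struct entry {
	uint32_t next_hop : 24;
	uint32_t valid    : 1;
	uint32_t depth    : 7;
};

static struct entry tbl8[256];

static void set_entry(uint32_t idx, uint32_t next_hop, uint32_t depth)
{
	struct entry e = { .next_hop = next_hop, .valid = 1, .depth = depth };

	/* Store the whole entry in one go, so a concurrent reader can
	 * never observe valid == 1 paired with a stale next_hop.
	 * A plain "tbl8[idx] = e;" gives the compiler no such guarantee. */
	__atomic_store(&tbl8[idx], &e, __ATOMIC_RELAXED);
}
```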
>
> Suggested-by: Medvedkin Vladimir
> Signed-off-by: Ruifeng Wang
> Reviewed-by: Gavin Hu
> ---
> v4: initial version
>
>  lib/librte_lpm/rte_lpm.c | 34 ++++++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> index baa6e7460..5d1dbd7e6 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
>  				 * Setting tbl8 entry in one go to avoid
>  				 * race conditions
>  				 */
> -				lpm->tbl8[j] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[j],
> +					&new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>
>  				continue;
>  			}
> @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
>  				 * Setting tbl8 entry in one go to avoid
>  				 * race conditions
>  				 */
> -				lpm->tbl8[j] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[j],
> +					&new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>
>  				continue;
>  			}
> @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
>  			 * Setting tbl8 entry in one go to avoid race
>  			 * condition
>  			 */
> -			lpm->tbl8[i] = new_tbl8_entry;
> +			__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +				__ATOMIC_RELAXED);
>
>  			continue;
>  		}
> @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>  			 * Setting tbl8 entry in one go to avoid race
>  			 * condition
>  			 */
> -			lpm->tbl8[i] = new_tbl8_entry;
> +			__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +				__ATOMIC_RELAXED);
>
>  			continue;
>  		}
> @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
>
>  				if (lpm->tbl8[j].depth <= depth)
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>  			}
>  		}
>  	}
> @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
>
>  				if (lpm->tbl8[j].depth <= depth)
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>  			}
>  		}
>  	}
> @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  		 */
>  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
>  			if (lpm->tbl8[i].depth <= depth)
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>  		}
>  	}
>
> @@ -1677,7 +1688,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  		/* Set tbl24 before freeing tbl8 to avoid race condition.
>  		 * Prevent the free of the tbl8 group from hoisting.
>  		 */
> -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +			__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
>  		tbl8_free_v20(lpm->tbl8, tbl8_group_start);
tbl8_alloc_v20/tbl8_free_v20 need to be updated to use __atomic_store

>  	}
> @@ -1730,7 +1742,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  		 */
>  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
>  			if (lpm->tbl8[i].depth <= depth)
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>  		}
>  	}
>
> @@ -1761,7 +1774,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  		/* Set tbl24 before freeing tbl8 to avoid race condition.
>  		 * Prevent the free of the tbl8 group from hoisting.
>  		 */
> -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +			__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
>  		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
tbl8_alloc_v1604/tbl8_free_v1604 need to be updated to use __atomic_store

>  	}
> --
> 2.17.1
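
For reference, the tbl24-then-free ordering the patch is preserving has this shape. Everything below is a hypothetical illustration, not DPDK code: demote_and_free(), the entry struct, and the memset stand-in for tbl8_free() are invented names. The release fence keeps the invalidation of the tbl8 group from being hoisted above the tbl24 store, so no reader can follow a stale tbl24 entry into an already-freed group.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the LPM tables. */
struct tbl_entry { uint32_t data; };

static struct tbl_entry tbl24[16];
static struct tbl_entry tbl8[16];

static void demote_and_free(uint32_t tbl24_index, struct tbl_entry new_e,
		uint32_t tbl8_group_start)
{
	/* Publish the new tbl24 entry first, as a single atomic store. */
	__atomic_store(&tbl24[tbl24_index], &new_e, __ATOMIC_RELAXED);

	/* Release fence: the free below may not be reordered (hoisted)
	 * before the tbl24 store above becomes visible. */
	__atomic_thread_fence(__ATOMIC_RELEASE);

	/* Hypothetical stand-in for tbl8_free(): invalidate the group. */
	memset(&tbl8[tbl8_group_start], 0, sizeof(tbl8[0]));
}
```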