From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ruifeng Wang (Arm Technology China)" <Ruifeng.Wang@arm.com>
To: Honnappa Nagarahalli, "vladimir.medvedkin@intel.com",
	"bruce.richardson@intel.com"
CC: "dev@dpdk.org", "Gavin Hu (Arm Technology China)", nd, nd, nd
Thread-Topic: [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
Date: Tue, 9 Jul 2019 09:58:23 +0000
Message-ID:
References: <20190703054441.30162-1-ruifeng.wang@arm.com>
	<20190703054441.30162-3-ruifeng.wang@arm.com>
In-Reply-To:
Subject: Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
List-Id: DPDK patches and discussions

> -----Original Message-----
> From: Honnappa Nagarahalli
> Sent: Tuesday, July 9, 2019 12:43
> To: Ruifeng Wang (Arm Technology China); vladimir.medvedkin@intel.com;
> bruce.richardson@intel.com
> Cc: dev@dpdk.org; Gavin Hu (Arm Technology China); Honnappa Nagarahalli;
> nd; nd
> Subject: RE: [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
>
> > > > Compiler could generate non-atomic stores for whole table entry
> > > > updating.
> > > > This may cause incorrect nexthop to be returned, if the byte with
> > > > valid flag is updated prior to the byte with next hot is updated.
> > >                                                ^^^^^^^
> > > Should be nexthop
> > >
> > > > Changed to use atomic store to update whole table entry.
> > > >
> > > > Suggested-by: Medvedkin Vladimir
> > > > Signed-off-by: Ruifeng Wang
> > > > Reviewed-by: Gavin Hu
> > > > ---
> > > > v4: initial version
> > > >
> > > >  lib/librte_lpm/rte_lpm.c | 34 ++++++++++++++++++++++++----------
> > > >  1 file changed, 24 insertions(+), 10 deletions(-)
> > > >
> > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > index baa6e7460..5d1dbd7e6 100644
> > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm,
> > > > uint32_t ip, uint8_t depth,
> > > >  				 * Setting tbl8 entry in one go to avoid
> > > >  				 * race conditions
> > > >  				 */
> > > > -				lpm->tbl8[j] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[j],
> > > > +					&new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >
> > > >  				continue;
> > > >  			}
> > > > @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip, uint8_t depth,
> > > >  				 * Setting tbl8 entry in one go to avoid
> > > >  				 * race conditions
> > > >  				 */
> > > > -				lpm->tbl8[j] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[j],
> > > > +					&new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >
> > > >  				continue;
> > > >  			}
> > > > @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm,
> > > > uint32_t ip_masked, uint8_t depth,
> > > >  				 * Setting tbl8 entry in one go to avoid race
> > > >  				 * condition
> > > >  				 */
> > > > -				lpm->tbl8[i] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >
> > > >  				continue;
> > > >  			}
> > > > @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip_masked, uint8_t depth,
> > > >  				 * Setting tbl8 entry in one go to avoid race
> > > >  				 * condition
> > > >  				 */
> > > > -				lpm->tbl8[i] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >
> > > >  				continue;
> > > >  			}
> > > > @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm,
> > > > uint32_t ip_masked,
> > > >  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> > > >
> > > >  					if (lpm->tbl8[j].depth <= depth)
> > > > -						lpm->tbl8[j] = new_tbl8_entry;
> > > > +						__atomic_store(&lpm->tbl8[j],
> > > > +							&new_tbl8_entry,
> > > > +							__ATOMIC_RELAXED);
> > > >  				}
> > > >  			}
> > > >  		}
> > > > @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip_masked,
> > > >  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
> > > >
> > > >  					if (lpm->tbl8[j].depth <= depth)
> > > > -						lpm->tbl8[j] = new_tbl8_entry;
> > > > +						__atomic_store(&lpm->tbl8[j],
> > > > +							&new_tbl8_entry,
> > > > +							__ATOMIC_RELAXED);
> > > >  				}
> > > >  			}
> > > >  		}
> > > > @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> > > > uint32_t ip_masked,
> > > >  		 */
> > > >  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
> > > >  			if (lpm->tbl8[i].depth <= depth)
> > > > -				lpm->tbl8[i] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >  		}
> > > >  	}
> > > >
> > > > @@ -1677,7 +1688,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm,
> > > > uint32_t ip_masked,
> > > >  		/* Set tbl24 before freeing tbl8 to avoid race condition.
> > > >  		 * Prevent the free of the tbl8 group from hoisting.
> > > >  		 */
> > > > -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> > > > +				__ATOMIC_RELAXED);
> > > >  		__atomic_thread_fence(__ATOMIC_RELEASE);
> > > >  		tbl8_free_v20(lpm->tbl8, tbl8_group_start);
> > > tbl8_alloc_v20/tbl8_free_v20 need to be updated to use
> > > __atomic_store
> > >
> > tbl8_alloc_v20/tbl8_free_v20 updates a single field of table entry.
> > The process is already atomic. Do we really need to use __atomic_store?
> I thought we agreed that all the tbl8 stores will use __atomic_store.
> IMO, it is better to use C11 atomic built-ins entirely, at least for the data
> structures used in reader-writer scenario. Otherwise, the code does not
> follow C11 memory model completely. (I do not know what to call such a
> model).
>
OK, change will be made in next version.

> > > >  	}
> > > > @@ -1730,7 +1742,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip_masked,
> > > >  		 */
> > > >  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
> > > >  			if (lpm->tbl8[i].depth <= depth)
> > > > -				lpm->tbl8[i] = new_tbl8_entry;
> > > > +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> > > > +					__ATOMIC_RELAXED);
> > > >  		}
> > > >  	}
> > > >
> > > > @@ -1761,7 +1774,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip_masked,
> > > >  		/* Set tbl24 before freeing tbl8 to avoid race condition.
> > > >  		 * Prevent the free of the tbl8 group from hoisting.
> > > >  		 */
> > > > -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> > > > +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> > > > +				__ATOMIC_RELAXED);
> > > >  		__atomic_thread_fence(__ATOMIC_RELEASE);
> > > >  		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > > tbl8_alloc_v1604 /tbl8_free_v1604 need to be updated to use
> > > __atomic_store
> > Ditto.
> >
> > > >  	}
> > > > --
> > > > 2.17.1
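
For readers skimming the thread, below is a minimal standalone sketch of the store pattern being discussed. The struct layout, field widths, and function names are illustrative assumptions, not the actual rte_lpm definitions; only the __atomic_store(ptr, &val, __ATOMIC_RELAXED) call mirrors what the patch uses.

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Hypothetical entry layout, loosely modelled on rte_lpm_tbl_entry:
     * several bit-fields packed into one 32-bit word.  The real DPDK
     * definition differs; this is only for illustration.
     */
    struct tbl_entry {
    	uint32_t next_hop : 24;
    	uint32_t valid    : 1;
    	uint32_t depth    : 7;
    };

    /*
     * Plain assignment: the compiler may split this into several
     * narrower stores, so a concurrent reader could observe valid = 1
     * together with a stale next_hop (the "partial update" the patch
     * is fixing).
     */
    static void store_plain(struct tbl_entry *slot, struct tbl_entry e)
    {
    	*slot = e;
    }

    /*
     * GCC/Clang generic __atomic_store: the whole 32-bit entry is
     * written in one atomic operation.  RELAXED ordering is enough
     * here because only tearing of the store itself must be prevented;
     * ordering against other accesses is handled separately (e.g. the
     * __atomic_thread_fence(__ATOMIC_RELEASE) before the tbl8 free).
     */
    static void store_atomic(struct tbl_entry *slot, struct tbl_entry e)
    {
    	__atomic_store(slot, &e, __ATOMIC_RELAXED);
    }

    int main(void)
    {
    	struct tbl_entry slot = {0};
    	struct tbl_entry e = { .next_hop = 42, .valid = 1, .depth = 24 };

    	store_plain(&slot, e);
    	store_atomic(&slot, e);
    	printf("next_hop=%u valid=%u depth=%u\n",
    	       slot.next_hop, slot.valid, slot.depth);
    	return 0;
    }

On architectures where a 4-byte aligned store is naturally atomic (x86-64, AArch64), the relaxed __atomic_store typically compiles to the same single store instruction as the plain assignment, so the change guards against compiler-generated partial stores without adding fast-path cost.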