From: "Medvedkin, Vladimir"
To: Ruifeng Wang, bruce.richardson@intel.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Fri, 5 Jul 2019 17:53:26 +0100
Message-ID: <4de1bbae-da58-cfd6-acd0-7b79e51f7ee2@intel.com>
In-Reply-To: <20190703054441.30162-3-ruifeng.wang@arm.com>
References: <20190703054441.30162-1-ruifeng.wang@arm.com> <20190703054441.30162-3-ruifeng.wang@arm.com>
Subject: Re: [dpdk-dev] [PATCH v4 3/3] lib/lpm: use atomic store to avoid partial update
List-Id: DPDK patches and discussions

Hi Wang,

On 03/07/2019 06:44, Ruifeng Wang wrote:
> The compiler could generate non-atomic stores when updating a whole
> table entry. This may cause an incorrect next hop to be returned if
> the byte holding the valid flag is updated before the byte holding
> the next hop.
>
> Changed to use an atomic store to update the whole table entry.
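
To make the failure mode concrete, below is a minimal sketch of the torn
update being described. The entry layout, field names and helper names are
illustrative assumptions for the example only, not the real
struct rte_lpm_tbl_entry:

#include <stdint.h>

/* Illustrative 32-bit table entry; field names and widths are
 * assumptions for this example, not the real rte_lpm layout. */
struct tbl_entry {
	uint32_t next_hop : 24;
	uint32_t valid    : 1;
	uint32_t depth    : 7;
};

/* Plain struct assignment: the compiler may emit several narrower
 * stores, so a lock-free reader can observe valid == 1 while
 * next_hop still holds the old value. */
static void
entry_update_plain(struct tbl_entry *slot, struct tbl_entry new_entry)
{
	*slot = new_entry;
}

/* Single atomic store of the whole 32-bit entry: a reader sees either
 * the old entry or the new one, never a mix of the two. */
static void
entry_update_atomic(struct tbl_entry *slot, struct tbl_entry new_entry)
{
	__atomic_store(slot, &new_entry, __ATOMIC_RELAXED);
}

The patch below applies this single-store pattern to the live tbl8 and
tbl24 entries.
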
>
> Suggested-by: Medvedkin Vladimir
> Signed-off-by: Ruifeng Wang
> Reviewed-by: Gavin Hu
> ---
> v4: initial version
>
>  lib/librte_lpm/rte_lpm.c | 34 ++++++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 10 deletions(-)
>
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> index baa6e7460..5d1dbd7e6 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -767,7 +767,9 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
>  					 * Setting tbl8 entry in one go to avoid
>  					 * race conditions
>  					 */
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>
>  					continue;
>  				}
> @@ -837,7 +839,9 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
>  					 * Setting tbl8 entry in one go to avoid
>  					 * race conditions
>  					 */
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>
>  					continue;
>  				}
> @@ -965,7 +969,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
>  				 * Setting tbl8 entry in one go to avoid race
>  				 * condition
>  				 */
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>
>  				continue;
>  			}
> @@ -1100,7 +1105,8 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
>  				 * Setting tbl8 entry in one go to avoid race
>  				 * condition
>  				 */
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>
>  				continue;
>  			}
> @@ -1393,7 +1399,9 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
>
>  				if (lpm->tbl8[j].depth <= depth)
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>  			}
>  		}
>  	}
> @@ -1490,7 +1498,9 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  					RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
>
>  				if (lpm->tbl8[j].depth <= depth)
> -					lpm->tbl8[j] = new_tbl8_entry;
> +					__atomic_store(&lpm->tbl8[j],
> +						&new_tbl8_entry,
> +						__ATOMIC_RELAXED);
>  			}
>  		}
>  	}
> @@ -1646,7 +1656,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  		 */
>  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
>  			if (lpm->tbl8[i].depth <= depth)
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>  		}
>  	}
>
> @@ -1677,7 +1688,8 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>  		/* Set tbl24 before freeing tbl8 to avoid race condition.
>  		 * Prevent the free of the tbl8 group from hoisting.
>  		 */
> -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +			__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
>  		tbl8_free_v20(lpm->tbl8, tbl8_group_start);
>  	}
> @@ -1730,7 +1742,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  		 */
>  		for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
>  			if (lpm->tbl8[i].depth <= depth)
> -				lpm->tbl8[i] = new_tbl8_entry;
> +				__atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
> +					__ATOMIC_RELAXED);
>  		}
>  	}
>
> @@ -1761,7 +1774,8 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
>  		/* Set tbl24 before freeing tbl8 to avoid race condition.
>  		 * Prevent the free of the tbl8 group from hoisting.
>  		 */
> -		lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +			__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);

Do you really need the __atomic_thread_fence after the atomic store here?

>  		tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
>  	}

-- 
Regards,
Vladimir
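
For context on the question above, here is the store/fence/free sequence
from the last hunk, shown in isolation. The entry type and the free helper
are simplified stand-ins for the real struct rte_lpm_tbl_entry and
tbl8_free_v1604():

#include <stdint.h>

struct tbl_entry { uint32_t word; };	/* simplified stand-in */

/* Hypothetical stand-in for tbl8_free_v1604(). */
static void
tbl8_free_stub(struct tbl_entry *tbl8, uint32_t tbl8_group_start)
{
	(void)tbl8;
	(void)tbl8_group_start;
	/* The real code invalidates and returns the tbl8 group here. */
}

/* Publish the new tbl24 entry with a relaxed single-copy-atomic store,
 * then issue a release fence so the writes performed by the following
 * free of the tbl8 group cannot be hoisted above the tbl24 update. */
static void
publish_then_free(struct tbl_entry *tbl24, uint32_t tbl24_index,
		struct tbl_entry new_tbl24_entry,
		struct tbl_entry *tbl8, uint32_t tbl8_group_start)
{
	__atomic_store(&tbl24[tbl24_index], &new_tbl24_entry,
			__ATOMIC_RELAXED);
	__atomic_thread_fence(__ATOMIC_RELEASE);
	tbl8_free_stub(tbl8, tbl8_group_start);
}

One way to read the question is whether the ordering could instead be folded
into the store itself (e.g. storing the tbl24 entry with __ATOMIC_RELEASE and
dropping the fence). Note that a release store only orders the accesses that
precede it; on its own it does not stop the later free from being reordered
ahead of the store, which is what the in-code comment says the fence is for.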