From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Medvedkin, Vladimir"
To: Ruifeng Wang, bruce.richardson@intel.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
Date: Fri, 28 Jun 2019 14:33:47 +0100
Message-ID: <6daf7a76-1def-21df-d202-10b2a8f8582a@intel.com>
In-Reply-To: <20190627093751.7746-3-ruifeng.wang@arm.com>
References: <20190627093751.7746-1-ruifeng.wang@arm.com>
 <20190627093751.7746-3-ruifeng.wang@arm.com>
Subject: Re: [dpdk-dev] [PATCH v3 3/3] lib/lpm: memory orderings to avoid race conditions for v20

Hi Wang,

On 27/06/2019 10:37, Ruifeng Wang wrote:
> When a tbl8 group is getting attached to a tbl24 entry, lookup
> might fail even though the entry is configured in the table.
>
> For ex: consider an LPM table configured with 10.10.10.1/24.
> When a new entry 10.10.10.32/28 is being added, a new tbl8
> group is allocated and the tbl24 entry is changed to point to
> the tbl8 group. If the tbl24 entry is written without the tbl8
> group entries updated, a lookup on 10.10.10.9 will return
> failure.
>
> Correct memory orderings are required to ensure that the
> store to tbl24 does not happen before the stores to the tbl8 group
> entries complete.
>
> Suggested-by: Honnappa Nagarahalli
> Signed-off-by: Ruifeng Wang
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Gavin Hu
> ---
> v3: no changes
> v2: fixed clang build issue by supplying alignment attribute.
> v1: initial version
>
>  lib/librte_lpm/rte_lpm.c | 31 ++++++++++++++++++++++++-------
>  lib/librte_lpm/rte_lpm.h |  4 ++--
>  2 files changed, 26 insertions(+), 9 deletions(-)
>
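To spell out the ordering contract this patch relies on, it boils down to the
pattern below. This is only a toy sketch (made-up types, sizes and flag bit,
not the real rte_lpm structures), and the acquire side is shown purely for
illustration, it is not part of this patch:

#include <stdint.h>

struct toy_entry { uint16_t raw; };     /* stands in for the 2-byte table entry */

#define GROUP_SIZE 16                   /* toy value; a real tbl8 group has 256 entries */
#define TOY_VALID_GROUP 0x8000          /* made-up "points into tbl8" flag */

static struct toy_entry tbl24[256];
static struct toy_entry tbl8[16 * GROUP_SIZE];

/* Writer: fill the tbl8 group first, then publish it through tbl24. */
void attach_group(uint32_t tbl24_index, uint32_t group_start,
                  const struct toy_entry *group, struct toy_entry new_tbl24_entry)
{
        for (int i = 0; i < GROUP_SIZE; i++)
                tbl8[group_start + i] = group[i];

        /* RELEASE: the tbl8 stores above cannot be reordered past this store. */
        __atomic_store(&tbl24[tbl24_index], &new_tbl24_entry, __ATOMIC_RELEASE);
}

/* Reader: an acquire load of tbl24 pairs with the release store above. */
struct toy_entry lookup(uint32_t tbl24_index, uint32_t tbl8_index)
{
        struct toy_entry e;

        __atomic_load(&tbl24[tbl24_index], &e, __ATOMIC_ACQUIRE);
        if (e.raw & TOY_VALID_GROUP)
                __atomic_load(&tbl8[tbl8_index], &e, __ATOMIC_RELAXED);
        return e;
}

In other words, once a reader observes the new tbl24 entry, the release/acquire
pairing guarantees it also observes the fully written tbl8 group.
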
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> index fabd13fb0..5f8d494ae 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -737,7 +737,8 @@ add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
>                          /* Setting tbl24 entry in one go to avoid race
>                           * conditions
>                           */
> -                        lpm->tbl24[i] = new_tbl24_entry;
> +                        __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
> +                                        __ATOMIC_RELEASE);
>
>                          continue;
>                  }
> @@ -892,7 +893,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
>                          .depth = 0,
>                  };
>
> -                lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +                __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +                                __ATOMIC_RELEASE);
>
>          } /* If valid entry but not extended calculate the index into Table8. */
>          else if (lpm->tbl24[tbl24_index].valid_group == 0) {
> @@ -938,7 +940,8 @@ add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
>                          .depth = 0,
>                  };
>
> -                lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +                __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
> +                                __ATOMIC_RELEASE);
>
>          } else { /*
>                  * If it is valid, extended entry calculate the index into tbl8.
> @@ -1320,7 +1323,14 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>
>                  if (lpm->tbl24[i].valid_group == 0 &&
>                                  lpm->tbl24[i].depth <= depth) {
> -                        lpm->tbl24[i].valid = INVALID;
> +                        struct rte_lpm_tbl_entry_v20 zero_tbl_entry = {
> +                                .valid = INVALID,
> +                                .depth = 0,
> +                                .valid_group = 0,
> +                        };
> +                        zero_tbl_entry.next_hop = 0;

Please use the same variable name in both v20 and v1604 (zero_tbl24_entry).
The same goes for the struct initialization: in v1604 you use
struct rte_lpm_tbl_entry zero_tbl24_entry = {0};

> +                        __atomic_store(&lpm->tbl24[i], &zero_tbl_entry,
> +                                        __ATOMIC_RELEASE);
>                  } else if (lpm->tbl24[i].valid_group == 1) {
>                          /*
>                           * If TBL24 entry is extended, then there has
> @@ -1365,7 +1375,8 @@ delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>
>                  if (lpm->tbl24[i].valid_group == 0 &&
>                                  lpm->tbl24[i].depth <= depth) {
> -                        lpm->tbl24[i] = new_tbl24_entry;
> +                        __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
> +                                        __ATOMIC_RELEASE);
>                  } else if (lpm->tbl24[i].valid_group == 1) {
>                          /*
>                           * If TBL24 entry is extended, then there has
> @@ -1647,8 +1658,11 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>          tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
>
>          if (tbl8_recycle_index == -EINVAL) {
> -                /* Set tbl24 before freeing tbl8 to avoid race condition. */
> +                /* Set tbl24 before freeing tbl8 to avoid race condition.
> +                 * Prevent the free of the tbl8 group from hoisting.
> +                 */
>                  lpm->tbl24[tbl24_index].valid = 0;
> +                __atomic_thread_fence(__ATOMIC_RELEASE);
>                  tbl8_free_v20(lpm->tbl8, tbl8_group_start);
>          } else if (tbl8_recycle_index > -1) {
>                  /* Update tbl24 entry. */
> @@ -1659,8 +1673,11 @@ delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
>                          .depth = lpm->tbl8[tbl8_recycle_index].depth,
>                  };
>
> -                /* Set tbl24 before freeing tbl8 to avoid race condition. */
> +                /* Set tbl24 before freeing tbl8 to avoid race condition.
> +                 * Prevent the free of the tbl8 group from hoisting.
> +                 */
>                  lpm->tbl24[tbl24_index] = new_tbl24_entry;
> +                __atomic_thread_fence(__ATOMIC_RELEASE);
>                  tbl8_free_v20(lpm->tbl8, tbl8_group_start);
>          }
>
> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> index 6f5704c5c..98c70ecbe 100644
> --- a/lib/librte_lpm/rte_lpm.h
> +++ b/lib/librte_lpm/rte_lpm.h
> @@ -88,7 +88,7 @@ struct rte_lpm_tbl_entry_v20 {
>           */
>          uint8_t valid_group :1;
>          uint8_t depth       :6; /**< Rule depth. */
> -};
> +} __rte_aligned(2);

I think it is better to use __rte_aligned(sizeof(uint16_t)).

>
>  __extension__
>  struct rte_lpm_tbl_entry {
> @@ -121,7 +121,7 @@ struct rte_lpm_tbl_entry_v20 {
>                  uint8_t group_idx;
>                  uint8_t next_hop;
>          };
> -};
> +} __rte_aligned(2);
>
>  __extension__
>  struct rte_lpm_tbl_entry {

As a general remark, consider writing all of the tbl entries, including tbl8,
with __atomic_store as well. Right now "lpm->tbl8[j] = new_tbl8_entry;"
compiles to something like:

     1e9:       44 88 9c 47 40 01 00    mov    %r11b,0x2000140(%rdi,%rax,2)  <- write first byte
     1f0:       02
     1f1:       48 83 c0 01             add    $0x1,%rax
     1f5:       42 88 8c 47 41 01 00    mov    %cl,0x2000141(%rdi,%r8,2)     <- write second byte
     1fc:       02

i.e. the two bytes of the entry are written separately. This may cause an
incorrect next hop to be returned: if the byte carrying the valid flag is
updated first, the old (and possibly invalid) next hop could be returned.
Please evaluate the performance drop after this change.
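Something along these lines is what I have in mind. This is an untested
sketch: the struct is only a toy stand-in with roughly the v20 entry layout,
it relies on the 2-byte alignment added by this patch, and I assume
__ATOMIC_RELAXED is enough for the entry write itself because ordering
against tbl24 is already provided by the release store/fence (worth double
checking):

#include <stdint.h>

/* toy stand-in for rte_lpm_tbl_entry_v20 (little-endian variant), 2 bytes */
struct toy_tbl_entry_v20 {
        union {
                uint8_t next_hop;
                uint8_t group_idx;
        };
        uint8_t valid       :1;
        uint8_t valid_group :1;
        uint8_t depth       :6;
} __attribute__((aligned(2)));

static struct toy_tbl_entry_v20 tbl8[256];

/* instead of "tbl8[j] = new_tbl8_entry;" */
void toy_write_tbl8(unsigned int j, struct toy_tbl_entry_v20 new_tbl8_entry)
{
        /* the whole 2-byte entry goes out as one aligned 16-bit store, so a
         * concurrent reader never sees the new valid flag paired with the
         * old next_hop byte */
        __atomic_store(&tbl8[j], &new_tbl8_entry, __ATOMIC_RELAXED);
}
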
-- 
Regards,
Vladimir