To: Ruifeng Wang, Bruce Richardson, Konstantin Ananyev, Michal Kobylinski, David Hunt
Cc: dev@dpdk.org, nd@arm.com, jerinj@marvell.com, drc@linux.vnet.ibm.com, honnappa.nagarahalli@arm.com, stable@dpdk.org
References: <20210108082127.1061538-1-ruifeng.wang@arm.com> <20210108082127.1061538-3-ruifeng.wang@arm.com>
From: "Medvedkin, Vladimir"
Date: Wed, 13 Jan 2021 18:46:53 +0000
In-Reply-To: <20210108082127.1061538-3-ruifeng.wang@arm.com>
Subject: Re: [dpdk-dev] [PATCH 2/4] lpm: fix vector lookup for x86

On 08/01/2021 08:21, Ruifeng Wang wrote:
> rte_lpm_lookupx4 could return a wrong next hop when more than 256 tbl8
> groups are created. This is caused by an incorrect type cast of the
> tbl8 group index stored in a tbl24 entry. The cast truncates the group
> index, so the wrong tbl8 group is searched.
>
> Fix the issue by applying the proper mask to the tbl24 entry to extract
> the tbl8 group index.
>
> Fixes: dc81ebbacaeb ("lpm: extend IPv4 next hop field")
> Cc: michalx.kobylinski@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Ruifeng Wang
> ---
>  lib/librte_lpm/rte_lpm_sse.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/lib/librte_lpm/rte_lpm_sse.h b/lib/librte_lpm/rte_lpm_sse.h
> index 44770b6ff..eaa863c52 100644
> --- a/lib/librte_lpm/rte_lpm_sse.h
> +++ b/lib/librte_lpm/rte_lpm_sse.h
> @@ -82,28 +82,28 @@ rte_lpm_lookupx4(const struct rte_lpm *lpm, xmm_t ip, uint32_t hop[4],
> 	if (unlikely((pt & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> 		i8.u32[0] = i8.u32[0] +
> -			(uint8_t)tbl[0] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[0] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[0]];
> 		tbl[0] = *ptbl;
> 	}
> 	if (unlikely((pt >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> 		i8.u32[1] = i8.u32[1] +
> -			(uint8_t)tbl[1] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[1] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[1]];
> 		tbl[1] = *ptbl;
> 	}
> 	if (unlikely((pt2 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> 		i8.u32[2] = i8.u32[2] +
> -			(uint8_t)tbl[2] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[2] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[2]];
> 		tbl[2] = *ptbl;
> 	}
> 	if (unlikely((pt2 >> 32 & RTE_LPM_VALID_EXT_ENTRY_BITMASK) ==
> 			RTE_LPM_VALID_EXT_ENTRY_BITMASK)) {
> 		i8.u32[3] = i8.u32[3] +
> -			(uint8_t)tbl[3] * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> +			(tbl[3] & 0x00FFFFFF) * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
> 		ptbl = (const uint32_t *)&lpm->tbl8[i8.u32[3]];
> 		tbl[3] = *ptbl;
> 	}
>

Acked-by: Vladimir Medvedkin

-- 
Regards,
Vladimir