From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
To: dev@dpdk.org
Cc: thomas@monjalon.net, vladimir.medvedkin@intel.com, stable@dpdk.org
Date: Tue, 21 Jan 2020 15:07:09 +0000
Message-Id: <1579619229-75215-1-git-send-email-vladimir.medvedkin@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1579022312-396072-1-git-send-email-vladimir.medvedkin@intel.com>
References: <1579022312-396072-1-git-send-email-vladimir.medvedkin@intel.com>
Subject: [dpdk-stable] [PATCH v2] fib: fix possible integer overflow

This commit fixes a possible integer overflow for prev_idx in
build_common_root() (CID 350596) and for tbl8_idx in write_edge()
(CID 350597).

Coverity reports:

Unintentional integer overflow (OVERFLOW_BEFORE_WIDEN)
overflow_before_widen: Potentially overflowing expression
tbl8_idx * 256 with type int (32 bits, signed) is evaluated
using 32-bit arithmetic, and then used in a context that
expects an expression of type uint64_t (64 bits, unsigned).
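
Below is a minimal standalone sketch (not part of the patch) of the
overflow-before-widen pattern described above. It uses an unsigned 32-bit
index so the wraparound is well defined and printable; in the flagged code
the index is a signed int, where the overflow is undefined behaviour. The
index value 1 << 24 is purely illustrative.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t tbl8_idx = UINT32_C(1) << 24;	/* illustrative large index */

	/* The product is evaluated in 32-bit arithmetic, wraps to 0, and
	 * only the already-wrapped result is widened to 64 bits. */
	uint64_t bad = tbl8_idx * 256;

	/* Widening one operand first (or multiplying by a 64-bit constant
	 * such as 256ULL) keeps the full 64-bit product. */
	uint64_t good = (uint64_t)tbl8_idx * 256;

	printf("bad  = %" PRIu64 ", good = %" PRIu64 "\n", bad, good);
	return 0;
}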
Coverity issue: 350596, 350597
Fixes: c3e12e0f0354 ("fib: add dataplane algorithm for IPv6")
Cc: vladimir.medvedkin@intel.com
Cc: stable@dpdk.org

Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
 lib/librte_fib/trie.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/lib/librte_fib/trie.c b/lib/librte_fib/trie.c
index 124aa8b..2ae2add 100644
--- a/lib/librte_fib/trie.c
+++ b/lib/librte_fib/trie.c
@@ -240,9 +240,8 @@ tbl8_alloc(struct rte_trie_tbl *dp, uint64_t nh)
 	tbl8_idx = tbl8_get(dp);
 	if (tbl8_idx < 0)
 		return tbl8_idx;
-	tbl8_ptr = (uint8_t *)dp->tbl8 +
-		((tbl8_idx * TRIE_TBL8_GRP_NUM_ENT) <<
-		dp->nh_sz);
+	tbl8_ptr = get_tbl_p_by_idx(dp->tbl8,
+		tbl8_idx * TRIE_TBL8_GRP_NUM_ENT, dp->nh_sz);
 	/*Init tbl8 entries with nexthop from tbl24*/
 	write_to_dp((void *)tbl8_ptr, nh, dp->nh_sz,
 		TRIE_TBL8_GRP_NUM_ENT);
@@ -317,7 +316,7 @@ get_idx(const uint8_t *ip, uint32_t prev_idx, int bytes, int first_byte)
 		bitshift = (int8_t)(((first_byte + bytes - 1) - i)*BYTE_SIZE);
 		idx |= ip[i] << bitshift;
 	}
-	return (prev_idx * 256) + idx;
+	return (prev_idx * TRIE_TBL8_GRP_NUM_ENT) + idx;
 }
 
 static inline uint64_t
@@ -354,8 +353,8 @@ recycle_root_path(struct rte_trie_tbl *dp, const uint8_t *ip_part,
 		return;
 
 	if (common_tbl8 != 0) {
-		p = get_tbl_p_by_idx(dp->tbl8, (val >> 1) * 256 + *ip_part,
-			dp->nh_sz);
+		p = get_tbl_p_by_idx(dp->tbl8, (val >> 1) *
+			TRIE_TBL8_GRP_NUM_ENT + *ip_part, dp->nh_sz);
 		recycle_root_path(dp, ip_part + 1, common_tbl8 - 1, p);
 	}
 	tbl8_recycle(dp, prev, val >> 1);
@@ -388,7 +387,8 @@ build_common_root(struct rte_trie_tbl *dp, const uint8_t *ip,
 		j = i;
 		cur_tbl = dp->tbl8;
 	}
-	*tbl = get_tbl_p_by_idx(cur_tbl, prev_idx * 256, dp->nh_sz);
+	*tbl = get_tbl_p_by_idx(cur_tbl, prev_idx * TRIE_TBL8_GRP_NUM_ENT,
+		dp->nh_sz);
 	return 0;
 }
 
@@ -411,8 +411,8 @@ write_edge(struct rte_trie_tbl *dp, const uint8_t *ip_part, uint64_t next_hop,
 			return tbl8_idx;
 		val = (tbl8_idx << 1)|TRIE_EXT_ENT;
 	}
-	p = get_tbl_p_by_idx(dp->tbl8, (tbl8_idx * 256) + *ip_part,
-		dp->nh_sz);
+	p = get_tbl_p_by_idx(dp->tbl8, (tbl8_idx *
+		TRIE_TBL8_GRP_NUM_ENT) + *ip_part, dp->nh_sz);
 	ret = write_edge(dp, ip_part + 1, next_hop, len - 1, edge, p);
 	if (ret < 0)
 		return ret;
@@ -420,8 +420,8 @@ write_edge(struct rte_trie_tbl *dp, const uint8_t *ip_part, uint64_t next_hop,
 		write_to_dp((uint8_t *)p + (1 << dp->nh_sz),
 			next_hop << 1, dp->nh_sz, UINT8_MAX - *ip_part);
 	} else {
-		write_to_dp(get_tbl_p_by_idx(dp->tbl8, tbl8_idx * 256,
-			dp->nh_sz),
+		write_to_dp(get_tbl_p_by_idx(dp->tbl8, tbl8_idx *
+			TRIE_TBL8_GRP_NUM_ENT, dp->nh_sz),
 			next_hop << 1, dp->nh_sz, *ip_part);
 	}
 	tbl8_recycle(dp, &val, tbl8_idx);
-- 
2.7.4
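
A note on why the substitution silences the warning, outside the patch
itself: assuming TRIE_TBL8_GRP_NUM_ENT carries an unsigned 64-bit suffix
(e.g. 256ULL) in trie.h, the 32-bit index operand is converted to uint64_t
before the multiplication, so the product can no longer wrap in 32-bit
arithmetic. A standalone sketch with hypothetical macro names:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the real macro in lib/librte_fib/trie.h. */
#define GRP_NUM_ENT_PLAIN	256
#define GRP_NUM_ENT_WIDE	256ULL

int main(void)
{
	int tbl8_idx = 1;

	/* With the plain int constant the product stays 32-bit (typically
	 * 4 bytes); with the ULL constant the int operand is widened first,
	 * so the whole expression is 64-bit (8 bytes) and cannot wrap early.
	 * sizeof() reports the type of each product without evaluating it. */
	printf("plain: %zu bytes, wide: %zu bytes\n",
		sizeof(tbl8_idx * GRP_NUM_ENT_PLAIN),
		sizeof(tbl8_idx * GRP_NUM_ENT_WIDE));
	return 0;
}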