From: "Medvedkin, Vladimir"
To: Ruifeng Wang, Bruce Richardson
Cc: dev@dpdk.org, nd@arm.com, honnappa.nagarahalli@arm.com, phil.yang@arm.com
Subject: Re: [dpdk-dev] [PATCH v2] lpm: fix unchecked return value
Date: Fri, 17 Jul 2020 18:12:00 +0100
Message-ID: <608e9beb-812e-2375-b532-79b6366d31f8@intel.com>
In-Reply-To: <20200716154920.167185-1-ruifeng.wang@arm.com>
References: <20200716051903.94195-1-ruifeng.wang@arm.com> <20200716154920.167185-1-ruifeng.wang@arm.com>

Hi Ruifeng,

On 16/07/2020 16:49, Ruifeng Wang wrote:
> Coverity complains about the unchecked return value of
> rte_rcu_qsbr_dq_enqueue. By default, the defer queue size is big enough
> to hold all tbl8 groups. When enqueue fails, return an error to the user
> to indicate a system issue.
>
> Coverity issue: 360832
> Fixes: 8a9f8564e9f9 ("lpm: implement RCU rule reclamation")
>
> Signed-off-by: Ruifeng Wang
> ---
> v2:
> Converted return value to conform to LPM API convention. (Vladimir)
>
>  lib/librte_lpm/rte_lpm.c | 19 +++++++++++++------
>  1 file changed, 13 insertions(+), 6 deletions(-)
>
> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> index 2db9e16a2..757436f49 100644
> --- a/lib/librte_lpm/rte_lpm.c
> +++ b/lib/librte_lpm/rte_lpm.c
> @@ -532,11 +532,12 @@ tbl8_alloc(struct rte_lpm *lpm)
>  	return group_idx;
>  }
>
> -static void
> +static int32_t
>  tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>  {
>  	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
>  	struct __rte_lpm *internal_lpm;
> +	int status;
>
>  	internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
>  	if (internal_lpm->v == NULL) {
> @@ -552,9 +553,15 @@ tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>  				__ATOMIC_RELAXED);
>  	} else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
>  		/* Push into QSBR defer queue. */
> -		rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
> +		status = rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
>  				(void *)&tbl8_group_start);
> +		if (status == 1) {
> +			RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
> +			return -rte_errno;
> +		}
>  	}
> +
> +	return 0;
>  }
>
>  static __rte_noinline int32_t
> @@ -1040,7 +1047,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  #define group_idx next_hop
>  	uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
>  			tbl8_range, i;
> -	int32_t tbl8_recycle_index;
> +	int32_t tbl8_recycle_index, status = 0;
>
>  	/*
>  	 * Calculate the index into tbl24 and range. Note: All depths larger
> @@ -1097,7 +1104,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  		 */
>  		lpm->tbl24[tbl24_index].valid = 0;
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm, tbl8_group_start);
> +		status = tbl8_free(lpm, tbl8_group_start);
>  	} else if (tbl8_recycle_index > -1) {
>  		/* Update tbl24 entry. */
>  		struct rte_lpm_tbl_entry new_tbl24_entry = {
> @@ -1113,10 +1120,10 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>  		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
>  				__ATOMIC_RELAXED);
>  		__atomic_thread_fence(__ATOMIC_RELEASE);
> -		tbl8_free(lpm, tbl8_group_start);
> +		status = tbl8_free(lpm, tbl8_group_start);
>  	}
>  #undef group_idx
> -	return 0;
> +	return status;

This will change the rte_lpm_delete API. As a suggestion, you could leave it
as it was before ("return 0") and send a separate patch (with "return
status") targeted to 20.11.
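To illustrate why this counts as an API change, here is a rough caller-side
sketch (not part of this patch; the remove_route() helper is hypothetical):
once the status is propagated, rte_lpm_delete() could also return a negative
value when the tbl8 group cannot be pushed to the RCU defer queue, so
applications would need to handle that case in addition to the existing
parameter-validation errors.

    #include <stdint.h>
    #include <rte_lpm.h>
    #include <rte_log.h>

    /* Hypothetical caller; shows how the new error would surface. */
    static int
    remove_route(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
    {
        int ret;

        ret = rte_lpm_delete(lpm, ip, depth);
        if (ret < 0) {
            /*
             * With status propagation this may now also mean the RCU
             * defer-queue enqueue failed, not only invalid parameters.
             */
            RTE_LOG(ERR, LPM, "failed to delete route: %d\n", ret);
            return ret;
        }

        return 0;
    }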
>  }
>
>  /*
>

-- 
Regards,
Vladimir