From: "Medvedkin, Vladimir"
To: Ruifeng Wang, Bruce Richardson
Cc: dev@dpdk.org, nd, Honnappa Nagarahalli, Phil Yang
Subject: Re: [dpdk-dev] [PATCH v2] lpm: fix unchecked return value
Date: Tue, 21 Jul 2020 17:23:02 +0100
References: <20200716051903.94195-1-ruifeng.wang@arm.com>
 <20200716154920.167185-1-ruifeng.wang@arm.com>
 <608e9beb-812e-2375-b532-79b6366d31f8@intel.com>

Hi Ruifeng,

On 18/07/2020 10:22, Ruifeng Wang wrote:
>
>> -----Original Message-----
>> From: Medvedkin, Vladimir
>> Sent: Saturday, July 18, 2020 1:12 AM
>> To: Ruifeng Wang; Bruce Richardson
>> Cc: dev@dpdk.org; nd; Honnappa Nagarahalli; Phil Yang
>> Subject: Re: [PATCH v2] lpm: fix unchecked return value
>>
>> Hi Ruifeng,
>>
> Hi Vladimir,
>
>> On 16/07/2020 16:49, Ruifeng Wang wrote:
>>> Coverity complains about the unchecked return value of
>>> rte_rcu_qsbr_dq_enqueue.
>>> By default, the defer queue size is big enough to hold all tbl8 groups.
>>> When enqueue fails, return an error to the user to indicate a system issue.
>>>
>>> Coverity issue: 360832
>>> Fixes: 8a9f8564e9f9 ("lpm: implement RCU rule reclamation")
>>>
>>> Signed-off-by: Ruifeng Wang
>>> ---
>>> v2:
>>> Converted return value to conform to LPM API convention. (Vladimir)
>>>
>>>  lib/librte_lpm/rte_lpm.c | 19 +++++++++++++------
>>>  1 file changed, 13 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
>>> index 2db9e16a2..757436f49 100644
>>> --- a/lib/librte_lpm/rte_lpm.c
>>> +++ b/lib/librte_lpm/rte_lpm.c
>>> @@ -532,11 +532,12 @@ tbl8_alloc(struct rte_lpm *lpm)
>>>          return group_idx;
>>>  }
>>>
>>> -static void
>>> +static int32_t
>>>  tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>>>  {
>>>          struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
>>>          struct __rte_lpm *internal_lpm;
>>> +        int status;
>>>
>>>          internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
>>>          if (internal_lpm->v == NULL) {
>>> @@ -552,9 +553,15 @@ tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
>>>                          __ATOMIC_RELAXED);
>>>          } else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
>>>                  /* Push into QSBR defer queue. */
>>> -                rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
>>> +                status = rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
>>>                                  (void *)&tbl8_group_start);
>>> +                if (status == 1) {
>>> +                        RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
>>> +                        return -rte_errno;
>>> +                }
>>>          }
>>> +
>>> +        return 0;
>>>  }
>>>
>>>  static __rte_noinline int32_t
>>> @@ -1040,7 +1047,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>>>  #define group_idx next_hop
>>>          uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
>>>                          tbl8_range, i;
>>> -        int32_t tbl8_recycle_index;
>>> +        int32_t tbl8_recycle_index, status = 0;
>>>
>>>          /*
>>>           * Calculate the index into tbl24 and range. Note: All depths larger
>>> @@ -1097,7 +1104,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>>>                   */
>>>                  lpm->tbl24[tbl24_index].valid = 0;
>>>                  __atomic_thread_fence(__ATOMIC_RELEASE);
>>> -                tbl8_free(lpm, tbl8_group_start);
>>> +                status = tbl8_free(lpm, tbl8_group_start);
>>>          } else if (tbl8_recycle_index > -1) {
>>>                  /* Update tbl24 entry. */
>>>                  struct rte_lpm_tbl_entry new_tbl24_entry = {
>>> @@ -1113,10 +1120,10 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
>>>                  __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
>>>                                  __ATOMIC_RELAXED);
>>>                  __atomic_thread_fence(__ATOMIC_RELEASE);
>>> -                tbl8_free(lpm, tbl8_group_start);
>>> +                status = tbl8_free(lpm, tbl8_group_start);
>>>          }
>>>  #undef group_idx
>>> -        return 0;
>>> +        return status;
>>
>> This will change the rte_lpm_delete API. As a suggestion, you can leave it
>> as it was before ("return 0") and send a separate patch (with "return
>> status") targeted at 20.11.
>>
>
> Is the change of API because a variable is returned instead of a constant?
> The patch passed the ABI check on Travis: http://mails.dpdk.org/archives/test-report/2020-July/144864.html
> So I didn't know there was an API/ABI issue.

Because new error status codes are returned. At the moment
rte_lpm_delete() returns only -EINVAL. After this patch it will also
return -ENOSPC. The user's code may not handle this returned error status.

On the other hand, the documentation describes the return value as "0 on
success, negative value otherwise", and given that this behavior occurs
only after calling rte_lpm_rcu_qsbr_add(), I think we can accept this
patch. Bruce, please correct me.

>
> Thanks.
> /Ruifeng
>>>  }
>>>
>>>  /*
>>>
>>
>> --
>> Regards,
>> Vladimir

Acked-by: Vladimir Medvedkin

--
Regards,
Vladimir
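
[Editor's note] A minimal caller-side sketch of the behavior discussed above: after rte_lpm_rcu_qsbr_add() enables RCU reclamation, rte_lpm_delete() may return an additional negative error (e.g. -ENOSPC) besides -EINVAL, so applications should check the sign of the result rather than compare against a single error code. The wrapper name remove_route() and the USER1 log type are hypothetical illustration, not part of the patch.

#include <errno.h>
#include <stdint.h>

#include <rte_log.h>
#include <rte_lpm.h>

/* Hypothetical application wrapper around rte_lpm_delete(). */
static int
remove_route(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
	int ret = rte_lpm_delete(lpm, ip, depth);

	if (ret == 0)
		return 0;

	if (ret == -EINVAL) {
		/* Pre-existing behavior: bad parameters or no matching rule. */
		RTE_LOG(ERR, USER1, "LPM delete: invalid arguments or rule not found\n");
	} else {
		/*
		 * With RCU reclamation enabled via rte_lpm_rcu_qsbr_add(),
		 * the delete may now also fail when the freed tbl8 group
		 * cannot be pushed to the defer queue (e.g. -ENOSPC, per the
		 * discussion in this thread).
		 */
		RTE_LOG(ERR, USER1, "LPM delete: tbl8 reclamation failed (%d)\n", ret);
	}
	return ret;
}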