From: David Hunt
To: Lukasz Wojciechowski, Bruce Richardson
Cc: dev@dpdk.org, stable@dpdk.org
Date: Fri, 9 Oct 2020 14:10:34 +0100
Message-ID: <0ad53fcf-fc31-30a8-54ee-ea76e7b6b701@intel.com>
In-Reply-To: <20201008052323.11547-15-l.wojciechow@partner.samsung.com>
References: <20200925224209.12173-1-l.wojciechow@partner.samsung.com>
 <20201008052323.11547-1-l.wojciechow@partner.samsung.com>
 <20201008052323.11547-15-l.wojciechow@partner.samsung.com>
Subject: Re: [dpdk-stable] [PATCH v5 14/15] distributor: fix flushing in flight packets

On 8/10/2020 6:23 AM, Lukasz Wojciechowski wrote:
> rte_distributor_flush() uses the total_outstanding()
> function to decide whether it should still wait
> for packets that are being processed. However, in burst mode
> only backlog packets were counted.
>
> This patch fixes that issue by also counting in-flight
> packets. There are also some fixes to properly keep
> count of in-flight packets for each worker in bufs[].count.
>
> Fixes: 775003ad2f96 ("distributor: add new burst-capable library")
> Cc: david.hunt@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Lukasz Wojciechowski
> ---
>  lib/librte_distributor/rte_distributor.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
> index 4bd23a990..2478de3b7 100644
> --- a/lib/librte_distributor/rte_distributor.c
> +++ b/lib/librte_distributor/rte_distributor.c
> @@ -467,6 +467,7 @@ rte_distributor_process(struct rte_distributor *d,
>  		/* Sync with worker on GET_BUF flag. */
>  		if (__atomic_load_n(&(d->bufs[wid].bufptr64[0]),
>  				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF) {
> +			d->bufs[wid].count = 0;
>  			release(d, wid);
>  			handle_returns(d, wid);
>  		}
> @@ -481,11 +482,6 @@ rte_distributor_process(struct rte_distributor *d,
>  		uint16_t matches[RTE_DIST_BURST_SIZE];
>  		unsigned int pkts;
>
> -		/* Sync with worker on GET_BUF flag. */
> -		if (__atomic_load_n(&(d->bufs[wkr].bufptr64[0]),
> -				__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)
> -			d->bufs[wkr].count = 0;
> -
>  		if ((num_mbufs - next_idx) < RTE_DIST_BURST_SIZE)
>  			pkts = num_mbufs - next_idx;
>  		else
> @@ -605,8 +601,10 @@ rte_distributor_process(struct rte_distributor *d,
>  	for (wid = 0 ; wid < d->num_workers; wid++)
>  		/* Sync with worker on GET_BUF flag. */
>  		if ((__atomic_load_n(&(d->bufs[wid].bufptr64[0]),
> -			__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF))
> +			__ATOMIC_ACQUIRE) & RTE_DISTRIB_GET_BUF)) {
> +			d->bufs[wid].count = 0;
>  			release(d, wid);
> +		}
>
>  	return num_mbufs;
>  }
> @@ -649,7 +647,7 @@ total_outstanding(const struct rte_distributor *d)
>  	unsigned int wkr, total_outstanding = 0;
>
>  	for (wkr = 0; wkr < d->num_workers; wkr++)
> -		total_outstanding += d->backlog[wkr].count;
> +		total_outstanding += d->backlog[wkr].count + d->bufs[wkr].count;
>
>  	return total_outstanding;
>  }

Acked-by: David Hunt
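
For anyone reading the archive later, the accounting change matters because
rte_distributor_flush() keeps calling rte_distributor_process() until
total_outstanding() drops to zero, so packets left out of that sum can be
abandoned while still in flight. The standalone sketch below only models
that before/after behaviour; the struct and function names (dist_model,
buf_model, backlog_model, outstanding(), process()) are made up for the
illustration and are not the real rte_distributor internals.

#include <stdio.h>

#define NUM_WORKERS 4

/* Illustrative only: not the real rte_distributor structures. */
struct buf_model     { unsigned int count; }; /* packets handed to a worker (in flight) */
struct backlog_model { unsigned int count; }; /* packets queued for a worker */

struct dist_model {
	unsigned int num_workers;
	struct buf_model bufs[NUM_WORKERS];
	struct backlog_model backlog[NUM_WORKERS];
};

/* After the fix: outstanding work = backlog + in-flight, per worker. */
static unsigned int
outstanding(const struct dist_model *d)
{
	unsigned int wkr, total = 0;

	for (wkr = 0; wkr < d->num_workers; wkr++)
		total += d->backlog[wkr].count + d->bufs[wkr].count;

	return total;
}

int
main(void)
{
	/* Backlogs drained, but 12 packets are still out with workers. */
	struct dist_model d = {
		.num_workers = NUM_WORKERS,
		.bufs    = { {8}, {0}, {4}, {0} },
		.backlog = { {0}, {0}, {0}, {0} },
	};

	/*
	 * A flush loop of the form
	 *     while (outstanding(&d) > 0)
	 *         process(&d);
	 * would exit immediately with the old backlog-only sum (0),
	 * even though 12 packets are still being processed.
	 */
	printf("outstanding = %u\n", outstanding(&d));
	return 0;
}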