From mboxrd@z Thu Jan 1 00:00:00 1970
From: Thomas Monjalon
To: "'Singh, Jasvinder'"
Cc: "'Dumitrescu, Cristian'", "'alangordondewar@gmail.com'", dev@dpdk.org,
	'Alan Dewar', "Dewar, Alan"
Date: Thu, 25 Jun 2020 00:50:00 +0200
Message-ID: <2265604.5qLSFAYZse@thomas>
In-Reply-To: <80847da6f3b44c49a3884da76818302e@intl.att.com>
References: <20200416084821.12591-1-alan.dewar@att.com> <80847da6f3b44c49a3884da76818302e@intl.att.com>
Subject: Re: [dpdk-dev] [PATCH] sched: fix port time rounding error
List-Id: DPDK patches and discussions

Jasvinder, what is the conclusion of this patch?

21/04/2020 10:21, Dewar, Alan:
> From: Singh, Jasvinder
>
> > > From: Alan Dewar
> > > >
> > > > The QoS scheduler works off port time that is computed from the
> > > > number of CPU cycles that have elapsed since the last time the
> > > > port was polled.
> > > > It divides the number of elapsed cycles to calculate how many
> > > > bytes can be sent; however, this division can generate rounding
> > > > errors, where some fraction of a byte sent may be lost.
> > > >
> > > > Lose enough of these fractional bytes and the QoS scheduler
> > > > underperforms. The problem is worse at low bandwidths.
> > > >
> > > > To compensate for this rounding error, this fix doesn't advance
> > > > the port's time_cpu_cycles by the number of cycles that have
> > > > elapsed, but by the computed number of bytes that can be sent
> > > > (which has been rounded down) multiplied by the number of cycles
> > > > per byte. This means that the port's time_cpu_cycles will lag
> > > > behind the CPU cycles momentarily. At the next poll, the lag will
> > > > be taken into account.
> > > >
> > > > Fixes: de3cfa2c98 ("sched: initial import")
> > > >
> > > > Signed-off-by: Alan Dewar
> > > > ---
> > > >  lib/librte_sched/rte_sched.c | 12 ++++++++++--
> > > >  1 file changed, 10 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> > > > index c0983ddda..c656dba2d 100644
> > > > --- a/lib/librte_sched/rte_sched.c
> > > > +++ b/lib/librte_sched/rte_sched.c
> > > > @@ -222,6 +222,7 @@ struct rte_sched_port {
> > > >  	uint64_t time_cpu_bytes;      /* Current CPU time measured in bytes */
> > > >  	uint64_t time;                /* Current NIC TX time measured in bytes */
> > > >  	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
> > > > +	uint64_t cycles_per_byte;
> > > >
> > > >  	/* Grinders */
> > > >  	struct rte_mbuf **pkts_out;
> > > > @@ -852,6 +853,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
> > > >  	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
> > > >  		/ params->rate;
> > > >  	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
> > > > +	port->cycles_per_byte = cycles_per_byte;
> > > >
> > > >  	/* Grinders */
> > > >  	port->pkts_out = NULL;
> > > > @@ -2673,20 +2675,26 @@ static inline void
> > > >  rte_sched_port_time_resync(struct rte_sched_port *port)
> > > >  {
> > > >  	uint64_t cycles = rte_get_tsc_cycles();
> > > > -	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
> > > > +	uint64_t cycles_diff;
> > > >  	uint64_t bytes_diff;
> > > >  	uint32_t i;
> > > >
> > > > +	if (cycles < port->time_cpu_cycles)
> > > > +		goto end;
> >
> > The above check seems redundant, as port->time_cpu_cycles will always
> > be less than the current cycles due to round-off in the previous
> > iteration.
>
> This was to catch the condition where the cycle count wraps back to
> zero (after 100+ years?? depending on clock speed).
> Rather than just going to end, the conditional should at least reset
> port->time_cpu_cycles back to zero.
> So there would be a very temporary glitch in accuracy once every 100+
> years.
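[Editor's note: the arithmetic behind the lost fraction described in the
patch can be sketched as follows. This is an illustrative sketch, not
DPDK code; TIME_SHIFT mirrors RTE_SCHED_TIME_SHIFT in rte_sched.c, and
the 2 GHz TSC / 10 Mbit/s port rate figures are assumptions chosen for
the example.]

```c
#include <stdint.h>

#define TIME_SHIFT 8 /* mirrors RTE_SCHED_TIME_SHIFT; assumption for this sketch */

/* Shifted cycle units dropped per poll by the old scheme, which advanced
 * the port clock by the raw elapsed cycles instead of by
 * bytes * cycles_per_byte.  cycles_per_byte is the pre-shifted value
 * computed in rte_sched_port_config(). */
static uint64_t
lost_per_poll(uint64_t cycles_diff, uint64_t cycles_per_byte)
{
	/* Bytes that can be sent, rounded down. */
	uint64_t bytes = (cycles_diff << TIME_SHIFT) / cycles_per_byte;

	/* Fractional remainder the old scheme silently discarded. */
	return (cycles_diff << TIME_SHIFT) - bytes * cycles_per_byte;
}
```

With the assumed numbers, cycles_per_byte = (2e9 << 8) / 1.25e6 =
409600; a poll interval of 1000001 cycles yields 625 whole bytes and
drops 256 shifted units (one full cycle), a loss repeated on every poll
until the fix carries the remainder forward.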
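[Editor's note: the resync behaviour under discussion, including Alan's
suggestion to reset the port clock on TSC wrap rather than merely
skipping the update, could look roughly like the sketch below. This is a
hedged illustration using a stand-in struct, not the code merged into
rte_sched.c; the struct, function name, and shift parameter are
hypothetical.]

```c
#include <stdint.h>

/* Stand-in for the relevant fields of struct rte_sched_port. */
struct fake_port {
	uint64_t time_cpu_cycles; /* port clock, in raw TSC cycles */
	uint64_t cycles_per_byte; /* pre-shifted, as in the patch */
};

/* Returns the number of whole bytes the port may send since the last
 * poll, advancing the port clock only by bytes * cycles_per_byte so the
 * fractional remainder carries into the next poll. */
static uint64_t
resync(struct fake_port *port, uint64_t cycles, uint64_t shift)
{
	uint64_t cycles_diff, bytes_diff;

	if (cycles < port->time_cpu_cycles) {
		/* TSC wrapped (once per 100+ years): re-anchor the port
		 * clock instead of silently skipping the resync. */
		port->time_cpu_cycles = 0;
		return 0;
	}

	cycles_diff = cycles - port->time_cpu_cycles;
	bytes_diff = (cycles_diff << shift) / port->cycles_per_byte;

	/* Advance by whole bytes only; the rounded-off fraction stays
	 * in the lag and is recovered at the next poll. */
	port->time_cpu_cycles += (bytes_diff * port->cycles_per_byte) >> shift;
	return bytes_diff;
}
```

For example, with cycles_per_byte = 409600 and 1000001 elapsed cycles,
resync() returns 625 bytes and advances the clock by 1000000 cycles,
leaving a 1-cycle lag for the next poll.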