From: Alan Dewar
Date: Thu, 25 Jun 2020 09:40:45 +0100
To: "Singh, Jasvinder"
Cc: Thomas Monjalon, "Dumitrescu, Cristian", dev@dpdk.org, Alan Dewar, "Dewar, Alan"
Subject: Re: [dpdk-dev] [PATCH] sched: fix port time rounding error

Okay, will do.

On Thu, Jun 25, 2020 at 9:32 AM Singh, Jasvinder wrote:
>
>
>
> > -----Original Message-----
> > From: Thomas Monjalon
> > Sent: Wednesday, June 24, 2020 11:50 PM
> > To: Singh, Jasvinder
> > Cc: Dumitrescu, Cristian; 'alangordondewar@gmail.com'; dev@dpdk.org;
> > 'Alan Dewar'; Dewar, Alan
> > Subject: Re: [dpdk-dev] [PATCH] sched: fix port time rounding error
> >
> > Jasvinder, what is the conclusion of this patch?
> >
> > 21/04/2020 10:21, Dewar, Alan:
> > > From: Singh, Jasvinder
> > > > > > From: Alan Dewar
> > > > > >
> > > > > > The QoS scheduler works off port time that is computed from the
> > > > > > number of CPU cycles that have elapsed since the last time the
> > > > > > port was polled. It divides the number of elapsed cycles to
> > > > > > calculate how many bytes can be sent; however, this division can
> > > > > > generate rounding errors, where some fraction of a byte sent may
> > > > > > be lost.
> > > > > >
> > > > > > Lose enough of these fractional bytes and the QoS scheduler
> > > > > > underperforms. The problem is worse at low bandwidths.
> > > > > >
> > > > > > To compensate for this rounding error, this fix does not advance
> > > > > > the port's time_cpu_cycles by the number of cycles that have
> > > > > > elapsed, but by the computed number of bytes that can be sent
> > > > > > (which has been rounded down) multiplied by the number of cycles
> > > > > > per byte. This means that the port's time_cpu_cycles will lag
> > > > > > behind the CPU cycles momentarily. At the next poll, the lag will
> > > > > > be taken into account.
> > > > > >
> > > > > > Fixes: de3cfa2c98 ("sched: initial import")
> > > > > >
> > > > > > Signed-off-by: Alan Dewar
> > > > > > ---
> > > > > >  lib/librte_sched/rte_sched.c | 12 ++++++++++--
> > > > > >  1 file changed, 10 insertions(+), 2 deletions(-)
> > > > > >
> > > > > > diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> > > > > > index c0983ddda..c656dba2d 100644
> > > > > > --- a/lib/librte_sched/rte_sched.c
> > > > > > +++ b/lib/librte_sched/rte_sched.c
> > > > > > @@ -222,6 +222,7 @@ struct rte_sched_port {
> > > > > >  	uint64_t time_cpu_bytes;  /* Current CPU time measured in bytes */
> > > > > >  	uint64_t time;            /* Current NIC TX time measured in bytes */
> > > > > >  	struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
> > > > > > +	uint64_t cycles_per_byte;
> > > > > >
> > > > > >  	/* Grinders */
> > > > > >  	struct rte_mbuf **pkts_out;
> > > > > > @@ -852,6 +853,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
> > > > > >  	cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
> > > > > >  		/ params->rate;
> > > > > >  	port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
> > > > > > +	port->cycles_per_byte = cycles_per_byte;
> > > > > >
> > > > > >  	/* Grinders */
> > > > > >  	port->pkts_out = NULL;
> > > > > > @@ -2673,20 +2675,26 @@ static inline void
> > > > > >  rte_sched_port_time_resync(struct rte_sched_port *port)
> > > > > >  {
> > > > > >  	uint64_t cycles = rte_get_tsc_cycles();
> > > > > > -	uint64_t cycles_diff = cycles - port->time_cpu_cycles;
> > > > > > +	uint64_t cycles_diff;
> > > > > >  	uint64_t bytes_diff;
> > > > > >  	uint32_t i;
> > > > > >
> > > > > > +	if (cycles < port->time_cpu_cycles)
> > > > > > +		goto end;
> > > >
> > > > The above check seems redundant, as port->time_cpu_cycles will always
> > > > be less than the current cycles due to the round-off in the previous
> > > > iteration.
> > >
> > > This was to catch the condition where the cycle count wraps back to
> > > zero (after 100+ years?? depending on clock speed).
> > > Rather than just going to end:, the conditional should at least reset
> > > port->time_cpu_cycles back to zero.
> > > So there would be a very temporary glitch in accuracy once every 100+
> > > years.
>
> Alan, could you please resubmit the patch with the above change? Other
> than that, the patch looks good to me.
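For readers following the thread, the core idea of the fix can be sketched in a
few lines of standalone C. This is only an illustration of the compensation
described in the commit message, not the DPDK code itself: the names
(demo_port_time, demo_port_time_resync) are made up for this sketch, and
DPDK's fixed-point scaling (RTE_SCHED_TIME_SHIFT) and rte_reciprocal division
are omitted for brevity.

#include <stdint.h>

/* Illustrative port-time state; cycles_per_byte is assumed non-zero. */
struct demo_port_time {
	uint64_t time_cpu_cycles;   /* last resync point, in CPU cycles */
	uint64_t time_cpu_bytes;    /* scheduler time, in bytes */
	uint64_t cycles_per_byte;   /* derived from TSC rate / port rate */
};

static void
demo_port_time_resync(struct demo_port_time *pt, uint64_t now_cycles)
{
	uint64_t cycles_diff;
	uint64_t bytes_diff;

	/* Guard against a (very rare) cycle-counter wrap, as discussed above. */
	if (now_cycles < pt->time_cpu_cycles) {
		pt->time_cpu_cycles = now_cycles;
		return;
	}

	cycles_diff = now_cycles - pt->time_cpu_cycles;
	bytes_diff = cycles_diff / pt->cycles_per_byte;   /* rounded down */

	/*
	 * Advance by bytes_diff * cycles_per_byte rather than by cycles_diff:
	 * time_cpu_cycles may momentarily lag the real cycle counter, and the
	 * lag (the rounding remainder) is accounted for at the next poll
	 * instead of being lost.
	 */
	pt->time_cpu_cycles += bytes_diff * pt->cycles_per_byte;
	pt->time_cpu_bytes += bytes_diff;
}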