From: Kevin Traynor <ktraynor@redhat.com>
To: "Kinsella, Ray" <mdr@ashroe.eu>,
alangordondewar@gmail.com, cristian.dumitrescu@intel.com
Cc: dev@dpdk.org, Alan Dewar <alan.dewar@att.com>,
David Marchand <david.marchand@redhat.com>,
Luca Boccassi <bluca@debian.org>,
Jasvinder Singh <jasvinder.singh@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] sched: fix port time rounding error
Date: Mon, 7 Sep 2020 11:09:05 +0100
Message-ID: <73ab57ba-c69b-157e-9f3b-dc14fa40c929@redhat.com>
In-Reply-To: <406b64c8-0c88-5d71-2179-f48d8384b8e7@ashroe.eu>
On 21/08/2020 16:28, Kinsella, Ray wrote:
>
>
> On 20/08/2020 15:32, Kevin Traynor wrote:
>> Hi,
>>
>> On 25/06/2020 10:59, alangordondewar@gmail.com wrote:
>>> From: Alan Dewar <alan.dewar@att.com>
>>>
>>> The QoS scheduler works off a port time that is computed from the
>>> number of CPU cycles that have elapsed since the last time the port
>>> was polled. It divides the number of elapsed cycles by the number of
>>> cycles per byte to calculate how many bytes can be sent; however,
>>> this division can generate rounding errors, where some fraction of a
>>> byte may be lost.
>>>
>>> Lose enough of these fractional bytes and the QoS scheduler
>>> underperforms. The problem is worse at low bandwidths.
>>>
>>> To compensate for this rounding error, this fix does not advance the
>>> port's time_cpu_cycles by the raw number of cycles that have elapsed,
>>> but by the computed number of sendable bytes (which has been rounded
>>> down) multiplied by the number of cycles per byte.
>>> This means the port's time_cpu_cycles will momentarily lag behind the
>>> CPU cycle counter. At the next poll, the lag is taken into account.
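
To make the loss concrete with made-up numbers: at a 3 GHz TSC and a
1 MB/s rate, cycles_per_byte is 3000. If 10500 cycles elapse between
polls, 10500 / 3000 = 3.5 bytes, rounded down to 3. The old code still
advanced time_cpu_cycles by the full 10500 cycles, silently discarding
1500 cycles (half a byte) on that poll; with this fix the port advances
by only 3 * 3000 = 9000 cycles, so the leftover 1500 cycles are counted
at the next poll. (Illustrative figures only; the real code also
applies the RTE_SCHED_TIME_SHIFT fixed-point shift.)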
>>>
>>> v2:
>>> If the cycles value wraps (after 100+ years), reset the port's
>>> time_cpu_cycles back to zero.
>>>
>>> Fixes: de3cfa2c98 ("sched: initial import")
>>>
>>> Signed-off-by: Alan Dewar <alan.dewar@att.com>
>>> ---
>>> lib/librte_sched/rte_sched.c | 11 +++++++++--
>>> 1 file changed, 9 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
>>> index c0983ddda..7c022cd61 100644
>>> --- a/lib/librte_sched/rte_sched.c
>>> +++ b/lib/librte_sched/rte_sched.c
>>> @@ -222,6 +222,7 @@ struct rte_sched_port {
>>> uint64_t time_cpu_bytes; /* Current CPU time measured in bytes */
>>> uint64_t time; /* Current NIC TX time measured in bytes */
>>> struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
>>> + uint64_t cycles_per_byte;
>>>
>>
>> I was backporting this patch to 18.11. The older ABI checker complains
>> about this structure change.
>>
>> "cycles_per_byte has been added at the middle position of this
>> structural type."
>>
>> Isn't this an ABI break? Dropping it from 18.11 for the time being.
>
> So it looks like rte_sched_port * is an opaque pointer, as its contents
> are only known to rte_sched.c and not outside. To everyone else it is an
> opaque data structure, so structural changes to it would not be an ABI
> breakage.
>
Thanks Ray, makes sense. I've included the fix in 18.11.10-rc1.
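
For anyone following the ABI question, a minimal sketch of why an opaque
type tolerates layout changes (simplified declarations, not the literal
DPDK headers):

    /* rte_sched.h (public, simplified): the struct is only
     * forward-declared, so its size and layout are invisible to
     * applications. */
    struct rte_sched_port;
    struct rte_sched_port *
    rte_sched_port_config(struct rte_sched_port_params *params);

    /* Application code holds only the pointer and calls API functions;
     * it never uses sizeof(struct rte_sched_port) or field offsets, so
     * a member added inside rte_sched.c changes nothing the compiled
     * application depends on. */
    struct rte_sched_port *port = rte_sched_port_config(&params);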
Kevin.
>>
>>> /* Grinders */
>>> struct rte_mbuf **pkts_out;
>>> @@ -852,6 +853,7 @@ rte_sched_port_config(struct rte_sched_port_params *params)
>>> cycles_per_byte = (rte_get_tsc_hz() << RTE_SCHED_TIME_SHIFT)
>>> / params->rate;
>>> port->inv_cycles_per_byte = rte_reciprocal_value(cycles_per_byte);
>>> + port->cycles_per_byte = cycles_per_byte;
>>>
>>> /* Grinders */
>>> port->pkts_out = NULL;
>>> @@ -2673,16 +2675,21 @@ static inline void
>>> rte_sched_port_time_resync(struct rte_sched_port *port)
>>> {
>>> uint64_t cycles = rte_get_tsc_cycles();
>>> - uint64_t cycles_diff = cycles - port->time_cpu_cycles;
>>> + uint64_t cycles_diff;
>>> uint64_t bytes_diff;
>>> uint32_t i;
>>>
>>> + if (cycles < port->time_cpu_cycles)
>>> + port->time_cpu_cycles = 0;
>>> +
>>> + cycles_diff = cycles - port->time_cpu_cycles;
>>> /* Compute elapsed time in bytes */
>>> bytes_diff = rte_reciprocal_divide(cycles_diff << RTE_SCHED_TIME_SHIFT,
>>> port->inv_cycles_per_byte);
>>>
>>> /* Advance port time */
>>> - port->time_cpu_cycles = cycles;
>>> + port->time_cpu_cycles +=
>>> + (bytes_diff * port->cycles_per_byte) >> RTE_SCHED_TIME_SHIFT;
>>> port->time_cpu_bytes += bytes_diff;
>>> if (port->time < port->time_cpu_bytes)
>>> port->time = port->time_cpu_bytes;
>>>
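
For readers skimming the diff, here is the corrected resync logic as a
condensed, self-contained sketch (the fixed-point shift and the
rte_reciprocal machinery are omitted for clarity; struct and function
names are hypothetical, chosen to mirror the patch):

    #include <stdint.h>

    struct port_sketch {
            uint64_t time_cpu_cycles;  /* last accounted TSC value */
            uint64_t time_cpu_bytes;   /* elapsed time in bytes */
            uint64_t cycles_per_byte;  /* TSC cycles per byte at this rate */
    };

    static void
    time_resync_sketch(struct port_sketch *p, uint64_t cycles)
    {
            uint64_t cycles_diff, bytes_diff;

            if (cycles < p->time_cpu_cycles)  /* TSC wrapped around */
                    p->time_cpu_cycles = 0;

            cycles_diff = cycles - p->time_cpu_cycles;
            bytes_diff = cycles_diff / p->cycles_per_byte;  /* rounds down */

            /* Advance only by the cycles actually accounted for, so the
             * rounded-off fraction carries into the next poll instead of
             * being lost. */
            p->time_cpu_cycles += bytes_diff * p->cycles_per_byte;
            p->time_cpu_bytes += bytes_diff;
    }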
>>
>