DPDK patches and discussions
* [dpdk-dev] QoS grinder vs pipe wrr_tokens
@ 2016-06-07 17:28 Alexey Bogdanenko
  2016-06-08 15:23 ` Dumitrescu, Cristian
  0 siblings, 1 reply; 2+ messages in thread
From: Alexey Bogdanenko @ 2016-06-07 17:28 UTC (permalink / raw)
  To: dev

Hello,

I have a question regarding QoS grinder implementation, specifically, 
about the way queue WRR tokens are copied from pipe to grinder and back.

First, rte_sched_grinder uses uint16_t and rte_sched_pipe uses uint8_t 
to represent wrr_tokens. Second, instead of just copying the tokens, we 
shift bits by RTE_SCHED_WRR_SHIFT.

What does it accomplish? Can it lead to lower scheduler accuracy due to 
a round-off error?

version: v16.04

Thanks,

Alexey Bogdanenko


* Re: [dpdk-dev] QoS grinder vs pipe wrr_tokens
  2016-06-07 17:28 [dpdk-dev] QoS grinder vs pipe wrr_tokens Alexey Bogdanenko
@ 2016-06-08 15:23 ` Dumitrescu, Cristian
  0 siblings, 0 replies; 2+ messages in thread
From: Dumitrescu, Cristian @ 2016-06-08 15:23 UTC (permalink / raw)
  To: Alexey Bogdanenko, dev

Hi Alexey,

The WRR context is compressed to reduce its memory footprint so that the entire pipe run-time context (struct rte_sched_pipe) fits into a single cache line, for performance reasons. Basically, we trade WRR accuracy for performance.

For some typical Telco use-cases, the WRR/WFQ accuracy for the traffic class queues is not that important, as the traffic class queue weight ratio is usually large, e.g. 1:4:20. Whether the observed run-time ratio is 1:4:20, 1:5:18, or 1:3:22 does not matter much: the intent is to source most of the traffic from the queue with the largest weight, some traffic from the medium-weight queue, and not to starve the lowest-weight queue. This mode is very similar to strict priority between traffic class queues, except that the lowest priority queues are not starved for long periods of time.

When WRR accuracy is more important than performance, this operation should be disabled.

Regards,
Cristian

