DPDK usage discussions
* DPDK QoS scheduler TCs' misbehaviour within a pipe
@ 2025-12-10 11:44 nagurvalisayyad
  0 siblings, 0 replies; 2+ messages in thread
From: nagurvalisayyad @ 2025-12-10 11:44 UTC (permalink / raw)
  To: users

Hi,

We are using dpdk-stable-22.11.6 in our project.

We are facing an issue in the DPDK QoS scheduler example application: 
lower-priority traffic (TC2) starves higher-priority traffic when the 
lower-priority packets are smaller than the higher-priority packets.

If the packet size of the lower-priority traffic (TC2) is the same as or 
larger than that of the higher-priority traffic (TC0 or TC1), we don't 
see the problem.

Our setup, using one queue index per traffic class (a configuration 
sketch follows the list):

Pipe 0 rate is 20 Mbps.

- Q0 (TC0): 1500-byte packets, 5 Mbps configured
- Q1 (TC2): 1400-byte packets, 20 Mbps configured
- Two pipes are configured in total; traffic is mapped to Pipe 0 TC0 
and TC2
- Only one subport is configured
- TC period is set to 50 ms (to support low rates of around 256 Kbps)
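
For reference, here is a minimal sketch of the pipe profile we are 
describing, written against the struct rte_sched_pipe_params layout of 
DPDK 22.11. The numeric values are illustrative conversions (rte_sched 
rates are in bytes per second), not our exact production configuration:

#include <rte_sched.h>

/* Illustrative pipe profile for the scenario above (a sketch).
 * rte_sched rates are bytes per second: 20 Mbps ~= 2500000 B/s,
 * 5 Mbps ~= 625000 B/s. */
static struct rte_sched_pipe_params pipe_profile_0 = {
	.tb_rate = 2500000,	/* pipe token-bucket rate: 20 Mbps */
	.tb_size = 1000000,	/* bucket depth in bytes (arbitrary here) */
	.tc_rate = {
		[0] = 625000,	/* TC0: 5 Mbps, strict priority over TC2 */
		[2] = 2500000,	/* TC2: 20 Mbps */
		/* the remaining TCs are capped at the pipe rate in our
		 * real configuration */
	},
	.tc_period = 50,	/* ms; at 256 Kbps (32000 B/s) one period is
				 * 32000 * 0.05 = 1600 bytes, i.e. roughly
				 * one 1500-byte packet, hence 50 ms */
	.tc_ov_weight = 1,	/* best-effort oversubscription weight */
	.wrr_weights = { 1, 1, 1, 1 },	/* WRR weights for the BE queues */
};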

In this scenario TC2 consumes the entire 20 Mbps, whereas by priority 
TC0 should get 5 Mbps and TC2 the remaining 15 Mbps.

If we send packets of the same size on both queues, TC0 gets 5 Mbps and 
TC2 gets 15 Mbps, as per priority.

If we stop the TC0 traffic, the unused 5 Mbps is taken over by TC2, 
which then gets the full 20 Mbps (as expected).

While debugging, we found the following in the QoS scheduler 
documentation, section 57.2.4.6.3 (Traffic Shaping):

"

  	* Full accuracy can be achieved by selecting the value for _tb_period_ 
for which _tb_credits_per_period = 1_.
  	* When full accuracy is not required, better performance is achieved 
by setting _tb_credits_ to a larger value.

"

In rte_sched.c, rte_sched_pipe_profile_convert() sets 
tb_credits_per_period to 1 and derives tb_period from the configured 
rate.
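
Our reading of the shaper math behind that statement (a simplified 
sketch, not the DPDK source; the time unit is abstract here, while 
rte_sched internally tracks time in units derived from the port rate):

/* The bucket gains tb_credits_per_period byte-credits every tb_period
 * time units, so the long-run shaped rate is
 *
 *     rate = tb_credits_per_period / tb_period
 *
 * "Full accuracy" fixes tb_credits_per_period = 1, so tb_period is
 * simply 1 / rate and credits arrive in the smallest possible steps. */
static uint64_t
tb_period_for_full_accuracy(double rate)	/* hypothetical helper */
{
	/* e.g. rate = 0.25 credits per time unit -> period = 4 units */
	return (uint64_t)(1.0 / rate);
}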

We have scaled both tb_credits_per_period and tb_period up by a factor 
of 10000, so that 10000 credits are added to the token bucket at a 
time. With this change, TC0 and TC2 share the bandwidth correctly, but 
shaping is not as accurate as before.
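
In code terms, the change we made amounts to the following (a sketch 
with a hypothetical helper name; the real fields live in the internal 
pipe profile in rte_sched.c):

/* Scaling both fields by the same factor preserves the configured rate
 * (rate = credits_per_period / period) but coarsens the refill: each
 * update now adds 'factor' credits at once, allowing the bucket to run
 * up to roughly 'factor' bytes ahead of the ideal fluid fill level. */
static void
tb_scale(uint64_t *tb_credits_per_period, uint64_t *tb_period,
	 uint64_t factor)
{
	*tb_credits_per_period *= factor;
	*tb_period *= factor;
}

The reduced accuracy we observe looks consistent with that coarser 
granularity: with factor = 10000, refills land in 10000-byte lumps 
instead of byte-sized ones.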

We are also unsure how this change will behave for different rates and 
different packet sizes.

Could you please help us choose values for tb_credits_per_period and 
tb_period that work well across different traffic rates and packet 
sizes?

Any help in resolving this issue would be much appreciated.

-- 
Thanks & Regards

Nagurvali Sayyad.


