DPDK usage discussions
* [dpdk-users] DPDK QOS scheduler priority starvation issue
@ 2016-04-26  4:02 Ashok Padhy
       [not found] ` <CACfChw2DNKT+V+5SfNjS8Tfvv0NNDdmTxE_fNLXLf7f3bd-+5Q@mail.gmail.com>
  0 siblings, 1 reply; 3+ messages in thread
From: Ashok Padhy @ 2016-04-26  4:02 UTC (permalink / raw)
  To: users

Hello Cristian,


We are running into an issue with the DPDK QoS scheduler: we notice that
lower-priority traffic (TC3) starves higher-priority traffic when the packet
size of the lower-priority traffic is smaller than the packet size of the
higher-priority traffic.

If the packet size of the lower-priority traffic (TC3) is the same as or
larger than that of the higher-priority traffic (TC1 or TC2), we don't see
the problem.

Using the queue index (q-index) within each TC (a rough parameter sketch
follows the list):

-Q0 (TC1): 1024-byte packets, 40m configured and 100m sent
-Q2 (TC3): 128/256-byte packets, 400m configured and 500m sent
-Only one pipe active (configured for 400m)
-Only one subport configured
-TC period is set to 10 ms
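
For reference, this roughly corresponds to the following rte_sched pipe
profile (a sketch only: rates assumed to be bytes/second as the API expects,
tb_size and the TC index mapping are assumptions, field names as in the
rte_sched.h of our DPDK version):

  #include <rte_sched.h>

  /* Sketch of the pipe profile described above -- values and TC index
   * mapping are assumed; rates in bytes/second, tc_period in ms. */
  static struct rte_sched_pipe_params pipe_profile = {
          .tb_rate   = 400000000,   /* pipe shaped to "400m" */
          .tb_size   = 1000000,     /* assumed token bucket size */
          .tc_rate   = {
                  40000000,         /* TC0: assumed, not used in this test */
                  40000000,         /* TC1 (the 1024-byte traffic): "40m" */
                  40000000,         /* TC2: assumed, not used in this test */
                  400000000,        /* TC3 (the small-packet traffic): "400m" */
          },
          .tc_period = 10,          /* 10 ms */
          .wrr_weights = { 1, 1, 1, 1, 1, 1, 1, 1,
                           1, 1, 1, 1, 1, 1, 1, 1 },
  };

The profile is registered through rte_sched_port_params.pipe_profiles and
applied with rte_sched_pipe_config().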

In this scenario TC3 carries most of the traffic (400m).
We are using an older version of DPDK; is this something that has been
addressed in later releases?

Appreciate any hint,

thanks
Ashok

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-users] DPDK QOS scheduler priority starvation issue
       [not found] ` <CACfChw2DNKT+V+5SfNjS8Tfvv0NNDdmTxE_fNLXLf7f3bd-+5Q@mail.gmail.com>
@ 2016-04-27 10:34   ` Dumitrescu, Cristian
  2016-04-27 13:17     ` Ashok Padhy
  0 siblings, 1 reply; 3+ messages in thread
From: Dumitrescu, Cristian @ 2016-04-27 10:34 UTC (permalink / raw)
  To: Ashok Padhy; +Cc: users

Hi Ashok,

I am not sure I understand what the issue is, as you do not provide the output rates. You mention the pipe is configured with 400m (assuming 400 million credits), pipe TC1 with 40m and TC3 with 400m, while the input traffic is 100m for pipe TC1 and 500m for pipe TC3. To me, the output (i.e. scheduled and TX-ed) traffic should be close to 40m (40 million bytes, including the Ethernet framing overhead of 20 bytes per frame) for pipe TC1 and close to 360m for pipe TC3.
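
Spelling that arithmetic out (still assuming "m" means millions of bytes per second):

  TC1 output = min(100m offered, TC1 rate of 40m)          ~=  40m
  TC3 output = min(500m offered, pipe 400m - TC1's 40m)    ~= 360m
  Pipe total                                               ~= 400m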

Once a pipe is selected for scheduling, we read the pipe and pipe TC credits only once (at the moment the pipe is selected, which is also when the pipe and pipe TC credits are updated), so we do not re-evaluate the pipe credits again until the next time the pipe is selected.
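
As a rough illustration of that credit handling (a simplified sketch, not the actual rte_sched code; four traffic classes and byte-based credits assumed):

  #include <stdint.h>

  struct pipe_credits {
          uint64_t tc_time;                  /* start of the next TC period */
          uint64_t tc_period;                /* e.g. 10 ms worth of time units */
          uint32_t tc_credits_per_period[4]; /* tc_rate * tc_period, in bytes */
          uint32_t tc_credits[4];            /* bytes left in the current period */
  };

  /* Runs only when the pipe is selected by the scheduler; this is the
   * only point where the TC credits are refilled. */
  static void
  pipe_credits_update(struct pipe_credits *p, uint64_t now)
  {
          int tc;

          if (now >= p->tc_time) {
                  for (tc = 0; tc < 4; tc++)
                          p->tc_credits[tc] = p->tc_credits_per_period[tc];
                  p->tc_time = now + p->tc_period;
          }
  }

  /* Per-packet check while the pipe stays selected; credits are not
   * re-read or refilled until the pipe is selected again. */
  static int
  pipe_credits_check(struct pipe_credits *p, uint32_t tc, uint32_t pkt_len)
  {
          if (pkt_len > p->tc_credits[tc])
                  return 0;          /* not enough credit left this round */
          p->tc_credits[tc] -= pkt_len;
          return 1;                  /* packet can be scheduled */
  }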

The hierarchical scheduler is only accurate when many (hundreds or thousands of) pipes are active; it looks like you are using only a single pipe for your test, so please retry with more pipes active.

Regards,
Cristian


^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-users] DPDK QOS scheduler priority starvation issue
  2016-04-27 10:34   ` Dumitrescu, Cristian
@ 2016-04-27 13:17     ` Ashok Padhy
  0 siblings, 0 replies; 3+ messages in thread
From: Ashok Padhy @ 2016-04-27 13:17 UTC (permalink / raw)
  To: Dumitrescu, Cristian; +Cc: users

Hi Cristian,

Thanks for the response. The output rates are: TC1 carries only about
10,000 bytes per second, while TC3 carries ~400m.
As I said, the high-priority traffic in TC1 is starved out by the
low-priority traffic in TC3.

I want to re-emphasize that this issue shows up only when the packet size
of the traffic sent in TC3 is small (128-256 bytes or smaller), while the
packet size of the traffic sent in TC1 is larger (1024 bytes).

I also want to point out that the issue is seen only when the pipe is
oversubscribed. Please note that the shaping rate of the pipe is always
honored, i.e. we always get around 400m, as configured on the pipe.

The degree of starvation depends on how small the packet size in TC3 is;
the moment I increase it beyond 256 bytes, the issue goes away.
Please note I only test with packet sizes of 64, 128, 256, 1024 bytes, etc.

I have tried with 4 active pipes and still see the issue; I haven't tried
more than that.
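
For completeness, the additional pipes are enabled the usual way with
rte_sched_pipe_config() (a sketch; subport and profile ids assumed):

  #include <rte_sched.h>

  /* Sketch: enable n_pipes pipes on subport 0, all using pipe profile 0
   * ("port" already created with rte_sched_port_config()). */
  static int
  enable_pipes(struct rte_sched_port *port, uint32_t n_pipes)
  {
          uint32_t pipe;

          for (pipe = 0; pipe < n_pipes; pipe++) {
                  int ret = rte_sched_pipe_config(port, 0, pipe, 0);
                  if (ret != 0)
                          return ret;  /* propagate the rte_sched error */
          }
          return 0;
  }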

Thanks
Ashok



^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2016-04-27 13:17 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
2016-04-26  4:02 [dpdk-users] DPDK QOS scheduler priority starvation issue Ashok Padhy
     [not found] ` <CACfChw2DNKT+V+5SfNjS8Tfvv0NNDdmTxE_fNLXLf7f3bd-+5Q@mail.gmail.com>
2016-04-27 10:34   ` Dumitrescu, Cristian
2016-04-27 13:17     ` Ashok Padhy
