DPDK usage discussions
From: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
To: Ashok Padhy <ashokpadhy@gmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] DPDK QOS scheduler priority starvation issue
Date: Wed, 27 Apr 2016 10:34:10 +0000	[thread overview]
Message-ID: <3EB4FA525960D640B5BDFFD6A3D89126479A651C@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <CACfChw2DNKT+V+5SfNjS8Tfvv0NNDdmTxE_fNLXLf7f3bd-+5Q@mail.gmail.com>

Hi Ashok,

I am not sure I understand what the issue is, as you do not provide the output rates. You mention the pipe is configured with 400m (assuming 400 million credits), with pipe TC1 configured with 40m and pipe TC3 with 400m, while the input traffic is 100m for pipe TC1 and 500m for pipe TC3. To me, the output (i.e. scheduled and TX-ed) traffic should be close to 40m (40 million bytes, including the Ethernet framing overhead of 20 bytes per frame) for pipe TC1 and close to 360m for pipe TC3.
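In other words (a quick back-of-the-envelope check; purely illustrative C, assuming the rates above are expressed in credits, i.e. bytes per second, with framing overhead already included):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* Values taken from your description, in credits (bytes) per second. */
        double pipe_rate = 400e6;  /* pipe rate        */
        double tc1_rate  = 40e6;   /* pipe TC1 rate    */
        double tc3_rate  = 400e6;  /* pipe TC3 rate    */
        double tc1_in    = 100e6;  /* offered TC1 load */
        double tc3_in    = 500e6;  /* offered TC3 load */

        double tc1_out = fmin(tc1_in, tc1_rate);
        double tc3_out = fmin(tc3_in, fmin(tc3_rate, pipe_rate - tc1_out));

        /* Prints ~40m for TC1 and ~360m for TC3. */
        printf("TC1 out ~ %.0fm, TC3 out ~ %.0fm\n",
               tc1_out / 1e6, tc3_out / 1e6);
        return 0;
    }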

Once a pipe is selected for scheduling, we read the pipe and pipe TC credits only once (at the moment the pipe is selected, which is also when the pipe and pipe TC credits are updated), so we do not re-evaluate the pipe credits again until the next time the pipe is selected.
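A simplified sketch of that order of operations (illustrative C only; the names and structures below are mine, not the actual rte_sched.c code):

    #include <stdint.h>

    struct pipe_state {
        uint64_t tb_credits;     /* pipe token-bucket credits (bytes) */
        uint64_t tc_credits[4];  /* per-traffic-class credits (bytes) */
    };

    /* Runs once, at the moment the grinder selects the pipe: this is
     * the only point where the pipe and pipe TC credits are updated. */
    static void on_pipe_selected(struct pipe_state *p, uint64_t tb_refill,
                                 const uint64_t tc_refill[4])
    {
        p->tb_credits += tb_refill;
        for (int tc = 0; tc < 4; tc++)
            p->tc_credits[tc] = tc_refill[tc];
    }

    /* Runs for every candidate packet while the pipe stays selected:
     * the snapshot taken above is consumed, but it is not re-evaluated
     * against elapsed time until the pipe is selected again. */
    static int try_send(struct pipe_state *p, int tc, uint32_t pkt_len)
    {
        if (pkt_len > p->tc_credits[tc] || pkt_len > p->tb_credits)
            return 0;                  /* not enough credits, skip      */
        p->tc_credits[tc] -= pkt_len;  /* pkt_len includes 20 B framing */
        p->tb_credits     -= pkt_len;
        return 1;
    }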

The hierarchical scheduler is only accurate when many (hundreds or thousands of) pipes are active; it looks like you are only using a single pipe for your test, so please retry with more pipes active.
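If it helps, something along these lines (a sketch against the rte_sched API; error handling and port/profile setup omitted) enables many pipes with the same profile, so the offered load can be spread across them:

    #include <rte_sched.h>

    /* 'port' already created with rte_sched_port_config(), with pipe
     * profile 0 holding the 400m / TC1 40m / TC3 400m settings. */
    static int enable_pipes(struct rte_sched_port *port, uint32_t n_pipes)
    {
        for (uint32_t pipe_id = 0; pipe_id < n_pipes; pipe_id++) {
            int ret = rte_sched_pipe_config(port, 0 /* subport */,
                                            pipe_id, 0 /* profile */);
            if (ret != 0)
                return ret;
        }
        return 0;
    }

The classification stage then needs to spread the flows across these pipes (e.g. by hashing), rather than mapping everything onto pipe 0.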

Regards,
Cristian


Hello Cristian,

We are running into an issue with the DPDK QoS scheduler: we notice that lower priority traffic (TC3) starves higher priority traffic when the packet size of the lower priority traffic is smaller than the packet size of the higher priority traffic.
If the packet size of the lower priority traffic (TC3) is the same as or larger than that of the higher priority traffic (TC1 or TC2), we don't see the problem.
Using the q-index within the TC:

-Q0 (TC1), 1024-byte packets, 40m configured and 100m sent
-Q2 (TC3), 128/256-byte packets, 400m configured and 500m sent
-Only one pipe active (configured for 400m)
-Only one subport configured
-TC period is set to 10 ms
In this scenario TC3 carries most of the traffic (400m).
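For reference, with the 10 ms TC period the per-period budgets work out roughly as below (illustrative C only; this assumes the configured rates are credits, i.e. bytes per second):

    #include <stdio.h>

    int main(void)
    {
        double tc_period = 0.010;                          /* 10 ms            */
        double tc1_rate  = 40e6, tc3_rate = 400e6;         /* configured rates */
        double tc1_pkt   = 1024 + 20, tc3_pkt = 128 + 20;  /* + framing        */

        /* ~400,000 credits -> ~383 TC1 packets per period,
         * ~4,000,000 credits -> ~27,000 TC3 packets per period. */
        printf("TC1: %.0f credits/period -> ~%.0f packets/period\n",
               tc1_rate * tc_period, tc1_rate * tc_period / tc1_pkt);
        printf("TC3: %.0f credits/period -> ~%.0f packets/period\n",
               tc3_rate * tc_period, tc3_rate * tc_period / tc3_pkt);
        return 0;
    }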
We are using an older version of DPDK; is this something addressed in later releases?
Appreciate any hint,
thanks
Ashok




Thread overview: 3+ messages
2016-04-26  4:02 Ashok Padhy
     [not found] ` <CACfChw2DNKT+V+5SfNjS8Tfvv0NNDdmTxE_fNLXLf7f3bd-+5Q@mail.gmail.com>
2016-04-27 10:34   ` Dumitrescu, Cristian [this message]
2016-04-27 13:17     ` Ashok Padhy
