DPDK patches and discussions
From: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
To: 'Srikanth Akula' <srikanth044@gmail.com>, "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Max throughput Using QOS Scheduler
Date: Thu, 6 Nov 2014 20:37:46 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D891263229A072@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <CA+8eA5=ErEWHtQJmy6gePFghdS8Li-OP3dfktiOvO6eEkEtGCg@mail.gmail.com>

Hi Srikanth,

>>Is there any difference in scheduler behavior between the above two scenarios while enqueuing and dequeuing?
All the pipe queues share the bandwidth allocated to their pipe. The distribution of available pipe bandwidth between the pipe queues is governed by features like traffic class strict priority, bandwidth sharing between pipe traffic classes, weights of the queues within the same traffic class, etc. In the case you mention, you are just using one queue for each traffic class.

Let’s take an example:

- Configuration: pipe rate = 10 Mbps; pipe traffic class 0..3 rates = [20% of pipe rate = 2 Mbps, 30% of pipe rate = 3 Mbps, 40% of pipe rate = 4 Mbps, 100% of pipe rate = 10 Mbps]. The convention is that traffic class 0 is the highest priority.

- Injected traffic per traffic class for this pipe: [3, 0, 0, 0] Mbps => output traffic per traffic class: [2, 0, 0, 0] Mbps

- Injected traffic per traffic class: [0, 0, 0, 15] Mbps => output traffic per traffic class: [0, 0, 0, 10] Mbps

- Injected traffic per traffic class: [1, 10, 2, 15] Mbps => output traffic per traffic class: [1, 3, 2, 4] Mbps
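To spell out the arithmetic of that last example, here is a tiny standalone model. This is not the rte_sched implementation, just the strict-priority-plus-rate-caps bookkeeping it enforces; all the names and numbers are ours, for illustration only:

#include <stdio.h>

int main(void)
{
	const double pipe_rate  = 10.0;                   /* Mbps */
	const double tc_rate[4] = {2.0, 3.0, 4.0, 10.0};  /* per-TC caps */
	const double in[4]      = {1.0, 10.0, 2.0, 15.0}; /* third example */
	double out[4], left = pipe_rate;

	for (int tc = 0; tc < 4; tc++) {          /* TC0 is served first */
		double grant = in[tc];
		if (grant > tc_rate[tc]) grant = tc_rate[tc]; /* TC cap   */
		if (grant > left)        grant = left;        /* pipe cap */
		out[tc] = grant;
		left -= grant;
	}
	printf("out = [%g, %g, %g, %g] Mbps\n",
	       out[0], out[1], out[2], out[3]); /* out = [1, 3, 2, 4] Mbps */
	return 0;
}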

Makes sense?

>>Queue size is 64, and the number of packets enqueued and dequeued is 64 as well.
I strongly recommend that you never use a dequeue burst size equal to the enqueue burst size, as performance will be bad.

In the qos_sched sample app, we use [enqueue burst size, dequeue burst size] set to [64, 32]; other reasonable values could be [64, 48], [32, 16], etc. An enqueue burst bigger than the dequeue burst lets the traffic manager/port scheduler, which is a big packet reservoir, fill up to a reasonable level that allows dequeue to function optimally; after that, the system regulates itself.

The reason is: since we interleave enqueue and dequeue calls, if on every iteration you push e.g. 64 packets in and then look to get 64 packets out, you will only ever have 64 packets in the queues; the scheduler then works hard to find them, and you get out exactly those 64 packets that you pushed in.
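To make the shape of that loop concrete, here is a minimal sketch in the style of the qos_sched sample app. The sched_loop() name and the port/queue ids are placeholders, port is assumed to be an already-configured struct rte_sched_port, and packets are assumed to be already stamped via rte_sched_port_pkt_write():

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_sched.h>

#define ENQ_BURST 64   /* enqueue burst size */
#define DEQ_BURST 32   /* dequeue burst size, deliberately smaller */

static void
sched_loop(struct rte_sched_port *port)
{
	struct rte_mbuf *rx_pkts[ENQ_BURST];
	struct rte_mbuf *tx_pkts[DEQ_BURST];

	for (;;) {
		/* Read up to 64 packets and push them into the scheduler
		 * (rte_sched_port_enqueue frees what it cannot accept). */
		uint16_t n_rx = rte_eth_rx_burst(0, 0, rx_pkts, ENQ_BURST);
		if (n_rx > 0)
			rte_sched_port_enqueue(port, rx_pkts, n_rx);

		/* Pull out at most 32, so the scheduler keeps a backlog
		 * and dequeue always has packets to choose from. */
		int n_tx = rte_sched_port_dequeue(port, tx_pkts, DEQ_BURST);
		if (n_tx > 0)
			rte_eth_tx_burst(0, 0, tx_pkts, n_tx);
	}
}

(In real code, packets that rte_eth_tx_burst could not send would need to be retried or freed, as the sample app does.)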

>>And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?
The QoS code has been pretty stable since release 1.4, with not many improvements added (maybe it’s the right time to revisit this feature and push it to the next level …), but there are improvements in other DPDK libraries that QoS depends on (e.g. packet Rx/Tx).

Hope this helps.

Regards,
Cristian



From: Srikanth Akula [mailto:srikanth044@gmail.com]
Sent: Thursday, October 30, 2014 4:10 PM
To: dev@dpdk.org; Dumitrescu, Cristian
Subject: Max throughput Using QOS Scheduler

Hello All,

I am currently trying to implement the QoS scheduler using DPDK 1.6. I have configured 1 subport, 4096 pipes for the subport, and 4 TCs with 4 queues each.
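My configuration looks roughly like the sketch below, using the DPDK 1.6-era rte_sched API (the rates, token-bucket sizes, and periods here are placeholders, not my exact values):

#include <rte_sched.h>

static struct rte_sched_pipe_params pipe_profiles[] = {
	{
		.tb_rate   = 305175,   /* placeholder: ~10 Gbps / 4096 pipes, bytes/s */
		.tb_size   = 1000000,
		.tc_rate   = {305175, 305175, 305175, 305175},
		.tc_period = 40,
		.wrr_weights = {1, 1, 1, 1, 1, 1, 1, 1,
		                1, 1, 1, 1, 1, 1, 1, 1},  /* 4 TCs x 4 queues */
	},
};

static struct rte_sched_port_params port_params = {
	.name = "port_0",
	.socket = 0,
	.rate = 1250000000,             /* placeholder: 10 Gbps in bytes/s */
	.mtu = 1522,
	.frame_overhead = 24,
	.n_subports_per_port = 1,
	.n_pipes_per_subport = 4096,
	.qsize = {64, 64, 64, 64},      /* queue size 64, per TC */
	.pipe_profiles = pipe_profiles,
	.n_pipe_profiles = 1,
};

The port is then brought up with rte_sched_port_config(), rte_sched_subport_config(), and rte_sched_pipe_config() per pipe.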

Currently I am trying to send packets destined to a single queue of the 16 available queues of one of the pipes.
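For context, I stamp each packet with its (subport, pipe, traffic class, queue) position before enqueue via rte_sched_port_pkt_write(); roughly like this for "Q0 of TC0" (the classify() helper name and pipe_id argument are placeholders for illustration):

#include <rte_mbuf.h>
#include <rte_meter.h>
#include <rte_sched.h>

static void
classify(struct rte_mbuf *pkt, uint32_t pipe_id)
{
	rte_sched_port_pkt_write(pkt,
		0,                  /* subport 0 (the only one) */
		pipe_id,            /* one of the 4096 pipes    */
		0,                  /* traffic class 0          */
		0,                  /* queue 0 within that TC   */
		e_RTE_METER_GREEN); /* packet color             */
}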

Could somebody explain what throughput we can achieve using this scheme? I am asking because I see different behavior each time I send traffic destined to different destination queues.

For example:

1. (Only one stream) Stream destined to Q0 of TC0.

2. (4 streams) 1st stream destined to Q3 of TC3,
               2nd stream destined to Q2 of TC2,
               3rd stream destined to Q1 of TC1,
               4th stream destined to Q0 of TC0.

Is there any difference in scheduler behavior between the above two scenarios while enqueuing and dequeuing?

Queue size is 64, and the number of packets enqueued and dequeued is 64 as well.
And what improvements would I gain if I move to DPDK 1.7 w.r.t. QoS?


Could you please clarify my queries?


Thanks & Regards,
Srikanth


