DPDK patches and discussions
From: Ariel Rodriguez <arodriguez@callistech.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] 4 Traffic classes per Pipe limitation
Date: Fri, 29 Nov 2013 20:33:05 -0200	[thread overview]
Message-ID: <CADoa0bZv=44PRbf7Fv3fEBmzeiigTzZ+AK63MvL2fQLjXCmjBw@mail.gmail.com> (raw)
In-Reply-To: <20131129132611.2ed0335c@nehalam.linuxnetplumber.net>

     Ok, that gives me the reason I needed. Yes, I could change the number of
bits of, for example, the pipe field, which is 20 bits, but we need around a
million pipes (the telecom has a million concurrent subscribers). Thank you so
much; I have to think about this. For the moment I believe we will use the
4 traffic classes and group the different protocols into a traffic class.
     Maybe later I will ask some questions about traffic metering.

Thank you again, best regards,

Ariel Horacio Rodriguez, Callis Technologies.

On Fri, Nov 29, 2013 at 6:26 PM, Stephen Hemminger <stephen@networkplumber.org> wrote:

> On Fri, 29 Nov 2013 17:50:34 -0200
> Ariel Rodriguez <arodriguez@callistech.com> wrote:
>
> >          Thanks for the answer, your explanation was perfect.
> > Unfortunately, the client requirements are what they are: at the traffic
> > control level we need around 64 traffic metering controllers (traffic
> > classes) per subscriber.
>
> I think you may be confused by the Intel QoS naming. It is better to
> think of it as 3 different classification levels and not get too hung
> up on the naming.
>
> The way to do what you want is with 64 different 'pipes'.
> In our usage:
>         subport => VLAN
>         pipe    => subscriber matched by tuple
>         traffic class => mapping from DSCP to TC
>
>
> >           Each subscriber has a global plan rate (each pipe has the same
> > token bucket configuration), and inside that plan there are different
> > rules for the traffic (traffic classes). For example, Facebook, Twitter,
> > and WhatsApp traffic have different plan rates, lower than the global
> > plan rate and different from the other protocols. We could group those
> > into one traffic class, but the limit of 4 traffic classes is still a
> > strong limitation for us, because each protocol mapped to a traffic class
> > shares the same configuration (Facebook and Twitter traffic would have
> > the same rate; worse, they would compete for the same traffic class rate).
> >           We have to compete against Cisco's bandwidth control solution,
> > and at a minimum we need to offer the same features. The Cisco solution
> > is a DPI but also a traffic control solution; it permits prioritization
> > of traffic and manages congestion inside the network per subscriber and
> > per application service. So apparently the DPDK QoS scheduler doesn't
> > fit our needs.
> >           Anyway, I still don't understand the traffic class limitation.
> > Inside the DPDK code of the QoS scheduler I saw this:
> >
> > /** Number of queues per pipe traffic class. Cannot be changed. */
> > #define RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS    4
>
> >          I followed where the code uses that define, and except for the
> > struct rte_sched_port_hierarchy, where a two-bit field (0..3) is
> > mandatory, I don't see where the limitation is (except for performance).
> > Is it worth changing the code to support more than 4 traffic classes?
> > Well, I could try to change the code, more precisely, hehe. I just want
> > to know if there is another limitation beyond a design decision on that
> > number. I don't want to make the effort for nothing; maybe you guys can
> > help me understand the reason for the limitation.
> >           I rely heavily on DPDK to feed our DPI solution; I won't change
> > that, because it works great!!! But it's difficult to develop traffic
> > control management from scratch, integrated with DPDK in a clean way,
> > without touching the DPDK API. You guys have already done that with the
> > QoS scheduler, and I don't want to reinvent the wheel.
> >           Again, thank you for the patience and for your expertise.
>
>
> The limitation on the number of TCs (and pipes) comes from the number of
> bits available. Since the QoS code overloads the 32-bit RSS field in
> the mbuf, there aren't enough bits to go around. Then again, if you add
> lots of pipes or subports, the memory footprint gets huge.
>
> Since it is open source, you could reduce the number of bits for one
> field and increase another. But having lots of priority classes
> would lead to poor performance and potential starvation.
>


Thread overview: 10+ messages
2013-11-28 20:52 Ariel Rodriguez
2013-11-29 18:45 ` Dumitrescu, Cristian
2013-11-29 19:50   ` Ariel Rodriguez
2013-11-29 21:26     ` Stephen Hemminger
2013-11-29 22:33       ` Ariel Rodriguez [this message]
2015-06-05 17:06 Yeddula, Avinash
2015-06-05 20:50 ` Dumitrescu, Cristian
2015-06-06 21:05   ` Michael Sardo
2015-06-06 23:23     ` Michael Sardo
2015-06-06 23:39       ` Dumitrescu, Cristian
