From: Venky Venkatesh <vvenkatesh@paloaltonetworks.com>
To: "Singh, Jasvinder" <jasvinder.singh@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [2nd Try]:Re: Traffic Management API Questions
Date: Wed, 11 Jan 2023 03:55:56 -0800 [thread overview]
Message-ID: <CAJ4WCt+S05TdGJxLB6biYXsdNMH1886zz6BFqbp0_Qe5Zdy0NQ@mail.gmail.com> (raw)
In-Reply-To: <IA1PR11MB629058197ED571B3EDF2A339E0FF9@IA1PR11MB6290.namprd11.prod.outlook.com>
[-- Attachment #1: Type: text/plain, Size: 5287 bytes --]
Hi Jasvinder,
Thanks for the detailed answers. Our need is to have shaping at the port
level as well. I am trying to see what would be the way to accomplish this
given the current limitations of the sched library implementation in this
regard. I see 2 options:
- The top level (i.e. port level) documentation says the following:
"Output Ethernet port 1/10/40 GbE" and "Multiple ports are scheduled in
round robin order with all ports having equal priority". Questions:
- Do all the ports have to be of the same speed OR can it be a
heterogeneous set of port speeds?
- If it can be a heterogeneous set of ports, is the scheduling across
those ports *weighted* round robin as opposed to round robin?
- Are Speeds other than 1/10/40 GbE not supported?
- I suppose this heterogeneous mix of port speeds is implemented by
playing with the weights across ports, correct?
- If so, what problem do you foresee if we provide arbitrary
bandwidth ports by regulating the above weights?
- The other alternative would be to add another layer (which has a
shaper) to the hierarchy by mimicking one of the existing layers: how
amenable is the current implementation to that?
Do either of the above look like workable ideas? Are there any other
approaches where we could accomplish our requirement with minimal changes
to the code logic?
Thanks
-Venky
On Tue, Jan 10, 2023 at 2:54 AM Singh, Jasvinder <jasvinder.singh@intel.com>
wrote:
> Hi Venky,
>
>
>
> Please see inline.
>
>
>
> Thanks,
>
> Jasvinder
>
>
>
>
>
> *From:* Venky Venkatesh <vvenkatesh@paloaltonetworks.com>
> *Sent:* Tuesday, January 10, 2023 8:52 AM
> *To:* dev@dpdk.org
> *Subject:* [2nd Try]:Re: Traffic Management API Questions
>
>
>
> Hi,
>
> Can someone pls get back on these
>
> Thanks
>
> -Venky
>
>
>
> On Thu, Jan 5, 2023 at 4:07 AM Venky Venkatesh <
> vvenkatesh@paloaltonetworks.com> wrote:
>
> Hi,
>
> I was looking at the DPDK Traffic Management API. I wanted to clarify some
> things that I understand from the code (for software based TM
> implementation (at 20.11)) vs the documentation.
>
> · The documentation says "Traffic shaping: single/*dual rate**,* private
> (*per node*) and shared (by *multiple nodes*) shapers" are supported.
> However it appears that the code supports only *single *rate shapers. Is
> my understanding correct?
>
> [JS] – Yes, TM API supports single and dual rate shapers, privately per
> node as well as shared across multiple nodes. However, DPDK QoS scheduler
> library implements single rate shaper at nodes.
>
> o If not, pls point me to where dual rate shaping is supported in the
> software based TM implementation code.
>
> o However, if my understanding is correct, can the authors clarify the
> nature of issues they ran into in supporting dual rate (which thus
> prevented them from implementing it)?
>
> [JS] – There isn’t any issue except more complexity. Author can rework the
> library to implement the dual rate shapers for the desired nodes depending
> upon the requirement.
>
> · The documentation comment above sounds like every node can have
> shapers. However it appears that the code does not support shaping at the
> port level. Again the same questions as above(regarding the accuracy of my
> understanding and if it is accurate, the reasons from the author for not
> supporting it)
>
> [JS] – Implementation supports shapers at subport (group of pipes) and
> pipe level. The bandwidth available at the port level is distributed among
> the subports with the condition that aggregate bandwidth of subports should
> not exceed the port bandwidth. Each subport buffers and shape the traffic
> from the pipes depending upon the port bandwidth allocated to it.
> Implementation doesn’t support distribution of unused bandwidth of one
> subport to another subport. However, one can modify this behaviour if
> needed.
>
> · At the level of the TM API (*and* the associated software TM
> implementation) are there any restrictions on the number of levels of QoS
> hierarchy we can construct?
>
> [JS] – TM API doesn’t restrict the number of QoS scheduler levels and
> generic enough to work with hierarchical schedulers with any number of
> levels. The current dpdk sched library implementation supports fixed 5
> level scheduler hierarchy.
>
> · Lastly, does the QoS framework API (which I suppose is built on
> lower level building blocks including the TM API) expose the entire
> capabilities of the TM API (e.g. dual rate shapers, shapers at port level,
> > 4 levels of shaping etc.)? From the reading of the documentation it
> appears that there may be restrictions imposed by the QoS framework API on
> top of what TM API imposes. Can someone pls confirm this (and if so, the
> reason for doing so)?
>
> [JS] – No, QoS framework API (DPDK sched library) presents only one
> flavour of hierarchical scheduler and doesn’t implements all the features
> exposed through TM API. However, more features can be added to library and
> configured through TM API.
>
>
>
> Thanks
>
> -Venky
>
>
>
>
[-- Attachment #2: Type: text/html, Size: 9343 bytes --]
next prev parent reply other threads:[~2023-01-11 11:56 UTC|newest]
Thread overview: 8+ messages / expand[flat|nested] mbox.gz Atom feed top
2023-01-05 12:07 Venky Venkatesh
2023-01-10 8:52 ` [2nd Try]:Re: " Venky Venkatesh
2023-01-10 10:54 ` Singh, Jasvinder
2023-01-11 11:55 ` Venky Venkatesh [this message]
2023-01-11 17:24 ` Singh, Jasvinder
2023-01-16 8:05 ` Venky Venkatesh
2023-01-16 11:38 ` Singh, Jasvinder
2023-01-16 13:59 ` Venky Venkatesh
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=CAJ4WCt+S05TdGJxLB6biYXsdNMH1886zz6BFqbp0_Qe5Zdy0NQ@mail.gmail.com \
--to=vvenkatesh@paloaltonetworks.com \
--cc=dev@dpdk.org \
--cc=jasvinder.singh@intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).