DPDK patches and discussions
From: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
To: Alan Robertson <aroberts@Brocade.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Thomas Monjalon <thomas.monjalon@6wind.com>
Subject: Re: [dpdk-dev] [RFC] ethdev: abstraction layer for QoS hierarchical scheduler
Date: Wed, 7 Dec 2016 19:52:01 +0000	[thread overview]
Message-ID: <3EB4FA525960D640B5BDFFD6A3D8912652711302@IRSMSX108.ger.corp.intel.com> (raw)
In-Reply-To: <6d862b500e1e4f34a4cbf790db8d5d48@EMEAWP-EXMB11.corp.brocade.com>

Hi Alan,

Thanks for your comments!


> Hi Cristian,

> Looking at points 10 and 11 it's good to hear nodes can be dynamically added.

Yes, many implementations allow on-the-fly remapping of a node from one parent to another,
or simply adding more nodes post-initialization, so it is natural for the API to provide this.
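
As a rough sketch of what such an API surface could look like (all names, types and
signatures below are hypothetical placeholders used purely for illustration, not the
proposed API):

  #include <stdint.h>

  /* Hypothetical placeholders, not the RFC API: a node is identified by an
   * opaque 32-bit id and hangs off a parent node within the port hierarchy. */
  #define SCHED_NODE_ID_NULL UINT32_MAX    /* assumed "no parent" / root marker */

  /* Add a node under an existing parent at run time (post-init). */
  int sched_hier_node_add(uint8_t port_id, uint32_t node_id,
                          uint32_t parent_node_id, uint32_t priority,
                          uint32_t weight);

  /* Re-map an existing node (and implicitly its subtree) to another parent,
   * e.g. when a session/tunnel is moved to a different aggregate. */
  int sched_hier_node_parent_update(uint8_t port_id, uint32_t node_id,
                                    uint32_t new_parent_node_id);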


> We've been trying to decide the best way to do this for support of qos on tunnels for
> some time now, and the existing implementation doesn't allow this, which effectively ruled
> out hierarchical queueing for tunnel targets on the output interface.

> Having said that, has thought been given to separating the queueing from being so closely
> tied to the Ethernet transmit process ?   When queueing on a tunnel for example we may
> be working with encryption.  When running with an anti-replay window it is really much
> better to do the QOS (packet reordering) before the encryption.  To support this would
> it be possible to have a separate scheduler structure which can be passed into the
> scheduling API?  This means the calling code can hang the structure off whatever entity
> it wishes to perform QoS on, and we get dynamic target support (sessions/tunnels etc.).

Yes, this is one point where we need to look for a better solution. The current proposal attaches
the hierarchical scheduler function to an ethdev, so scheduling traffic for tunnels that have a
pre-defined bandwidth is not supported nicely. This question was also raised in VPP, but
there tunnels are supported as a type of output interface, so attaching scheduling to an
output interface also covers the tunnel case.

It looks to me like clean tunnel abstractions are a gap in DPDK as well. Any thoughts on
how tunnels should be supported in DPDK? What do other people think about this?
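
One possible direction for the tunnel case, sketched very loosely here (every sched_obj_*
and tunnel_encrypt name below is hypothetical, invented only to illustrate the flow; only
rte_eth_tx_burst() is a real DPDK call), is a scheduler object created independently of any
ethdev, so QoS and its packet reordering happen before encryption and the anti-replay
window at the receiver is not disturbed:

  #include <stdint.h>
  #include <rte_mbuf.h>
  #include <rte_ethdev.h>

  /* Hypothetical port-independent scheduler object and helpers (placeholders). */
  struct sched_obj;
  int      sched_obj_enqueue(struct sched_obj *s, struct rte_mbuf **pkts, uint16_t n);
  uint16_t sched_obj_dequeue(struct sched_obj *s, struct rte_mbuf **pkts, uint16_t n);
  uint16_t tunnel_encrypt(struct rte_mbuf **pkts, uint16_t n);  /* assumed crypto step */

  #define BURST_SIZE 32

  static void
  tunnel_tx(struct sched_obj *s, uint8_t port_id, uint16_t queue_id,
            struct rte_mbuf **pkts, uint16_t n)
  {
      struct rte_mbuf *out[BURST_SIZE];
      uint16_t n_out;

      sched_obj_enqueue(s, pkts, n);                    /* classify + queue per tunnel hierarchy */
      n_out = sched_obj_dequeue(s, out, BURST_SIZE);    /* scheduler decides the packet order */
      n_out = tunnel_encrypt(out, n_out);               /* encrypt only after QoS/reordering */
      rte_eth_tx_burst(port_id, queue_id, out, n_out);  /* regular ethdev TX path */
  }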


> Regarding the structure allocation, would it be possible to make the number of queues
> associated with a TC a compile-time option which the scheduler would accommodate?
> We frequently only use one queue per tc, which means 75% of the space allocated at
> the queueing layer for that tc is never used.  This may be specific to our implementation,
> but if other implementations do the same it would help to hear from folks so we can get a
> better idea of whether this is a common case.
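
For context, the fan-out referred to above is fixed at build time in librte_sched
(4 traffic classes per pipe, 4 queues per traffic class); a build-time override along
the lines sketched below (hypothetical, not current librte_sched code) is roughly what
the request amounts to:

  /* Sketch of a build-time override; the #ifndef guard is hypothetical,
   * the default values match librte_sched of this era. */
  #ifndef RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS
  #define RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS 4      /* default: 4 queues per TC */
  #endif

  #define RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE 4
  #define RTE_SCHED_QUEUES_PER_PIPE \
      (RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS)

  /* Built with -DRTE_SCHED_QUEUES_PER_TRAFFIC_CLASS=1 (one queue per TC),
   * a pipe would carry 4 queues' worth of state instead of 16, i.e. the
   * ~75% saving at the queueing layer mentioned above. */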

> Whilst touching on the scheduler, the token replenishment works using a division and
> multiplication, obviously to cater for the fact that it may be run after several tc windows
> have passed.  The most commonly used industrial scheduler simply does a lapsed-window check
> on the tc and then adds the bc.  This relies on the scheduler being called within the tc
> window though.  It would be nice to have this as a configurable option since it's much more
> efficient, assuming the infra code from which it's called can guarantee the calling frequency.

This is probably feedback for librte_sched as opposed to the current API proposal, as the
latter is intended to be generic/implementation-agnostic and therefore its scope far
exceeds the existing set of librte_sched features.
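
That said, to make the two replenishment styles concrete, here is a simplified side-by-side
sketch (illustrative only; the struct and field names are made up and this is not the actual
librte_sched code):

  #include <stdint.h>

  struct tc_bucket {                   /* hypothetical per-TC credit state */
      uint64_t tc_time;                /* start of the current tc window */
      uint64_t tc_period;              /* window length (same units as 'now') */
      uint64_t tc_credits;             /* credits currently available */
      uint64_t tc_credits_per_period;  /* credits added per elapsed window (the "bc") */
      uint64_t tc_size;                /* upper bound on accumulated credits */
  };

  /* Division-based update: correct even when several windows have elapsed
   * since the last call, at the cost of a divide and a multiply. */
  static void
  tc_credits_update_div(struct tc_bucket *b, uint64_t now)
  {
      uint64_t n_periods = (now - b->tc_time) / b->tc_period;

      b->tc_credits += n_periods * b->tc_credits_per_period;
      if (b->tc_credits > b->tc_size)
          b->tc_credits = b->tc_size;
      b->tc_time += n_periods * b->tc_period;
  }

  /* Lapse-based update: a single compare-and-add, valid only if the caller
   * guarantees it runs at least once per tc window. */
  static void
  tc_credits_update_lapse(struct tc_bucket *b, uint64_t now)
  {
      if (now - b->tc_time >= b->tc_period) {
          b->tc_credits += b->tc_credits_per_period;
          if (b->tc_credits > b->tc_size)
              b->tc_credits = b->tc_size;
          b->tc_time = now;
      }
  }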

Btw, we do plan on using librte_sched as the default software fall-back when the ethdev
HW is not scheduler-enabled, as well as the implementation of choice for a lot of
use-cases where it fits really well, so we do have to continue to evolve and improve
librte_sched feature-wise and performance-wise.


> I hope you'll consider these points for inclusion into a future road map.  Hopefully in the
> future my employer will increase the priority of some of the tasks and a PR may appear
> on the mailing list.

> Thanks,
> Alan.


Thread overview: 13+ messages
2016-11-30 18:16 Cristian Dumitrescu
2016-12-06 19:51 ` Stephen Hemminger
2016-12-06 22:14   ` Thomas Monjalon
2016-12-07 20:13     ` Dumitrescu, Cristian
2016-12-07 19:03   ` Dumitrescu, Cristian
     [not found] ` <57688e98-15d5-1866-0c3a-9dda81621651@brocade.com>
2016-12-07 10:58   ` Alan Robertson
2016-12-07 19:52     ` Dumitrescu, Cristian [this message]
2016-12-08 15:41       ` Alan Robertson
2016-12-08 17:18         ` Dumitrescu, Cristian
2016-12-09  9:28           ` Alan Robertson
2016-12-08 10:14     ` Bruce Richardson
2017-01-11 13:56 ` Jerin Jacob
2017-01-13 10:36 ` Hemant Agrawal
