Thanks for the info. I have a question: is there any hard limit on how
many subports can be configured on a port? The default seems to be 8
(see the rte_sched_port_params sketch I pasted below the quoted exchange
for where I think this is set).

For reference, the scheduling hierarchy table from the documentation:

Level 1 - Port (siblings per parent: -)
  1. Output Ethernet port 1/10/40 GbE.
  2. Multiple ports are scheduled in round robin order with all ports
     having equal priority.

Level 2 - Subport (siblings per parent: configurable, default 8)
  1. Traffic shaping using the token bucket algorithm (one token bucket
     per subport).
  2. Upper limit enforced per Traffic Class (TC) at the subport level.
  3. Lower priority TCs able to reuse subport bandwidth currently unused
     by higher priority TCs.

Level 3 - Pipe (siblings per parent: configurable, default 4K)
  1. Traffic shaping using the token bucket algorithm (one token bucket
     per pipe).

Level 4 - Traffic Class (TC) (siblings per parent: 13)
  1. TCs of the same pipe handled in strict priority order.
  2. Upper limit enforced per TC at the pipe level.
  3. Lower priority TCs able to reuse pipe bandwidth currently unused by
     higher priority TCs.
  4. When a subport TC is oversubscribed (configuration time event), the
     pipe TC upper limit is capped to a dynamically adjusted value that
     is shared by all the subport pipes.

Level 5 - Queue (siblings per parent: high priority TCs: 1, lowest
priority TC: 4)
  1. All the high priority TCs (TC0, TC1, ..., TC11) have exactly 1
     queue, while the lowest priority TC (TC12), called Best Effort (BE),
     has 4 queues.
  2. Queues of the lowest priority TC (BE) are serviced using Weighted
     Round Robin (WRR) according to predefined weights.

On Thu, Apr 7, 2022 at 1:29 PM Singh, Jasvinder wrote:

> Hi Satish,
>
> I would encourage you to have a look at the library code, especially
> around the dequeue operation, to understand the scheduling behaviour.
>
> Some of the answers are inline;
>
> Thanks,
> Jasvinder
>
> *From:* satish amara
> *Sent:* Wednesday, April 6, 2022 7:06 PM
> *To:* Singh, Jasvinder
> *Cc:* Thomas Monjalon; users@dpdk.org; Dumitrescu, Cristian
> *Subject:* Re: Fwd: QOS sample example.
>
> Thank you. I have a question about how the active Traffic Class is
> selected in a pipe.
>
> Let's say I have configured only one subport and one pipe on an
> interface. If the highest priority traffic class in a pipe has exhausted
> its rate limit, can a lower traffic class in the same pipe be dequeued?
>
> Yes. Once the highest priority TC has consumed its allocated credits,
> and there are credits available at the pipe level, then packets from the
> next priority TC will be scheduled.
>
> Can I have a profile for a pipe where the bandwidth of the pipe is
> shared among multiple TCs?
>
> Here is an example of how I want to configure the pipe, so that the
> bandwidth allocated to the pipe is shared among 13 classes, giving
> priority to the highest queue provided it hasn't exceeded its rate
> limit.
>
> pipe_profile 0 {
>     tb_rate 1300000   /* Pipe level token bucket rate (bytes per second) */
>     tb_size 1000000   /* Pipe level token bucket size (bytes) */
>     tc0_rate 100000   /* Pipe level token bucket rate for traffic class 0 (bytes per second) */
>     tc1_rate 100000   /* Pipe level token bucket rate for traffic class 1 (bytes per second) */
>     tc2_rate 100000   /* Pipe level token bucket rate for traffic class 2 (bytes per second) */
>     tc3_rate 100000   /* Pipe level token bucket rate for traffic class 3 (bytes per second) */
>     .....
>     tc12_rate 100000  /* Pipe level token bucket rate for traffic class 12 (bytes per second) */
>     tc_period 40      /* Time interval for refilling TC credits (milliseconds) */
> }
>
> The DPDK QoS sample app has such a profile defined, if that helps.
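For what it's worth, from my reading of rte_sched.h the subport and pipe
counts (and the per-pipe rates above) look like plain configuration
parameters rather than compile-time limits; the "default: 8" appears to
be just the sample configuration value. A minimal sketch of the relevant
structures follows. The field names are as I recall them from
rte_sched.h and may differ between DPDK versions, and several mandatory
fields (queue sizes, subport/pipe profile tables, etc.) are omitted, so
please treat this as illustrative and check the header and the parameter
checks in rte_sched.c for your release.

#include <rte_sched.h>

/* Pipe profile roughly equivalent to the pipe_profile 0 block quoted
 * above: 13 traffic class rates plus WRR weights for the 4 Best Effort
 * queues of TC12. */
static struct rte_sched_pipe_params pipe_profile_0 = {
    .tb_rate = 1300000,                  /* bytes per second */
    .tb_size = 1000000,                  /* bytes */
    .tc_rate = { 100000, 100000, 100000, 100000, 100000, 100000, 100000,
                 100000, 100000, 100000, 100000, 100000, 100000 },
    .tc_period = 40,                     /* milliseconds */
    .wrr_weights = { 1, 1, 1, 1 },       /* 4 BE queues served by WRR */
};

/* The subport and pipe counts are set here at port configuration time;
 * rte_sched_port_config(&port_params) then builds the scheduler port. */
static struct rte_sched_port_params port_params = {
    .name = "hqos_port_0",
    .socket = 0,
    .rate = 1250000000,                  /* 10 GbE line rate, bytes per second */
    .mtu = 1522,
    .frame_overhead = 24,
    .n_subports_per_port = 8,            /* the "default: 8" from the table */
    .n_pipes_per_subport = 4096,         /* the "default: 4K" from the table */
};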
>
> The scheduling decision to send the next packet from (subport S, pipe P,
> traffic class TC, queue Q) is favorable (the packet is sent) when all of
> the conditions below are met:
>
> - Pipe P of subport S is currently selected by one of the port grinders;
> - Traffic class TC is the highest priority active traffic class of pipe P;
> - Queue Q is the next queue selected by WRR within traffic class TC of
>   pipe P;
> - Subport S has enough credits to send the packet;
> - Subport S has enough credits for traffic class TC to send the packet;
> - Pipe P has enough credits to send the packet;
> - Pipe P has enough credits for traffic class TC to send the packet.
>
> If all the above conditions are met, then the packet is selected for
> transmission and the necessary credits are subtracted from subport S,
> subport S traffic class TC, pipe P, and pipe P traffic class TC.
>
> Yes, have a look at the grinder_credits_check() function in the library
> code.
>
> On Wed, Apr 6, 2022 at 12:34 PM Singh, Jasvinder
> <jasvinder.singh@intel.com> wrote:
>
> Yes, it is fixed. The tc_credits_per_period is updated after the
> tc_period duration. Note that TC credits don't get accumulated if a TC
> queue is visited after multiple tc_periods, due to the rate limiting
> mechanism at the traffic class level.
>
> *From:* satish amara
> *Sent:* Wednesday, April 6, 2022 4:32 PM
> *To:* Singh, Jasvinder
> *Cc:* Thomas Monjalon; users@dpdk.org; Dumitrescu, Cristian
> *Subject:* Re: Fwd: QOS sample example.
>
> Jasvinder,
>
> I have a few more questions. Can you provide some clarity on
> tc_credits_per_period? tc_period is how often the credits for a traffic
> class need to be updated. Is tc_credits_per_period fixed, based on
> tc_rate?
>
> Regards,
> Satish Amara
>
> On Fri, Apr 1, 2022 at 9:34 AM satish amara wrote:
>
> Thanks for the info, Jasvinder. I see there is an internal timer to
> decide when to refill the token buckets and credits. I have read the QoS
> document. My understanding is that the DPDK code uses the same HQoS
> thread's CPU context to implement the timer functionality during pipe
> selection, rather than relying on Linux timers or other timers.
>
> Regards,
> Satish Amara
>
> On Fri, Apr 1, 2022 at 4:36 AM Singh, Jasvinder wrote:
>
> Hi Satish,
>
> The DPDK HQoS scheduler has an internal timer to compute the credits.
> The time difference between two consecutive visits to the same pipe is
> used to compute the number of tb_periods elapsed, and based on that, the
> available credits in the token bucket are computed. Each pipe has its
> own context which stores the timestamp of the last visit, and that
> timestamp is used when the pipe is visited to schedule the packets from
> its queues.
>
> Thanks,
> Jasvinder
>
> *From:* satish amara
> *Sent:* Thursday, March 31, 2022 9:27 PM
> *To:* Thomas Monjalon
> *Cc:* users@dpdk.org; Singh, Jasvinder; Dumitrescu, Cristian
> *Subject:* Re: Fwd: QOS sample example.
>
> Thanks, Thomas, for forwarding this to the group.
>
> I have one more question. Does DPDK QoS use any internal threads/timers
> for the token bucket implementation? Token buckets can be implemented in
> different ways. When are the tokens filled? I see there is tb_period.
> It looks like the tokens are filled when the HQoS thread is trying to
> find the next active pipe?
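Just to check my understanding of the refill behaviour described above,
here is a simplified sketch of what I think happens when a pipe is
visited, loosely modelled on grinder_credits_update() in the library.
The struct and names below are illustrative only, not the actual
rte_sched code:

#include <stdint.h>

#define N_TC 13    /* traffic classes per pipe */

struct pipe_ctx {
    uint64_t tb_time;                   /* time of last token bucket update */
    uint64_t tb_period;
    uint64_t tb_credits_per_period;
    uint64_t tb_size;
    uint64_t tb_credits;

    uint64_t tc_time;                   /* time of next TC credit refill */
    uint64_t tc_period;
    uint64_t tc_credits_per_period[N_TC];
    uint64_t tc_credits[N_TC];
};

static void
pipe_credits_update(struct pipe_ctx *p, uint64_t now)
{
    /* Pipe token bucket: credits accrue with the time elapsed since the
     * last visit, in whole tb_periods, capped at the bucket size. */
    uint64_t n_periods = (now - p->tb_time) / p->tb_period;

    p->tb_credits += n_periods * p->tb_credits_per_period;
    if (p->tb_credits > p->tb_size)
        p->tb_credits = p->tb_size;
    p->tb_time += n_periods * p->tb_period;

    /* Traffic classes: credits are reset to tc_credits_per_period once
     * tc_period has elapsed; they do not accumulate across periods,
     * which matches the rate limiting behaviour mentioned above. */
    if (now >= p->tc_time) {
        for (int i = 0; i < N_TC; i++)
            p->tc_credits[i] = p->tc_credits_per_period[i];
        p->tc_time = now + p->tc_period;
    }
}

In other words, the pipe token bucket accumulates credits up to tb_size,
while the per-TC credits are simply reset every tc_period, which would
explain why they do not accumulate when a queue is visited only after
several periods. Please correct me if I have misread the code.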
>
> Regards,
> Satish Amara
>
> On Thu, Mar 31, 2022 at 3:39 PM Thomas Monjalon wrote:
>
> +Cc QoS scheduler maintainers (see file MAINTAINERS)
>
> 31/03/2022 18:59, satish amara:
>
> Hi,
>
> I am trying to understand the QoS sample scheduler application code, in
> particular what tc_period in the config means.
>
> 30. QoS Scheduler Sample Application — Data Plane Development Kit 21.05.0
> documentation (dpdk.org)
>
> Is tc_period the same as tb_period?
>
> tb_period (unit: bytes): Time period that should elapse since the last
> credit update in order for the bucket to be awarded
> tb_credits_per_period worth of credits.
>
> Regards,
> Satish Amara
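For what it's worth, my reading of the pipe profile conversion in
rte_sched.c (please correct me if I have misread it) is that
tc_credits_per_period is derived from the configured tc_rate and
tc_period roughly as

    tc_credits_per_period = tc_rate * tc_period / 1000
    (tc_period in milliseconds, tc_rate in bytes per second)

so with the example profile above (tc_rate = 100000 bytes/s, tc_period =
40 ms) each traffic class would get 100000 * 40 / 1000 = 4000 bytes of
credit per enforcement period. tb_period, on the other hand, looks like
an internal value that the library derives from tb_rate for the token
bucket refill, so the two are related but not the same parameter.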