From: satish amara <satishkamara@gmail.com>
Date: Wed, 6 Apr 2022 14:06:28 -0400
Subject: Re: Fwd: QOS sample example.
To: "Singh, Jasvinder" <jasvinder.singh@intel.com>
Cc: Thomas Monjalon <thomas@monjalon.net>, users@dpdk.org, "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>

Thank you. I have a question about how the active traffic class is selected in a pipe.
Let's say I have configured only one subport and one pipe on the interface.
If the highest-priority traffic class in a pipe has exhausted its rate limit, can a lower-priority traffic class in the same pipe be dequeued?
Can I have a profile for the pipe where the bandwidth of the pipe is shared among multiple TCs?
Here is an example of how I want to configure the pipe so that the bandwidth allocated to the pipe is shared among 13 traffic classes, giving priority to the highest-priority queue provided it has not exceeded its rate limit:
pipe_profile 0 {
    tb_rate   1300000   /* Pipe level token bucket rate (bytes per second) */
    tb_size   1000000   /* Pipe level token bucket size (bytes) */
    tc0_rate  100000    /* Pipe level token bucket rate for traffic class 0 (bytes per second) */
    tc1_rate  100000    /* Pipe level token bucket rate for traffic class 1 (bytes per second) */
    tc2_rate  100000    /* Pipe level token bucket rate for traffic class 2 (bytes per second) */
    tc3_rate  100000    /* Pipe level token bucket rate for traffic class 3 (bytes per second) */
    .....
    tc13_rate 100000
    tc_period 40        /* Time interval for traffic class credit updates */
}

The scheduling decision to send next packet from (subport S, pipe P, traffic class TC, queue Q) is favorable (packet is sent) when all the conditions below are met:

- Pipe P of subport S is currently selected by one of the port grinders;
- Traffic class TC is the highest priority active traffic class of pipe P;
- Queue Q is the next queue selected by WRR within traffic class TC of pipe P;
- Subport S has enough credits to send the packet;
- Subport S has enough credits for traffic class TC to send the packet;
- Pipe P has enough credits to send the packet;
- Pipe P has enough credits for traffic class TC to send the packet.

If all the above conditions are met, then the packet is selected for transmission and the necessary credits are subtracted from subport S, subport S traffic class TC, pipe P, and pipe P traffic class TC.
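I believe the profile above would map roughly onto the rte_sched pipe parameters below. This is only a sketch of my intent, assuming a recent DPDK release where a pipe has 13 traffic classes (TC0-TC12, the last being the best-effort class with 4 WRR queues); the exact member list and field widths in rte_sched.h vary between releases, so please correct me if this is off.

    #include <stdint.h>
    #include <rte_sched.h>

    /* Sketch of the intended profile above, not taken from the sample app.
     * Rates and sizes are in bytes/s and bytes, tc_period in milliseconds. */
    static struct rte_sched_pipe_params pipe_profile_0 = {
        .tb_rate = 1300000,            /* pipe token bucket rate              */
        .tb_size = 1000000,            /* pipe token bucket size              */
        .tc_rate = {                   /* per traffic class rate limit        */
            100000, 100000, 100000, 100000, 100000, 100000, 100000,
            100000, 100000, 100000, 100000, 100000, 100000,
        },
        .tc_period = 40,               /* traffic class credit refill period  */
        .wrr_weights = {1, 1, 1, 1},   /* best-effort traffic class queues    */
    };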
On Wed, Apr 6, 2022 at 12:34 PM Singh, Jasvinder <jasvinder.singh@intel.com> wrote:

> Yes, it is fixed. The tc_credits_per_period is updated after tc_period
> duration. Note that tc credits don't get accumulated if the tc queue is
> visited after multiple tc_periods, due to the rate limiting mechanism at
> the traffic class level.
>
> *From:* satish amara <satishkamara@gmail.com>
> *Sent:* Wednesday, April 6, 2022 4:32 PM
> *To:* Singh, Jasvinder <jasvinder.singh@intel.com>
> *Cc:* Thomas Monjalon <thomas@monjalon.net>; users@dpdk.org; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> *Subject:* Re: Fwd: QOS sample example.
>
> Jasvinder,
>
> I have a few more questions. Can you provide some clarity on
> tc_credits_per_period?
>
> tc_period is how often the credits for a traffic class need to be updated.
> Is tc_credits_per_period fixed based on tc_rate?
>
> Regards,
> Satish Amara
>
> On Fri, Apr 1, 2022 at 9:34 AM satish amara <satishkamara@gmail.com> wrote:
>
> Thanks for the info, Jasvinder. I see there is an internal timer to determine
> when to refill the token buckets and credits. I have read the QOS document.
> My understanding is that the DPDK code uses the same HQoS thread CPU context
> to implement the timer functionality during pipe selection, and does not
> rely on Linux timers or other timers.
>
> Regards,
> Satish Amara
>
> On Fri, Apr 1, 2022 at 4:36 AM Singh, Jasvinder <jasvinder.singh@intel.com> wrote:
>
> Hi Satish,
>
> The DPDK HQoS scheduler has an internal timer to compute the credits. The
> time difference between two consecutive visits to the same pipe is used to
> compute the number of tb_periods elapsed, and based on that, the available
> credits in the token bucket are computed. Each pipe has its own context
> which stores the timestamp of the last visit, and it is used when the pipe
> is visited to schedule the packets from its queues.
>
> Thanks,
> Jasvinder
>
> *From:* satish amara <satishkamara@gmail.com>
> *Sent:* Thursday, March 31, 2022 9:27 PM
> *To:* Thomas Monjalon <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> *Subject:* Re: Fwd: QOS sample example.
>
> Thanks, Thomas, for forwarding this to the group.
>
> I have one more question. Does DPDK QOS use any internal threads/timers for
> the token bucket implementation? The token buckets can be implemented in
> different ways. When are the tokens filled? I see there is tb_period.
> It looks like the tokens are filled when the HQOS thread is trying to find
> the next active pipe?
>
> Regards,
> Satish Amara
>
> On Thu, Mar 31, 2022 at 3:39 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> +Cc QoS scheduler maintainers (see file MAINTAINERS)
>
> 31/03/2022 18:59, satish amara:
> > Hi,
> >     I am trying to understand the QOS sample scheduler application code.
> > Trying to understand what tc_period is in the config.
> > 30. QoS Scheduler Sample Application - Data Plane Development Kit 21.05.0
> > documentation (dpdk.org)
> > <https://doc.dpdk.org/guides-21.05/sample_app_ug/qos_scheduler.html>
> > Is tc_period the same as tb_period?
> > tb_period Bytes Time period that should elapse since the last credit update
> > in order for the bucket to be awarded tb_credits_per_period worth of
> > credits.
> > Regards,
> > Satish Amara
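To summarize my understanding of the credit mechanism discussed above, here is a small self-contained sketch; the structures and names below are made up for illustration and are not the actual DPDK internals. The pipe token bucket accumulates credits (capped at tb_size) based on how much time has elapsed since the grinder last visited the pipe, while the traffic class credits are simply reset to tc_credits_per_period once tc_period has elapsed, which is why they do not accumulate. With the profile above, tc_credits_per_period for one traffic class would be roughly tc_rate * tc_period / 1000 = 100000 * 40 / 1000 = 4000 bytes per period (tc_period taken in milliseconds).

    #include <stdint.h>

    /* Hypothetical per-pipe profile and state, for illustration only --
     * the real structures live inside the DPDK sched library. */
    #define N_TC 13

    struct pipe_profile {
        uint64_t tb_period;                   /* pipe bucket refill period        */
        uint64_t tb_credits_per_period;       /* bytes added per tb_period        */
        uint64_t tb_size;                     /* bucket cap, bytes                */
        uint64_t tc_period;                   /* traffic class refill interval    */
        uint64_t tc_credits_per_period[N_TC]; /* ~ tc_rate * tc_period            */
    };

    struct pipe_ctx {
        uint64_t tb_time;                     /* time of last bucket update       */
        uint64_t tb_credits;                  /* current pipe credits, bytes      */
        uint64_t tc_time;                     /* time of next TC credit refill    */
        uint64_t tc_credits[N_TC];
    };

    /* Called when the grinder visits the pipe: no Linux timer is involved,
     * the time elapsed since the last visit drives the refill. */
    static void
    pipe_credits_update(struct pipe_ctx *p, const struct pipe_profile *pf,
                        uint64_t now)
    {
        /* Pipe token bucket: accumulate whole periods, cap at tb_size. */
        uint64_t n_periods = (now - p->tb_time) / pf->tb_period;

        p->tb_credits += n_periods * pf->tb_credits_per_period;
        if (p->tb_credits > pf->tb_size)
            p->tb_credits = pf->tb_size;
        p->tb_time += n_periods * pf->tb_period;

        /* Traffic class credits: reset (not accumulated) once tc_period
         * has elapsed -- this enforces the per traffic class rate limit. */
        if (now >= p->tc_time) {
            for (int i = 0; i < N_TC; i++)
                p->tc_credits[i] = pf->tc_credits_per_period[i];
            p->tc_time = now + pf->tc_period;
        }
    }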