Date: Wed, 10 Dec 2025 17:14:50 +0530
From: nagurvalisayyad <nagurvalisayyad@bel.co.in>
To: users@dpdk.org
Subject: DPDK QoS scheduler: TC misbehaviour within a pipe
Message-ID: <0816e6ae903d089cebeea389f4c564e6@bel.co.in>

Hi,

We are using dpdk-stable-22.11.6 in our project.

We are facing an issue in our DPDK QoS scheduler example application: lower-priority traffic (TC2) starves higher-priority traffic when the lower-priority packets are smaller than the higher-priority packets. If the lower-priority (TC2) packets are the same size as or larger than the higher-priority (TC0 or TC1) packets, we don't see the problem.

The setup, using the q-index within the TC, is as follows. Pipe 0 size is 20 Mbps:

- Q0 (TC0): 1500-byte packets, 5 Mbps configured
- Q1 (TC2): 1400-byte packets, 20 Mbps configured
- Two pipes are configured in total; traffic is mapped to Pipe 0, TC0 and TC2
- Only one subport is configured
- The TC period is set to 50 ms (to support lower rates of around 256 Kbps)

In this scenario TC2 consumes all of the 20 Mbps bandwidth, but as per priority, TC0 should get 5 Mbps and TC2 should get 15 Mbps.

If we pump packets of the same size on both queues, then TC0 gets 5 Mbps and TC2 gets 15 Mbps, as per priority.

If we stop the TC0 traffic, then the unused 5 Mbps from TC0 is taken by TC2, which then gets 20 Mbps (as expected).

To debug further, we found the following in the QoS scheduler documentation, section 57.2.4.6.3 "Traffic Shaping":

"
* Full accuracy can be achieved by selecting the value for tb_period for which tb_credits_per_period = 1.
* When full accuracy is not required, better performance is achieved by setting tb_credits to a larger value.
"

In rte_sched.c, rte_sched_pipe_profile_convert() sets tb_credits_per_period to 1 and sets tb_period according to the rate.

We have increased tb_credits_per_period and tb_period by 10000 times, so that 10000 credits are added to the token bucket at a time. With this, TC0 and TC2 behave correctly, but the shaping is not as accurate as before.
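To make the discussion concrete, below is a rough, self-contained sketch of the arithmetic as we understand it. The 10 Gbps port rate is only a made-up example value (our real port rate differs), and the credit-update model is our reading of the credit update code in rte_sched.c, where time advances in bytes at the port rate and tb_credits_per_period bytes of credit are added to the pipe bucket every tb_period byte-times:

/*
 * Rough sketch of the pipe token-bucket arithmetic as we understand it.
 * The 10 Gbps port rate below is a made-up example value, not our real
 * configuration; all rates are in bytes per second, as rte_sched uses.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint64_t port_rate = 10000000000ULL / 8; /* assumed 10 Gbps port */
	const uint64_t pipe_rate = 20000000ULL / 8;    /* Pipe 0: 20 Mbps      */

	/*
	 * Our understanding: rte_sched measures time in bytes at the port
	 * rate and adds tb_credits_per_period bytes of credit to the pipe
	 * bucket every tb_period byte-times, so effectively
	 *     pipe_rate ~= port_rate * tb_credits_per_period / tb_period
	 */
	uint64_t tb_credits_per_period = 1;
	uint64_t tb_period = port_rate / pipe_rate;    /* 500 for these numbers */

	printf("full accuracy : +%" PRIu64 " credit every %" PRIu64 " byte-times\n",
	       tb_credits_per_period, tb_period);

	/* Our 10000x change: same average rate, but credits now arrive in
	 * chunks covering several 1400-byte TC2 packets at once. */
	const uint64_t scale = 10000;
	printf("scaled by %" PRIu64 ": +%" PRIu64 " credits every %" PRIu64
	       " byte-times (~%" PRIu64 " x 1400-byte packets per update)\n",
	       scale, tb_credits_per_period * scale, tb_period * scale,
	       tb_credits_per_period * scale / 1400);

	return 0;
}

With tb_credits_per_period = 1 the bucket is topped up one byte at a time, which is what the documentation calls full accuracy; after our change it is topped up in 10000-byte lumps, which is presumably where the loss of accuracy comes from.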
We are unsure how this change will behave for different rates and different packet sizes.

Can you please help us choose optimal values for tb_credits_per_period and tb_period, so that shaping works well for different traffic rates and different packet sizes?

Please help us resolve this issue.

--
Thanks & Regards
Nagurvali Sayyad