Date: Thu, 11 Dec 2025 10:21:10 +0530
From: nagurvalisayyad <nagurvalisayyad@bel.co.in>
To: users@dpdk.org
Subject: DPDK QoS scheduler TCs' misbehaviour within a pipe

Hi,

We are using dpdk-stable-22.11.6 in our project.

We are facing an issue in our DPDK QoS scheduler example application: lower-priority traffic (TC2) starves higher-priority traffic when the lower-priority packets are smaller than the higher-priority packets. If the lower-priority (TC2) packets are the same size as or larger than the higher-priority (TC0 or TC1) packets, we don't see the problem.

Setup (queues identified by q-index within the TC), with pipe 0 rate of 20 Mbps:

- Q0 (TC0), 1500-byte packets, 5 Mbps configured
- Q1 (TC2), 1400-byte packets, 20 Mbps configured
- Two pipes are configured in total; traffic is mapped to pipe 0, TC0 and TC2
- Only one subport is configured
- TC period is set to 50 ms (to support lower rates of around 256 Kbps)
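For reference, an illustrative pipe-profile setup matching the rates above might look like the following (field names as in rte_sched.h; the concrete values and the omitted port/subport setup are assumptions for illustration, not our exact code):

#include <rte_sched.h>

/* Illustrative pipe profile for the scenario above.
 * tb_rate/tc_rate are in bytes per second, tc_period in milliseconds. */
static struct rte_sched_pipe_params pipe_profile = {
        .tb_rate   = 2500000,           /* pipe rate: 20 Mbps */
        .tb_size   = 1000000,           /* token bucket depth, in bytes */
        .tc_rate   = {
                [0] = 625000,           /* TC0: 5 Mbps  */
                [2] = 2500000,          /* TC2: 20 Mbps */
                /* remaining TCs carry nominal rates in the real config */
        },
        .tc_period = 50,                /* ms; chosen to support ~256 Kbps rates */
        .tc_ov_weight = 1,
        .wrr_weights  = { 1, 1, 1, 1 }, /* best-effort TC queue weights */
};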
In this scenario TC2 consumes all of the 20 Mbps bandwidth, whereas, by priority, TC0 should get 5 Mbps and TC2 should get 15 Mbps.

If we send packets of the same size on both queues, TC0 gets 5 Mbps and TC2 gets 15 Mbps, as per priority.

If we stop the TC0 traffic, the unused 5 Mbps from TC0 is picked up by TC2, which then gets the full 20 Mbps (as expected).

To debug further, we found the following in the QoS scheduler documentation, section 57.2.4.6.3, Traffic Shaping:

"
* Full accuracy can be achieved by selecting the value for tb_period for which tb_credits_per_period = 1.
* When full accuracy is not required, better performance is achieved by setting tb_credits to a larger value.
"

In rte_sched.c, rte_sched_pipe_profile_convert() sets tb_credits_per_period to 1 and derives tb_period from the rate accordingly. We have increased both tb_credits_per_period and tb_period by a factor of 10000, so that 10000 credits are added to the token bucket at a time.
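To make the change concrete, this is roughly what we did (a simplified sketch with hypothetical names, not the actual rte_sched.c code):

#include <stdint.h>

/* The configured rate is proportional to tb_credits_per_period / tb_period,
 * so multiplying both by the same factor keeps the long-term rate unchanged
 * while credits are added in larger, less frequent chunks. */
#define TB_SCALE 10000ULL   /* the factor we experimented with */

static void
scale_tb_params(uint64_t *tb_period, uint64_t *tb_credits_per_period)
{
        *tb_period             *= TB_SCALE;   /* refill the bucket less often...  */
        *tb_credits_per_period *= TB_SCALE;   /* ...but with bigger credit chunks */
}

With tb_credits_per_period = 1 the bucket fills one byte at a time; after the scaling it fills in 10000-byte steps, i.e. several 1400-1500 byte packets at once, which presumably explains the reduced accuracy we observe.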
With this change, TC0 and TC2 behave as expected, but the shaping is not as accurate as before, and we are unsure how it will behave for other rates and packet sizes.

Could you please help us choose suitable values for tb_credits_per_period and tb_period so that shaping works well across different traffic rates and packet sizes?

Please help us resolve this issue.

--
Thanks & Regards
Nagurvali sayyad