DPDK usage discussions
From: Alex Kiselev <alex@therouter.net>
To: users@dpdk.org
Cc: cristian.dumitrescu@intel.com
Subject: Re: [dpdk-users] scheduler issue
Date: Wed, 25 Nov 2020 18:04:48 +0300	[thread overview]
Message-ID: <d113d5ffe6ec08fa2cdeb922a40b2d03@therouter.net> (raw)
In-Reply-To: <a7cf2607179902cd462652f3e95f45f3@therouter.net>

On 2020-11-24 16:34, Alex Kiselev wrote:
> Hello,
> 
> I am facing a problem with the scheduler library in DPDK 18.11.10 with
> default scheduler settings (RED is off).
> It seems like some of the pipes (last time it was 4 out of 600 pipes)
> start incorrectly dropping most of the traffic after a couple of days
> of successful work.
> 
> So far I've checked that there are no mbuf leaks or any
> other errors in my code and I am sure that traffic enters problematic 
> pipes.
> Also, switching traffic at runtime to pipes of another port
> restores the traffic flow.
> 
> How do I approach debugging this issue?
> 
> I've tried using rte_sched_queue_read_stats(), but it doesn't give
> me counters that accumulate values (packet drops, for example);
> it gives me some kind of current values that are reset to zero after
> a couple of seconds, so I can conclude nothing based on that API.
> 
> I would appreciate any ideas and help.
> Thanks.

Problematic pipes had a very low bandwidth limit (1 Mbit/s), and there is
also an oversubscription configuration at subport 0 of port 13, to which
those pipes belong, while CONFIG_RTE_SCHED_SUBPORT_TC_OV is disabled.

Could congestion at that subport be the cause of the problem?

How much overhead and performance degradation would enabling the
CONFIG_RTE_SCHED_SUBPORT_TC_OV feature add?
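For reference, in DPDK 18.11 the traffic-class oversubscription support is a compile-time option, so enabling it means rebuilding DPDK itself. Assuming the stock make-based build system, the switch lives in config/common_base:

```
# config/common_base (DPDK 18.11); default is n, rebuild DPDK after changing
CONFIG_RTE_SCHED_SUBPORT_TC_OV=y
```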

Configuration:

   #
   # QoS Scheduler Profiles
   #
   hqos add profile  1 rate    8 K size 1000000 tc period 40
   hqos add profile  2 rate  400 K size 1000000 tc period 40
   hqos add profile  3 rate  600 K size 1000000 tc period 40
   hqos add profile  4 rate  800 K size 1000000 tc period 40
   hqos add profile  5 rate    1 M size 1000000 tc period 40
   hqos add profile  6 rate 1500 K size 1000000 tc period 40
   hqos add profile  7 rate    2 M size 1000000 tc period 40
   hqos add profile  8 rate    3 M size 1000000 tc period 40
   hqos add profile  9 rate    4 M size 1000000 tc period 40
   hqos add profile 10 rate    5 M size 1000000 tc period 40
   hqos add profile 11 rate    6 M size 1000000 tc period 40
   hqos add profile 12 rate    8 M size 1000000 tc period 40
   hqos add profile 13 rate   10 M size 1000000 tc period 40
   hqos add profile 14 rate   12 M size 1000000 tc period 40
   hqos add profile 15 rate   15 M size 1000000 tc period 40
   hqos add profile 16 rate   16 M size 1000000 tc period 40
   hqos add profile 17 rate   20 M size 1000000 tc period 40
   hqos add profile 18 rate   30 M size 1000000 tc period 40
   hqos add profile 19 rate   32 M size 1000000 tc period 40
   hqos add profile 20 rate   40 M size 1000000 tc period 40
   hqos add profile 21 rate   50 M size 1000000 tc period 40
   hqos add profile 22 rate   60 M size 1000000 tc period 40
   hqos add profile 23 rate  100 M size 1000000 tc period 40
   hqos add profile 24 rate   25 M size 1000000 tc period 40
   hqos add profile 25 rate   50 M size 1000000 tc period 40

   #
   # Port 13
   #
   hqos add port 13 rate 40 G mtu 1522 frame overhead 24 queue sizes 64 64 64 64
   hqos add port 13 subport 0 rate 1500 M size 1000000 tc period 10
   hqos add port 13 subport 0 pipes 3000 profile 2
   hqos add port 13 subport 0 pipes 3000 profile 5
   hqos add port 13 subport 0 pipes 3000 profile 6
   hqos add port 13 subport 0 pipes 3000 profile 7
   hqos add port 13 subport 0 pipes 3000 profile 9
   hqos add port 13 subport 0 pipes 3000 profile 11
   hqos set port 13 lcore 5


Thread overview: 29+ messages
2020-11-24 13:34 Alex Kiselev
2020-11-25 15:04 ` Alex Kiselev [this message]
2020-11-27 12:11   ` Alex Kiselev
2020-12-07 10:00     ` Singh, Jasvinder
2020-12-07 10:46       ` Alex Kiselev
2020-12-07 11:32         ` Singh, Jasvinder
2020-12-07 12:29           ` Alex Kiselev
2020-12-07 16:49           ` Alex Kiselev
2020-12-07 17:31             ` Singh, Jasvinder
2020-12-07 17:45               ` Alex Kiselev
     [not found]                 ` <49019BC8-DDA6-4B39-B395-2A68E91AB424@intel.com>
     [not found]                   ` <226b13286c876e69ad40a65858131b66@therouter.net>
     [not found]                     ` <4536a02973015dc8049834635f145a19@therouter.net>
     [not found]                       ` <f9a27b6493ae1e1e2850a3b459ab9d33@therouter.net>
     [not found]                         ` <B8241A33-0927-4411-A340-9DD0BEE07968@intel.com>
     [not found]                           ` <e6a0429dc4a1a33861a066e3401e85b6@therouter.net>
2020-12-07 22:16                             ` Alex Kiselev
2020-12-07 22:32                               ` Singh, Jasvinder
2020-12-08 10:52                                 ` Alex Kiselev
2020-12-08 13:24                                   ` Singh, Jasvinder
2020-12-09 13:41                                     ` Alex Kiselev
2020-12-10 10:29                                       ` Singh, Jasvinder
2020-12-11 21:29                                     ` Alex Kiselev
2020-12-11 22:06                                       ` Singh, Jasvinder
2020-12-11 22:27                                         ` Alex Kiselev
2020-12-11 22:36                                           ` Alex Kiselev
2020-12-11 22:55                                           ` Singh, Jasvinder
2020-12-11 23:36                                             ` Alex Kiselev
2020-12-12  0:20                                               ` Singh, Jasvinder
2020-12-12  0:45                                                 ` Alex Kiselev
2020-12-12  0:54                                                   ` Alex Kiselev
2020-12-12  1:45                                                     ` Alex Kiselev
2020-12-12 10:22                                                       ` Singh, Jasvinder
2020-12-12 10:46                                                         ` Alex Kiselev
2020-12-12 17:19                                                           ` Alex Kiselev
