From: Ashok Padhy
To: "Dumitrescu, Cristian"
Cc: "users@dpdk.org"
Date: Wed, 27 Apr 2016 09:17:38 -0400
Subject: Re: [dpdk-users] DPDK QOS scheduler priority starvation issue

Hi Cristian,

Thanks for the response. The output rates are: TC1 carries only about 10000 bytes per second, while TC3 carries ~400m. As I said, the high-priority traffic in TC1 is starved out by the low-priority traffic in TC3.

I want to re-emphasize that this issue shows up only when the packet size of the traffic sent in TC3 is small (128-256 bytes or smaller), while the packet size of the traffic sent in TC1 is larger (1024 bytes). I also want to point out that the issue is seen only when the pipe is oversubscribed. Please note that the shaping rate of the pipe is always honored, i.e. we always get around 400m in total, as configured on the pipe.

The degree of starvation depends on how small the TC3 packets are; the moment I increase the size beyond 256 bytes, the issue goes away. Please note I only test with packet sizes of 64, 128, 256, 1024 bytes, etc. I have tried with 4 active pipes and still see the issue; I haven't tried more than that.
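For reference, here is a minimal sketch of how the setup described above might be expressed with the rte_sched configuration API of that era (rates in bytes/sec, tc_period in ms). Everything not stated in the thread -- tb_size, qsize, the port rate, the rates of the unused TCs, and the mapping of "TC1"/"TC3" onto tc_rate[] indices -- is an assumption for illustration only, and qos_setup() is a hypothetical helper:

#include <rte_sched.h>

/* One pipe profile: pipe shaped to 400m, "TC1" capped at 40m,
 * "TC3" allowed up to the full pipe rate, tc_period = 10 ms. */
static struct rte_sched_pipe_params pipe_profiles[] = {
	{
		.tb_rate   = 400000000,   /* pipe rate, bytes/sec */
		.tb_size   = 1000000,     /* assumed bucket size */
		.tc_rate   = {
			400000000,        /* TC index 0 (unused, assumed) */
			40000000,         /* "TC1": 40m configured */
			400000000,        /* TC index 2 (unused, assumed) */
			400000000,        /* "TC3": 400m configured */
		},
		.tc_period = 10,          /* 10 ms, as stated in the thread */
		.wrr_weights = { 1, 1, 1, 1,  1, 1, 1, 1,
				 1, 1, 1, 1,  1, 1, 1, 1 },
	},
};

static struct rte_sched_subport_params subport_params = {
	.tb_rate   = 400000000,
	.tb_size   = 1000000,
	.tc_rate   = { 400000000, 400000000, 400000000, 400000000 },
	.tc_period = 10,
};

static struct rte_sched_port_params port_params = {
	.name                = "qos_test",
	.socket              = 0,
	.rate                = 1250000000, /* e.g. 10 GbE in bytes/sec (assumed) */
	.mtu                 = 1522,
	.frame_overhead      = RTE_SCHED_FRAME_OVERHEAD_DEFAULT,
	.n_subports_per_port = 1,
	.n_pipes_per_subport = 4096,
	.qsize               = { 64, 64, 64, 64 },
	.pipe_profiles       = pipe_profiles,
	.n_pipe_profiles     = 1,
};

/* Single subport and a single active pipe, as in the reported scenario. */
static struct rte_sched_port *
qos_setup(void)
{
	struct rte_sched_port *port = rte_sched_port_config(&port_params);

	if (port == NULL)
		return NULL;
	if (rte_sched_subport_config(port, 0, &subport_params) != 0)
		return NULL;
	if (rte_sched_pipe_config(port, 0, 0, 0) != 0) /* subport 0, pipe 0, profile 0 */
		return NULL;
	return port;
}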
Thanks,
Ashok

On Wed, Apr 27, 2016 at 6:34 AM, Dumitrescu, Cristian <cristian.dumitrescu@intel.com> wrote:

> Hi Ashok,
>
> I am not sure I understand what the issue is, as you do not provide the
> output rates. You mention the pipe is configured with 400m (assuming 400
> million credits), with pipe TC1 configured with 40m and TC3 with 400m,
> while the input traffic is 100m for pipe TC1 and 500m for pipe TC3; to me,
> the output (i.e. scheduled and TX-ed) traffic should be close to 40m (40
> million bytes, including Ethernet framing overhead of 20 bytes per frame)
> for pipe TC1 and close to 360m for pipe TC3.
>
> Once a pipe is selected for scheduling, we only read the pipe and pipe TC
> credits once (when the pipe is selected, which is also the moment the pipe
> and pipe TC credits are updated), so we do not re-evaluate pipe credits
> again until the next time the pipe is selected.
>
> The hierarchical scheduler is only accurate when many (hundreds or
> thousands of) pipes are active; it looks like you are only using a single
> pipe for your test, so please retry with more pipes active.
>
> Regards,
> Cristian
>
>
> Hello Cristian,
>
> We are running into an issue with our DPDK scheduler: we notice that
> lower-priority traffic (TC3) starves higher-priority traffic if the packet
> size of the lower-priority traffic is smaller than the packet size of the
> higher-priority traffic.
>
> If the packet size of the lower-priority traffic (TC3) is the same as or
> larger than that of the higher-priority traffic (TC1 or TC2), we don't see
> the problem.
>
> Using q-index within the TC:
>
> - Q0 (TC1), 1024-byte packets, 40m configured and 100m sent
> - Q2 (TC3), 128/256-byte packets, 400m configured and 500m sent
> - Only one pipe active (configured for 400m)
> - Only one subport configured
> - TC period is set to 10 msecs
>
> In this scenario TC3 carries most of the traffic (400m).
>
> We are using an older version of DPDK; is this something addressed in the
> later releases?
>
> Appreciate any hint,
>
> thanks
> Ashok
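For completeness, a sketch of the classification and enqueue/dequeue path for the scenario quoted above: Q0 of "TC1" carrying 1024-byte packets and Q2 of "TC3" carrying the small packets, on subport 0 / pipe 0. The function signatures are the older rte_sched ones (pre-19.x, without the port argument to rte_sched_port_pkt_write); the TC index mapping and the length-based classification are assumptions that simply mirror the test traffic, not the actual application code.

#include <rte_mbuf.h>
#include <rte_sched.h>

#define SCHED_BURST 32

static int
classify_and_schedule(struct rte_sched_port *port,
		      struct rte_mbuf **rx, uint32_t n_rx,
		      struct rte_mbuf **tx)
{
	uint32_t i;

	for (i = 0; i < n_rx; i++) {
		/* High-priority flow -> "TC1", queue 0;
		 * low-priority flow -> "TC3", queue 2.
		 * Real code would classify on packet fields. */
		if (rte_pktmbuf_pkt_len(rx[i]) >= 1024)
			rte_sched_port_pkt_write(rx[i], 0, 0, 1, 0,
						 e_RTE_METER_GREEN);
		else
			rte_sched_port_pkt_write(rx[i], 0, 0, 3, 2,
						 e_RTE_METER_GREEN);
	}

	/* Packets that do not fit in their queue are dropped by enqueue. */
	rte_sched_port_enqueue(port, rx, n_rx);

	/* Dequeue honors TC priority subject to the pipe/TC credits
	 * discussed in the thread; returns the number of packets in tx[]. */
	return rte_sched_port_dequeue(port, tx, SCHED_BURST);
}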