From: "Dumitrescu, Cristian"
To: Wei Shen, "dev@dpdk.org"
Date: Wed, 9 Sep 2015 19:54:12 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D89126478B9630@IRSMSX108.ger.corp.intel.com>
Subject: Re: [dpdk-dev] Order of system brought up affects throughput with qos_sched app

Hi Wei,

Here is another hypothesis for you to consider: if the size of your token buckets (used to store subport and pipe credits) is big (and it is in fact set big in the default config file of the app), then when no packets are received for a long while (which is the case when you start the app first and the traffic gen later), the token buckets keep being replenished, with nothing consumed, until they become full. When packets then start to arrive, the buckets are full, and it can take a long time (minutes or even hours, depending on how big your buckets are) until they come back down to their normal values. This time can actually be computed/estimated (a rough sketch of such an estimate is included at the end of this mail, below your quoted configuration).

If this is what happens in your case, lowering the size of your buckets will help.

Regards,
Cristian

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Shen
> Sent: Wednesday, September 9, 2015 9:39 PM
> To: dev@dpdk.org
> Subject: [dpdk-dev] Order of system brought up affects throughput with qos_sched app
>
> Hi all,
>
> I ran into a problem with qos_sched that depends on the order in which the system is brought up. I can bring up the system in two ways:
> 1. Start the traffic gen first, then start qos_sched.
> 2. Start qos_sched first, then start the traffic gen.
>
> With 256K pipes, a queue size of 64 and 128B packets, I got ~4 Gbps with order #1, while I got 10 Gbps with order #2.
> The qos_sched command stats showed that ~59% of packets got dropped in RX (rte_ring_enqueue).
> Also, with #1, if I restart the traffic gen later, I regain the 10 Gbps throughput, which suggests that this is not an initialization issue but runtime behavior.
> I also tried assigning qos_sched to different cores and got the same result.
> I suspect there is an rte_ring bug when two cores are connected and one core starts enqueuing before the other core is ready to dequeue.
> Have you experienced the same issue?
> Appreciate your help.
>
> Wei Shen.
> --------------------------------------------------------------------------------
> My system spec is:
> Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> 15 * 1G hugepages
>
> qos_sched arguments: ./build/app/qos_sched -c 1c0002 -n 4 -- --pfc "0,1,20,18,19" --cfg profile.cfg
>
> profile.cfg:
>
> [port]
> frame overhead = 20
> number of subports per port = 1
> number of pipes per subport = 262144
> queue sizes = 64 64 64 64
>
> ; Subport configuration
> [subport 0]
> tb rate = 1250000000    ; Bytes per second
> tb size = 1000000       ; Bytes
> tc 0 rate = 1250000000  ; Bytes per second
> tc 1 rate = 1250000000  ; Bytes per second
> tc 2 rate = 1250000000  ; Bytes per second
> tc 3 rate = 1250000000  ; Bytes per second
> tc period = 10          ; Milliseconds
>
> pipe 0-262143 = 0       ; These pipes are configured with pipe profile 0
>
> ; Pipe configuration
> [pipe profile 0]
> tb rate = 1250000000    ; Bytes per second
> tb size = 1000000       ; Bytes
> tc 0 rate = 1250000000  ; Bytes per second
> tc 1 rate = 1250000000  ; Bytes per second
> tc 2 rate = 1250000000  ; Bytes per second
> tc 3 rate = 1250000000  ; Bytes per second
> tc period = 10          ; Milliseconds
>
> tc 3 oversubscription weight = 1
>
> tc 0 wrr weights = 1 1 1 1
> tc 1 wrr weights = 1 1 1 1
> tc 2 wrr weights = 1 1 1 1
> tc 3 wrr weights = 1 1 1 1
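
P.S. Here is the rough back-of-envelope sketch of the drain-time estimate mentioned above. This is only a simplified model, not the actual librte_sched credit update code; "tb size" and "tb rate" are taken from your profile.cfg, while the credit consumption rate is a made-up figure that you would replace with the rate at which the subport/pipe actually sends:

/* Simplified token bucket drain-time estimate (standalone C, not DPDK code).
 * A bucket that has filled up to tb_size only sheds its excess credits
 * when they are consumed faster than they are replenished. */
#include <stdio.h>

int main(void)
{
    double tb_size    = 1000000.0;     /* bytes, "tb size" from profile.cfg */
    double tb_rate    = 1250000000.0;  /* bytes/s, "tb rate" from profile.cfg */
    double drain_rate = 1500000000.0;  /* bytes/s, assumed credit consumption rate (hypothetical) */

    if (drain_rate > tb_rate) {
        /* Excess credits disappear at (drain_rate - tb_rate) bytes/s. */
        double seconds = tb_size / (drain_rate - tb_rate);
        printf("approx. time for a full bucket to return to normal: %.3f s\n", seconds);
    } else {
        /* Credits are replenished at least as fast as they are consumed,
         * so a full bucket stays full indefinitely. */
        printf("bucket does not drain at this consumption rate\n");
    }
    return 0;
}

The same arithmetic also shows why lowering "tb size" helps: a smaller bucket accumulates less excess credit while the traffic gen is idle, so the transient after traffic resumes is shorter.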