From: "Dumitrescu, Cristian"
To: Wei Shen, "dev@dpdk.org"
Date: Thu, 10 Sep 2015 12:16:37 +0000
Subject: Re: [dpdk-dev] Order of system brought up affects throughput with qos_sched app

Hi Wei,

You simply need to do the math and create a model for each of your token buckets, considering parameters like: the size of the bucket, the initial number of credits in the bucket, the credit update rate for the bucket, the rate of input packets (in bytes per second) hitting that bucket and consuming credits, etc.

Regards,
Cristian

From: Wei Shen [mailto:wshen0123@outlook.com]
Sent: Thursday, September 10, 2015 1:47 AM
To: Dumitrescu, Cristian; dev@dpdk.org
Subject: RE: [dpdk-dev] Order of system brought up affects throughput with qos_sched app

Hi Cristian,

Thanks for your quick response. I did a quick test of your hypothesis and it came out roughly as you described: throughput went back to ~4 Gbps after around ten minutes with the profile I posted earlier.

In another test, I set the pipe token rate to ~20 Mbps instead of full line rate for each pipe. Although I ran into the same ordering issue, I have not noticed any slowdown yet as of this email (it has been up for an hour or so).

I am sorry, but I still don't understand why. Do you mean the ~10 Gbps throughput seen with order #2 is made possible by the initial accumulation of credits, and later, once the app has run long enough, the old credits run out and throughput gets capped by the credit rate? But in the profile I set everything to line rate, so I don't think that is the bottleneck.

Could you please illustrate it further? I appreciate it. Thank you.
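(For illustration, a minimal sketch of the kind of per-bucket model described above: a single bucket with a size, a refill rate, and an offered input rate. All names and numbers are made up for the example; this is not the librte_sched implementation.)

#include <stdio.h>

struct bucket_model {
	double size;       /* bucket size in bytes (credits) */
	double credits;    /* current credits; starts at the initial fill */
	double fill_rate;  /* credit replenish rate, bytes per second */
	double drain_rate; /* offered traffic hitting the bucket, bytes per second */
};

/* Advance the model by dt seconds; returns the bytes actually passed. */
static double bucket_step(struct bucket_model *b, double dt)
{
	double offered = b->drain_rate * dt;
	double passed;

	b->credits += b->fill_rate * dt;
	if (b->credits > b->size)
		b->credits = b->size;

	passed = offered < b->credits ? offered : b->credits;
	b->credits -= passed;
	return passed;
}

int main(void)
{
	/* Made-up numbers: 1 MB bucket starting full, 20 Mbit/s refill,
	 * 10 Gbit/s of offered traffic. */
	struct bucket_model b = { 1000000.0, 1000000.0, 20e6 / 8.0, 10e9 / 8.0 };
	double t, passed = 0.0;

	for (t = 0.0; t < 1.0; t += 0.001)
		passed += bucket_step(&b, 0.001);

	printf("bytes passed in 1 s: %.0f, credits left: %.0f\n",
	       passed, b.credits);
	return 0;
}

Stepping such a model with the tb size, tb rate, and offered load from the profile quoted below gives a rough feel for how long a full bucket takes to settle to its steady-state credit level.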
> From: cristian.dumitrescu@intel.com
> To: wshen0123@outlook.com; dev@dpdk.org
> Subject: RE: [dpdk-dev] Order of system brought up affects throughput with qos_sched app
> Date: Wed, 9 Sep 2015 19:54:12 +0000
>
> Hi Wei,
>
> Here is another hypothesis for you to consider: if the size of your token buckets (used to store subport and pipe credits) is big (and it is indeed set big in the default config file of the app), then when no packets are received for a long while (which is the case when you start the app first and the traffic gen later), the token buckets are continuously replenished (with nothing consumed) until they become full; when packets start to arrive, the token buckets are full, and it can take a long time (might be minutes or even hours, depending on how big your buckets are) until they come down to their normal values (this time can actually be computed/estimated).
>
> If this is what happens in your case, lowering the size of your buckets will help.
>
> Regards,
> Cristian
>
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wei Shen
> > Sent: Wednesday, September 9, 2015 9:39 PM
> > To: dev@dpdk.org
> > Subject: [dpdk-dev] Order of system brought up affects throughput with qos_sched app
> >
> > Hi all,
> >
> > I ran into problems with qos_sched depending on the order in which the system is brought up. I can bring up the system in two ways:
> >
> > 1. Start the traffic gen first, then start qos_sched.
> > 2. Start qos_sched first, then start the traffic gen.
> >
> > With 256K pipes, a queue size of 64, and 128B packets, I get ~4 Gbps with order #1, while I get 10 Gbps with order #2.
> >
> > The qos_sched command stats show that ~59% of packets get dropped in RX (rte_ring_enqueue).
> >
> > Plus, with #1, if I restart the traffic gen later, I regain 10 Gbps throughput, which suggests that this is not an initialization issue but runtime behavior.
> >
> > I also tried assigning qos_sched to different cores and got the same result.
> >
> > I suspect there is some rte_ring bug when connecting two cores and one core starts enqueuing before the other core is ready to dequeue.
> >
> > Have you experienced the same issue? I appreciate your help.
> >
> > Wei Shen.
> > -----------------------------------------------------------------------
> > My system spec is:
> > Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
> > 15 * 1G hugepages
> >
> > qos_sched argument: ./build/app/qos_sched -c 1c0002 -n 4 -- --pfc "0,1,20,18,19" --cfg profile.cfg
> >
> > profile.cfg:
> > [port]
> > frame overhead = 20
> > number of subports per port = 1
> > number of pipes per subport = 262144
> > queue sizes = 64 64 64 64
> >
> > ; Subport configuration
> > [subport 0]
> > tb rate = 1250000000 ; Bytes per second
> > tb size = 1000000 ; Bytes
> > tc 0 rate = 1250000000 ; Bytes per second
> > tc 1 rate = 1250000000 ; Bytes per second
> > tc 2 rate = 1250000000 ; Bytes per second
> > tc 3 rate = 1250000000 ; Bytes per second
> > tc period = 10 ; Milliseconds
> >
> > pipe 0-262143 = 0 ; These pipes are configured with pipe profile 0
> >
> > ; Pipe configuration
> > [pipe profile 0]
> > tb rate = 1250000000 ; Bytes per second
> > tb size = 1000000 ; Bytes
> > tc 0 rate = 1250000000 ; Bytes per second
> > tc 1 rate = 1250000000 ; Bytes per second
> > tc 2 rate = 1250000000 ; Bytes per second
> > tc 3 rate = 1250000000 ; Bytes per second
> > tc period = 10 ; Milliseconds
> >
> > tc 3 oversubscription weight = 1
> >
> > tc 0 wrr weights = 1 1 1 1
> > tc 1 wrr weights = 1 1 1 1
> > tc 2 wrr weights = 1 1 1 1
> > tc 3 wrr weights = 1 1 1 1
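(The "this time can actually be computed/estimated" remark in the quoted message boils down to dividing the surplus credits by the net drain rate. A rough sketch with made-up numbers, not values taken from the profile above:)

/* Rough estimate only: seconds for a bucket holding 'excess_bytes' of
 * surplus credits to drain while traffic consumes credits faster than
 * they are refilled. Only meaningful when consume_bps > fill_bps. */
static double drain_time_sec(double excess_bytes, double consume_bps,
			     double fill_bps)
{
	return excess_bytes / (consume_bps - fill_bps);
}

/* e.g. drain_time_sec(1e6, 1.25e6, 1.249e6) = 1000 s (~17 minutes):
 * when the refill rate is only slightly below the consume rate, even a
 * 1 MB surplus takes a long time to clear, which is where the
 * "minutes or even hours" in the quoted message comes from. */

Lowering tb size in [subport 0] and [pipe profile 0], as suggested in the quoted message, directly reduces how much surplus can accumulate while no traffic is arriving.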