From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Dumitrescu, Cristian"
To: "dev@dpdk.org"
Date: Sat, 21 Sep 2013 11:58:18 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D891261A5AB35B@IRSMSX102.ger.corp.intel.com>
References: <201e7096.7e64.1410bed164b.Coremail.mydpdk@126.com> <69D4F1CAB263EA44A83BF67D2ECDA25519C41424@IRSMSX103.ger.corp.intel.com>
In-Reply-To: <69D4F1CAB263EA44A83BF67D2ECDA25519C41424@IRSMSX103.ger.corp.intel.com>
Subject: Re: [dpdk-dev] DPDK QoS performance issue in DPDK 1.4.1.
Hi Jim,

When we designed the Intel DPDK QoS hierarchical scheduler, we targeted thousands of pipes per output port rather than just a couple of them, so this configuration is not exactly relevant for a performance discussion. We could take a look at your proposed configuration, but when looking at performance, I would definitely recommend enabling thousands of pipes per output port, which is closer to a realistic traffic management scenario in a telecom environment with thousands of users/subscribers, multiple traffic classes, etc.

Just in case you haven't seen it, there is a comprehensive chapter on "Quality of Service (QoS) Framework" in the Intel DPDK Programmer's Guide. Section 18.2.1 has a statement on this: "The hierarchical scheduler is optimized for a large number of packet queues. When only a small number of queues are needed, message passing queues should be used instead of this block."

Your profile.cfg is very similar to the default profile.cfg from the qos_sched example application, with a few changes. Besides setting "number of pipes per subport" to 2 instead of 4096, you also changed the subport and pipe "tc period" parameters from 10 ms and 40 ms to 10,000x those values; I am not sure what your intention is there.

Looking at the packet loss: is it 30 packets in total or 30 packets per second? In general, packet loss can happen for many reasons, and it is very difficult to point to a specific cause without more details on your setup. The qos_sched application provides a number of statistics counters tracking packets along the way; are you able to see exactly where those packets get dropped?
I would suggest trying to find which block drops the packets and then looking deeper into that block to understand why.

Just to double check: what is the format of your input packets? The qos_sched application requires Ethernet frames with Q-in-Q (i.e. 2x VLAN tags) and IPv4 packets. If the input traffic does not match the format expected by the application, the application will not be able to work correctly. The format of the input packets is described in the "QoS Scheduler Sample Application" chapter of the Intel DPDK Sample Applications Guide: the VLAN ID of the 1st VLAN label specifies the subport ID (to be used by the traffic manager for the current packet), and the VLAN ID of the 2nd VLAN label specifies the pipe ID within the subport; assuming IP destination = A.B.C.D, the 2 least significant bits of byte C specify the Traffic Class of the packet, while the 2 least significant bits of byte D specify the queue ID within the Traffic Class.

Hope this helps!

Regards,
Cristian

-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jim Jia
Sent: Wednesday, September 11, 2013 8:30 AM
To: dev@dpdk.org
Subject: [dpdk-dev] DPDK QoS performance issue in DPDK 1.4.1.

Hello, everyone!

I have a question about the performance of DPDK's QoS library. These days I have been testing it using the DPDK example application qos_sched. Before running the test, I modified profile.cfg. In my opinion, no packets should be dropped by qos_sched when it is processing about 3 Gbps of 64-byte packets. However, I find that qos_sched drops a few packets (about 30) in that case. Is this a normal phenomenon, or did I do something wrong? I am surprised by the results. I need help. Thanks a lot!
My profile.cfg is as follows:

; Port configuration
[port]
frame overhead = 24
number of subports per port = 1
number of pipes per subport = 2
queue sizes = 64 64 64 64

; Subport configuration
[subport 0]
tb rate = 1250000000        ; Bytes per second
tb size = 1250000000        ; Bytes
tc 0 rate = 1250000000      ; Bytes per second
tc 1 rate = 1250000000      ; Bytes per second
tc 2 rate = 1250000000      ; Bytes per second
tc 3 rate = 1250000000      ; Bytes per second
tc period = 100000          ; Milliseconds
pipe 0-1 = 0                ; These pipes are configured with pipe profile 0

; Pipe configuration
[pipe profile 0]
tb rate = 1250000000        ; Bytes per second
tb size = 1250000000        ; Bytes
tc 0 rate = 1250000000      ; Bytes per second
tc 1 rate = 1250000000      ; Bytes per second
tc 2 rate = 1250000000      ; Bytes per second
tc 3 rate = 1250000000      ; Bytes per second
tc period = 400000          ; Milliseconds
tc 3 oversubscription weight = 1
tc 0 wrr weights = 1 1 1 1
tc 1 wrr weights = 1 1 1 1
tc 2 wrr weights = 1 1 1 1
tc 3 wrr weights = 1 1 1 1

; RED params per traffic class and color (Green / Yellow / Red)
[red]
tc 0 wred min = 48 40 32
tc 0 wred max = 64 64 64
tc 0 wred inv prob = 10 10 10
tc 0 wred weight = 9 9 9
tc 1 wred min = 48 40 32
tc 1 wred max = 64 64 64
tc 1 wred inv prob = 10 10 10
tc 1 wred weight = 9 9 9
tc 2 wred min = 48 40 32
tc 2 wred max = 64 64 64
tc 2 wred inv prob = 10 10 10
tc 2 wred weight = 9 9 9
tc 3 wred min = 48 40 32
tc 3 wred max = 64 64 64
tc 3 wred inv prob = 10 10 10
tc 3 wred weight = 9 9 9

Jim Jia

--------------------------------------------------------------
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

This e-mail and any attachments may contain confidential material for the sole use of the intended recipient(s).
Any review or distribution by others is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies.