From: "Jim Jia" <mydpdk@126.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] DPDK QoS performance issue in DPDK 1.4.1.
Date: Sun, 22 Sep 2013 16:34:38 +0800 (CST) [thread overview]
Message-ID: <34e00c3d.d3cd.14144ce6182.Coremail.mydpdk@126.com> (raw)
In-Reply-To: <3EB4FA525960D640B5BDFFD6A3D891261A5AB35B@IRSMSX102.ger.corp.intel.com>
In my experiment, all the packets are the same, which is quite different from a realistic traffic management scenario. Thank you for your reply; it enlightened me.
At 2013-09-21 19:58:18,"Dumitrescu, Cristian" <cristian.dumitrescu@intel.com> wrote:
>Hi Jim,
>
>When we designed the Intel DPDK QoS hierarchical scheduler, we targeted thousands of pipes per output port rather than just a couple of them, so this configuration is not exactly relevant for a performance discussion. We could take a look at your proposed configuration, but when looking at performance, I would definitely recommend enabling thousands of pipes per output port, which is closer to a realistic traffic management scenario in a telecom environment with thousands of users/subscribers, multiple traffic classes, etc.
>
>Just in case you haven't seen this, there is a comprehensive chapter on "Quality of Service (QoS) Framework" in Intel DPDK Programmer's Guide. Section 18.2.1 has a statement on this: "The hierarchical scheduler is optimized for a large number of packet queues. When only a small number of queues are needed, message passing queues should be used instead of this block."
>
>Looking at your profile.cfg, it is very similar to the default profile.cfg from the qos_sched example application, with a few changes. Besides setting "number of pipes per subport" to 2 instead of 4096, you also changed the subport and pipe "tc period" parameters from 10 ms and 40 ms to values 10,000x larger; I am not sure what your intention is here.
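[Editorial note: for comparison with the 10,000x observation above, the millisecond-scale defaults Cristian refers to would look like this; shown here only as a reference fragment, not as a complete profile.cfg.]

```
; default qos_sched values (for comparison)
[subport 0]
tc period = 10       ; Milliseconds

[pipe profile 0]
tc period = 40       ; Milliseconds
```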
>
>Looking at the packet loss: is it 30 packets in total or 30 packets/second? In general, packet loss can happen for many reasons; it is very difficult to point to a specific cause without more details on your setup. The qos_sched application provides a number of statistics counters tracking packets along the way; can you see exactly where those packets get dropped? I suggest identifying which block drops the packets and then looking deeper into that block to understand why.
>
>Just to double check: what is the format of your input packets? The qos_sched application requires Ethernet frames with Q-in-Q (i.e. 2x VLAN tags) and IPv4 packets. If the input traffic does not fit the format expected by the application, then the application will not be able to work correctly. The format of the input packets is described in the "QoS Scheduler Sample Application" chapter of the Intel DPDK Sample Guide: the VLAN ID of 1st VLAN label specifies the subport ID (to be used by the traffic manager for the current packet), the VLAN ID of the 2nd VLAN label specifies the pipe ID within the subport; assuming IP destination = A.B.C.D, the 2 least significant bits of byte C specify the Traffic Class of the packet, while the 2 least significant bits of byte D specify the queue ID within the Traffic Class.
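[Editorial note: the field mapping described above can be sketched as follows. This is a hypothetical illustration using stdlib Python on a raw frame buffer, not qos_sched code; offsets assume an untruncated Q-in-Q Ethernet + IPv4 frame.]

```python
import struct

def classify(frame: bytes):
    """Extract (subport, pipe, tc, queue) per the mapping described above."""
    # Outer VLAN tag follows the 12 bytes of dst/src MAC: TPID at offset 12,
    # TCI at offset 14; the low 12 bits of the TCI are the VLAN ID.
    outer_vid = struct.unpack_from("!H", frame, 14)[0] & 0x0FFF  # subport ID
    # Inner VLAN tag: TPID at offset 16, TCI at offset 18.
    inner_vid = struct.unpack_from("!H", frame, 18)[0] & 0x0FFF  # pipe ID
    # IPv4 header starts after both tags and the EtherType:
    # 12 + 4 + 4 + 2 = 22; destination IP A.B.C.D is at IP offset 16,
    # i.e. frame bytes 38..41.
    c, d = frame[40], frame[41]
    tc = c & 0x03      # traffic class: 2 least significant bits of byte C
    queue = d & 0x03   # queue within the TC: 2 LSBs of byte D
    return outer_vid, inner_vid, tc, queue
```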
>
>Hope this helps!
>
>Regards,
>Cristian
>
>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jim Jia
>Sent: Wednesday, September 11, 2013 8:30 AM
>To: dev@dpdk.org
>Subject: [dpdk-dev] DPDK QoS performance issue in DPDK 1.4.1.
>
> Hello, everyone!
> I have a question about DPDK's QoS library performance. These days, I am testing it using the DPDK example application qos_sched. Before the test, I modified profile.cfg. In my opinion, no packets should be dropped by qos_sched when it processes traffic at about 3 Gbps with 64-byte packets. However, I find qos_sched drops a few packets (about 30) in that case. Is this normal, or did I do something wrong? I am surprised by the results and need help. Thanks a lot!
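[Editorial note: a rough sketch of the packet rate implied by the scenario above, assuming 64-byte frames and the 24-byte "frame overhead" from the profile.cfg below.]

```python
# Back-of-the-envelope packet rate at the stated load.
line_rate_bps = 3_000_000_000  # ~3 Gb/s offered load
frame_bytes = 64 + 24          # 64-byte frame + 24-byte per-frame overhead
pps = line_rate_bps / (frame_bytes * 8)
print(f"{pps:,.0f} packets/second")  # ~4.26 million pps
```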
>
>
>My profile.cfg is as follows:
>
>; Port configuration
>[port]
>frame overhead = 24
>number of subports per port = 1
>number of pipes per subport = 2
>queue sizes = 64 64 64 64
>
>; Subport configuration
>[subport 0]
>tb rate = 1250000000 ; Bytes per second
>tb size = 1250000000 ; Bytes
>
>tc 0 rate = 1250000000 ; Bytes per second
>tc 1 rate = 1250000000 ; Bytes per second
>tc 2 rate = 1250000000 ; Bytes per second
>tc 3 rate = 1250000000 ; Bytes per second
>tc period = 100000 ; Milliseconds
>
>pipe 0-1 = 0 ; These pipes are configured with pipe profile 0
>
>; Pipe configuration
>[pipe profile 0]
>tb rate = 1250000000 ; Bytes per second
>tb size = 1250000000 ; Bytes
>
>tc 0 rate = 1250000000 ; Bytes per second
>tc 1 rate = 1250000000 ; Bytes per second
>tc 2 rate = 1250000000 ; Bytes per second
>tc 3 rate = 1250000000 ; Bytes per second
>tc period = 400000 ; Milliseconds
>
>tc 3 oversubscription weight = 1
>
>tc 0 wrr weights = 1 1 1 1
>tc 1 wrr weights = 1 1 1 1
>tc 2 wrr weights = 1 1 1 1
>tc 3 wrr weights = 1 1 1 1
>
>; RED params per traffic class and color (Green / Yellow / Red)
>[red]
>tc 0 wred min = 48 40 32
>tc 0 wred max = 64 64 64
>tc 0 wred inv prob = 10 10 10
>tc 0 wred weight = 9 9 9
>
>tc 1 wred min = 48 40 32
>tc 1 wred max = 64 64 64
>tc 1 wred inv prob = 10 10 10
>tc 1 wred weight = 9 9 9
>
>tc 2 wred min = 48 40 32
>tc 2 wred max = 64 64 64
>tc 2 wred inv prob = 10 10 10
>tc 2 wred weight = 9 9 9
>
>tc 3 wred min = 48 40 32
>tc 3 wred max = 64 64 64
>tc 3 wred inv prob = 10 10 10
>tc 3 wred weight = 9 9 9
>
>Jim jia
[not found] <201e7096.7e64.1410bed164b.Coremail.mydpdk@126.com>
[not found] ` <69D4F1CAB263EA44A83BF67D2ECDA25519C41424@IRSMSX103.ger.corp.intel.com>
2013-09-21 11:58 ` Dumitrescu, Cristian
2013-09-22 8:34 ` Jim Jia [this message]