From: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>
To: "gayathri.manepalli@wipro.com" <gayathri.manepalli@wipro.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Stokes, Ian" <ian.stokes@intel.com>
Subject: Re: [dpdk-dev] [ovs-dev] Traffic scheduling by qos_sched library in DPDK
Date: Tue, 17 May 2016 17:13:28 +0000
Message-ID: <3EB4FA525960D640B5BDFFD6A3D89126479BEDA8@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <TY1PR0301MB10561082644E7AD250B2BD5A8E740@TY1PR0301MB1056.apcprd03.prod.outlook.com>

Hi Gayathri,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of
> gayathri.manepalli@wipro.com
> Sent: Friday, May 13, 2016 7:44 AM
> To: dev@dpdk.org
> Cc: Stokes, Ian <ian.stokes@intel.com>
> Subject: [dpdk-dev] [ovs-dev] Traffic scheduling by qos_sched library in
> DPDK
> 
> Hi Team,
> 
> I started working on implementing the QoS Shaping in OVS+DPDK by making
> use of the rte_sched library provided in DPDK. 

Great news, thank you!

> Meanwhile, to compare the performance, I started a performance test with
> the DPDK sample scheduling application. Below are the configuration details
> of the system I am using:
> 
> Server Type : Intel ATOM
> 
> Huge page configuration: (Each page of 2M size)
> 
> [root@ATOM qos_sched]# grep -i huge /proc/meminfo
> AnonHugePages:     90112 kB
> HugePages_Total:    7168
> HugePages_Free:     7168
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> 
> Port Capacity : 1G (All four ports)
> I am able to successfully run the qos_sched with three pfc's as below,
> 
> ./build/qos_sched -c 0x3e -n 1 --socket-mem 14336 -- --pfc "0,1,2,3" --pfc
> "1,0,3,2" --pfc "2,3,4,5" --cfg ./profile.cfg
> 
> Issue :
> 
> When I try to add one more profile configuration flow (the 4th one), I get
> the error below:
> 
> Command : ./build/qos_sched -c 0x3e -n 1 --socket-mem 14336 -- --pfc
> "0,1,2,3" --pfc "1,0,3,2" --pfc "2,3,4,5" --pfc "3,2,5,4"  --cfg ./profile.cfg
> 
> Error:
> 
> done:  Link Up - speed 1000 Mbps - full-duplex
> SCHED: Low level config for pipe profile 0:
>         Token bucket: period = 1, credits per period = 1, size = 100000
>         Traffic classes: period = 5000000, credits per period = [5000000, 5000000,
> 5000000, 5000000]
>        Traffic class 3 oversubscription: weight = 0
>         WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> SCHED: Low level config for subport 0:
>         Token bucket: period = 1, credits per period = 1, size = 100000
>         Traffic classes: period = 1250000, credits per period = [1250000, 1250000,
> 1250000, 1250000]
>         Traffic class 3 oversubscription: wm min = 0, wm max = 0
> EAL: Error - exiting with code: 1
>   Cause: Cannot init mbuf pool for socket 3
> 
> Analysis:
> 
> I have analyzed the DPDK source code to find the root cause. I can see that
> in qos_sched, the memory allocated when creating each mbuf pool
> (rte_mempool_create) for the corresponding RX port is as below,
> 
> 
> MBUF_SIZE = (1528 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
> 
> mp_size  =  2*1024*1024
> 
> From the above I understood that, for each pfc / RX port, around 4 GB of
> huge pages is consumed, whereas the ATOM system supports at most 7168 huge
> pages of 2 MB, i.e. 14336 MB in total. So I am not able to configure beyond
> three pfc's, but I would like to measure the performance in the 4-port and
> 6-port scenarios, which require 4-6 pfc's to be configured.
> 
> Is there any alternative through which I can configure more pfc's with the
> system configuration provided above?
> 
> 

Yes, you are probably running out of memory.

The QoS hierarchical scheduler can be seen as a big reservoir of packets. Each rte_sched_port object basically has thousands of packet queues internally (the number of queues is configurable). Ideally, to protect against the worst-case scenario, the number of buffers provisioned per output port that has the hierarchical scheduler enabled needs to be at least equal to: number of scheduler queues x queue size. For example, for 64K queues (4K pipes with 16 queues each) of 64 packets each, this means 4M buffers per output port, which (assuming 2KB buffers in the mempool) leads to 8 GB of memory per output port. In the examples/qos_sched sample application we decided to go mid-way rather than provision for the worst case, so we use 2M buffers per output port.
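
To make the sizing arithmetic above concrete, here is a minimal stand-alone sketch of the worst-case calculation; the constants simply mirror the example numbers in the previous paragraph, they are not read from any real configuration:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Worst-case buffer provisioning for one scheduler-enabled output port.
 * All values are the illustrative numbers from the paragraph above. */
int main(void)
{
        uint64_t n_pipes = 4096;         /* 4K pipes per output port */
        uint64_t queues_per_pipe = 16;   /* fixed by librte_sched */
        uint64_t queue_size = 64;        /* packets per queue */
        uint64_t buf_size = 2048;        /* ~2KB per buffer in the mempool */

        uint64_t n_queues = n_pipes * queues_per_pipe;           /* 64K queues */
        uint64_t worst_case_bufs = n_queues * queue_size;        /* 4M buffers */
        uint64_t worst_case_bytes = worst_case_bufs * buf_size;  /* ~8 GB */

        printf("queues = %" PRIu64 ", buffers = %" PRIu64 ", memory = %" PRIu64 " MB\n",
               n_queues, worst_case_bufs, worst_case_bytes / (1024 * 1024));
        return 0;
}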

So, possible workarounds to maximize the number of ports with hierarchical scheduler support given a fixed amount of memory:
1. Lower the number of buffers you use per output port, e.g. from 2M to 1M, and retest;
2. Use fewer pipes per output port (this is configurable), so the number of scheduling queues is lower, which reduces the pressure on the mempool size; see the sketch after this list.
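
Along the lines of option 2, below is a minimal, untested sketch of a scheduler port configuration with fewer pipes per subport. All field values are placeholders (take the real ones from your profile.cfg), and pipe_profiles[] is assumed to be defined elsewhere, the same way examples/qos_sched does it:

#include <rte_sched.h>

/* Sketch only: a smaller hierarchy (fewer pipes => fewer queues) limits the
 * number of packets the scheduler can hold, so a smaller mempool per output
 * port becomes sufficient. */
extern struct rte_sched_pipe_params pipe_profiles[];

static struct rte_sched_port_params port_params = {
        .name = "port_sched_0",
        .socket = 0,
        .rate = 125000000,              /* 1 Gbps, expressed in bytes/second */
        .mtu = 1522,
        .frame_overhead = 24,
        .n_subports_per_port = 1,
        .n_pipes_per_subport = 1024,    /* e.g. 1K pipes instead of 4K */
        .qsize = {64, 64, 64, 64},      /* packets per traffic class queue */
        .pipe_profiles = pipe_profiles,
        .n_pipe_profiles = 1,
};

static struct rte_sched_port *
port_sched_init(void)
{
        return rte_sched_port_config(&port_params);
}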

Note that you can use distinct mempools per output port (as in the examples/qos_sched application) or a single big global mempool shared by all ports; this is transparent to the librte_sched library.
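
As a rough sketch of the single shared mempool variant (the pool name, buffer count and RX descriptor count below are made up for illustration):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define SHARED_NB_MBUF    (1024 * 1024)   /* e.g. 1M mbufs shared by all ports */
#define SHARED_CACHE_SIZE 256
#define SHARED_DATA_ROOM  (1528 + RTE_PKTMBUF_HEADROOM)

/* One global pool instead of one pool per RX port; librte_sched does not
 * care which mempool the mbufs passed through it were allocated from. */
static struct rte_mempool *
create_shared_pool(void)
{
        return rte_pktmbuf_pool_create("shared_pool", SHARED_NB_MBUF,
                        SHARED_CACHE_SIZE, 0, SHARED_DATA_ROOM, rte_socket_id());
}

/* The same pool is then passed to the RX queue of every port, e.g.:
 *
 *   rte_eth_rx_queue_setup(port_id, 0, 512,
 *                   rte_eth_dev_socket_id(port_id), NULL, shared_pool);
 */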

Regards,
Cristian

> 
> Thanks & Regards,
> 
> Gayathri
> 

