From: <gayathri.manepalli@wipro.com>
To: <dev@dpdk.org>
Cc: <ian.stokes@intel.com>
Subject: [dpdk-dev] [ovs-dev] Traffic scheduling by qos_sched library in DPDK
Date: Fri, 13 May 2016 06:43:51 +0000	[thread overview]
Message-ID: <TY1PR0301MB10561082644E7AD250B2BD5A8E740@TY1PR0301MB1056.apcprd03.prod.outlook.com> (raw)

Hi Team,

I have started working on implementing QoS shaping in OVS+DPDK by making use of the rte_sched library provided in DPDK. Meanwhile, to compare performance, I started a performance test with the DPDK sample scheduling application (qos_sched). Below are the configuration details of the system I am using:

Server Type : Intel ATOM

Huge page configuration (each page is 2 MB):

[root@ATOM qos_sched]# grep -i huge /proc/meminfo
AnonHugePages:     90112 kB
HugePages_Total:    7168
HugePages_Free:     7168
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

Port capacity: 1G (all four ports)

I am able to run qos_sched successfully with three pfc's, as below:

./build/qos_sched -c 0x3e -n 1 --socket-mem 14336 -- --pfc "0,1,2,3" --pfc "1,0,3,2" --pfc "2,3,4,5" --cfg ./profile.cfg
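
For clarity, my understanding of the --pfc argument (from the qos_sched sample application guide) is "RX port, TX port, RX lcore, worker lcore", so for example:

    --pfc "0,1,2,3"    # RX on port 0, TX on port 1, RX thread on lcore 2,
                       # worker (scheduler) thread on lcore 3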

Issue:

When I try to add one more packet flow configuration (the 4th one), I get the error below.

Command: ./build/qos_sched -c 0x3e -n 1 --socket-mem 14336 -- --pfc "0,1,2,3" --pfc "1,0,3,2" --pfc "2,3,4,5" --pfc "3,2,5,4" --cfg ./profile.cfg

Error:

done:  Link Up - speed 1000 Mbps - full-duplex
SCHED: Low level config for pipe profile 0:
        Token bucket: period = 1, credits per period = 1, size = 100000
        Traffic classes: period = 5000000, credits per period = [5000000, 5000000, 5000000, 5000000]
        Traffic class 3 oversubscription: weight = 0
        WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
SCHED: Low level config for subport 0:
        Token bucket: period = 1, credits per period = 1, size = 100000
        Traffic classes: period = 1250000, credits per period = [1250000, 1250000, 1250000, 1250000]
        Traffic class 3 oversubscription: wm min = 0, wm max = 0
EAL: Error - exiting with code: 1
  Cause: Cannot init mbuf pool for socket 3

Analysis:

I have analyzed the DPDK source code to find the root cause. I can see that in qos_sched, the memory allocation while creating each mbuf pool (rte_mempool_create) for the corresponding RX port is as below:


MBUF_SIZE = (1528 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)

mp_size  =  2*1024*1024
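
For reference, a minimal sketch of the per-pfc pool allocation I am referring to (simplified from examples/qos_sched/init.c; the function name create_pfc_pool and the exact arguments are illustrative, not the literal code):

    #include <stdint.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_ethdev.h>

    #define MBUF_SIZE (1528 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM)
    #define MP_SIZE   (2 * 1024 * 1024)   /* ~2M mbufs per pfc/RX port */

    static struct rte_mempool *
    create_pfc_pool(const char *name, uint8_t rx_port)
    {
            /* One pool of MP_SIZE elements is created for every pfc/RX port. */
            return rte_mempool_create(name, MP_SIZE, MBUF_SIZE,
                                      0,   /* cache size */
                                      sizeof(struct rte_pktmbuf_pool_private),
                                      rte_pktmbuf_pool_init, NULL,
                                      rte_pktmbuf_init, NULL,
                                      rte_eth_dev_socket_id(rx_port), 0);
    }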



From the above I understood that around 4 GB of huge pages is consumed for each pfc/RX port, whereas the ATOM machine can provide at most 7168 huge pages of 2 MB, i.e. 14336 MB in total. So I am not able to configure more than three pfc's. But I would like to measure the performance in 4-port and 6-port scenarios, which require 4-6 pfc's to be configured.
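
For the record, the back-of-envelope numbers behind that estimate (my own calculation, assuming sizeof(struct rte_mbuf) = 128 bytes and RTE_PKTMBUF_HEADROOM = 128 bytes, and ignoring per-object mempool overhead):

    per element : 1528 + 128 + 128                ~ 1784 bytes
    per pool    : 2 * 1024 * 1024 * 1784 bytes    ~ 3.7 GB (call it ~4 GB with overhead)
    3 pools     : ~ 12 GB   -> fits in 14336 MB of huge pages
    4 pools     : ~ 15 GB+  -> exceeds 14336 MB, so the 4th pool allocation fails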



Is there any alternative through which I can configure a larger number of pfc's with the system configuration described above?



Thanks & Regards,

Gayathri

