* [dpdk-dev] Traffic scheduling in DPDK
@ 2016-01-04 14:39 ravulakollu.kumar
2016-01-04 15:55 ` Singh, Jasvinder
0 siblings, 1 reply; 9+ messages in thread
From: ravulakollu.kumar @ 2016-01-04 14:39 UTC (permalink / raw)
To: dev
Hello All,
I have an issue running the qos_sched application in DPDK. Could someone tell me how to run the command and what each parameter does in the text below.
Application mandatory parameters:
--pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
multiple pfc can be configured in command line
Thanks,
Uday
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-04 14:39 [dpdk-dev] Traffic scheduling in DPDK ravulakollu.kumar
@ 2016-01-04 15:55 ` Singh, Jasvinder
2016-01-05 6:21 ` ravulakollu.kumar
0 siblings, 1 reply; 9+ messages in thread
From: Singh, Jasvinder @ 2016-01-04 15:55 UTC (permalink / raw)
To: ravulakollu.kumar, dev
Hi Uday,
> I have an issue running the qos_sched application in DPDK. Could someone
> tell me how to run the command and what each parameter does in the text
> below.
>
> Application mandatory parameters:
> --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> multiple pfc can be configured in command line
RX PORT - Specifies the port on which packets are received.
TX PORT - Specifies the port on which packets are transmitted.
RXCORE - Specifies the core used for the packet reception and classification stage of the QoS application.
WTCORE - Specifies the core used for the packet enqueue/dequeue operations (QoS scheduling) and for subsequently transmitting the packets out.
Multiple pfc entries can be specified, depending on the number of qos_sched instances required in the application. For example, to run two instances, the following can be used:
./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6" --cfg "profile.cfg"
The first qos_sched instance receives packets from port 0 and transmits them through port 1, while the second receives packets from port 2 and transmits through port 3. For a single qos_sched instance, the following can be used:
./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
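(In the examples above, the optional fifth value - "4" in "0,1,2,3,4" - names a dedicated TX LCORE; when it is omitted, as in "2,3,5,6", the WT core also handles transmission.)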
Thanks,
Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-04 15:55 ` Singh, Jasvinder
@ 2016-01-05 6:21 ` ravulakollu.kumar
2016-01-05 10:09 ` Singh, Jasvinder
0 siblings, 1 reply; 9+ messages in thread
From: ravulakollu.kumar @ 2016-01-05 6:21 UTC (permalink / raw)
To: jasvinder.singh; +Cc: dev
Thanks Jasvinder. I am running the below command:
./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
I bound two 1G physical ports to DPDK and ran the above command with the default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was not successful; I get the error below.
APP: Initializing port 0... PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680 dma_addr=0xbf87a2680
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0 hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
PMD: eth_igb_start(): <<
done: Link Up - speed 1000 Mbps - full-duplex
APP: Initializing port 1... PMD: eth_igb_rx_queue_setup(): sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80 dma_addr=0xbf8780e80
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance, consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0 hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
PMD: eth_igb_start(): <<
done: Link Up - speed 1000 Mbps - full-duplex
SCHED: Low level config for pipe profile 0:
Token bucket: period = 3277, credits per period = 8, size = 1000000
Traffic classes: period = 5000000, credits per period = [12207, 12207, 12207, 12207]
Traffic class 3 oversubscription: weight = 0
WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
EAL: Error - exiting with code: 1
Cause: Unable to config sched subport 0, err=-2
Please tell me whether I am missing any other configuration.
Thanks,
Uday
-----Original Message-----
From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
Sent: Monday, January 04, 2016 9:26 PM
To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service); dev@dpdk.org
Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
Hi Uday,
> I have an issue running the qos_sched application in DPDK. Could
> someone tell me how to run the command and what each parameter does
> in the text below.
>
> Application mandatory parameters:
> --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> multiple pfc can be configured in command line
RX PORT - Specifies the port on which packets are received.
TX PORT - Specifies the port on which packets are transmitted.
RXCORE - Specifies the core used for the packet reception and classification stage of the QoS application.
WTCORE - Specifies the core used for the packet enqueue/dequeue operations (QoS scheduling) and for subsequently transmitting the packets out.
Multiple pfc entries can be specified, depending on the number of qos_sched instances required in the application. For example, to run two instances, the following can be used:
./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6" --cfg "profile.cfg"
The first qos_sched instance receives packets from port 0 and transmits them through port 1, while the second receives packets from port 2 and transmits through port 3. For a single qos_sched instance, the following can be used:
./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
Thanks,
Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-05 6:21 ` ravulakollu.kumar
@ 2016-01-05 10:09 ` Singh, Jasvinder
2016-01-06 12:40 ` ravulakollu.kumar
2016-01-07 6:29 ` ravulakollu.kumar
0 siblings, 2 replies; 9+ messages in thread
From: Singh, Jasvinder @ 2016-01-05 10:09 UTC (permalink / raw)
To: ravulakollu.kumar; +Cc: dev
Hi Uday,
>
> Thanks Jasvinder. I am running the below command:
>
> ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
>
> I bound two 1G physical ports to DPDK and ran the above command with the
> default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> not successful; I get the error below.
>
> APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680 dma_addr=0xbf87a2680
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80 dma_addr=0xbf8780e80
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> SCHED: Low level config for pipe profile 0:
> Token bucket: period = 3277, credits per period = 8, size = 1000000
> Traffic classes: period = 5000000, credits per period = [12207, 12207, 12207,
> 12207]
> Traffic class 3 oversubscription: weight = 0
> WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> EAL: Error - exiting with code: 1
> Cause: Unable to config sched subport 0, err=-2
In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports in your application or amend profile.cfg to work with a 1 Gbps port. Please refer to the DPDK QoS framework document for more details on the various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
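As a rough illustration, here is a sketch of the subport section of profile.cfg amended for a 1 Gbps port. It assumes the rate fields of the sample configuration are expressed in bytes per second (so 10 Gbps = 1250000000), and the pipe profile rates would need to be scaled down by the same factor of 10:
[subport 0]
tb rate = 125000000            ; 1 Gbps in bytes/s (sample default: 1250000000)
tb size = 1000000
tc 0 rate = 125000000
tc 1 rate = 125000000
tc 2 rate = 125000000
tc 3 rate = 125000000
tc period = 10                 ; milliseconds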
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Monday, January 04, 2016 9:26 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
>
> > I have an issue running the qos_sched application in DPDK. Could
> > someone tell me how to run the command and what each parameter does
> > in the text below.
> >
> > Application mandatory parameters:
> > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > multiple pfc can be configured in command line
>
>
> RX PORT - Specifies the port on which packets are received.
> TX PORT - Specifies the port on which packets are transmitted.
> RXCORE - Specifies the core used for the packet reception and classification
> stage of the QoS application.
> WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> (QoS scheduling) and for subsequently transmitting the packets out.
>
> Multiple pfc entries can be specified, depending on the number of qos_sched
> instances required in the application. For example, to run two instances,
> the following can be used:
>
> ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6" --cfg
> "profile.cfg"
>
> The first qos_sched instance receives packets from port 0 and transmits
> them through port 1, while the second receives packets from port 2 and
> transmits through port 3. For a single qos_sched instance, the following
> can be used:
>
> ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
>
>
> Thanks,
> Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-05 10:09 ` Singh, Jasvinder
@ 2016-01-06 12:40 ` ravulakollu.kumar
2016-01-06 12:57 ` Singh, Jasvinder
2016-01-07 6:29 ` ravulakollu.kumar
1 sibling, 1 reply; 9+ messages in thread
From: ravulakollu.kumar @ 2016-01-06 12:40 UTC (permalink / raw)
To: jasvinder.singh; +Cc: dev
Thanks Jasvinder,
Does this application work on systems with multiple NUMA nodes?
Thanks,
Uday
-----Original Message-----
From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
Sent: Tuesday, January 05, 2016 3:40 PM
To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
Cc: dev@dpdk.org
Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
Hi Uday,
>
> Thanks Jasvinder. I am running the below command:
>
> ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
>
> I bound two 1G physical ports to DPDK and ran the above command with the
> default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> not successful; I get the error below.
>
> APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680 dma_addr=0xbf87a2680
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80 dma_addr=0xbf8780e80
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> SCHED: Low level config for pipe profile 0:
> Token bucket: period = 3277, credits per period = 8, size = 1000000
> Traffic classes: period = 5000000, credits per period = [12207,
> 12207, 12207, 12207]
> Traffic class 3 oversubscription: weight = 0
> WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> EAL: Error - exiting with code: 1
> Cause: Unable to config sched subport 0, err=-2
In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports in your application or amend profile.cfg to work with a 1 Gbps port. Please refer to the DPDK QoS framework document for more details on the various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Monday, January 04, 2016 9:26 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
>
> > I have an issue running the qos_sched application in DPDK. Could
> > someone tell me how to run the command and what each parameter does
> > in the text below.
> >
> > Application mandatory parameters:
> > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > multiple pfc can be configured in command line
>
>
> RX PORT - Specifies the port on which packets are received.
> TX PORT - Specifies the port on which packets are transmitted.
> RXCORE - Specifies the core used for the packet reception and classification
> stage of the QoS application.
> WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> (QoS scheduling) and for subsequently transmitting the packets out.
>
> Multiple pfc entries can be specified, depending on the number of qos_sched
> instances required in the application. For example, to run two instances,
> the following can be used:
>
> ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6"
> --cfg "profile.cfg"
>
> The first qos_sched instance receives packets from port 0 and transmits
> them through port 1, while the second receives packets from port 2 and
> transmits through port 3. For a single qos_sched instance, the following
> can be used:
>
> ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
>
>
> Thanks,
> Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-06 12:40 ` ravulakollu.kumar
@ 2016-01-06 12:57 ` Singh, Jasvinder
0 siblings, 0 replies; 9+ messages in thread
From: Singh, Jasvinder @ 2016-01-06 12:57 UTC (permalink / raw)
To: ravulakollu.kumar; +Cc: dev
> -----Original Message-----
> From: ravulakollu.kumar@wipro.com [mailto:ravulakollu.kumar@wipro.com]
> Sent: Wednesday, January 6, 2016 12:40 PM
> To: Singh, Jasvinder
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Thanks Jasvinder,
>
> Does this application work on systems with multiple NUMA nodes?
>
It does.
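For example (a hypothetical invocation - lcore and port numbers depend on your topology), hugepage memory can be reserved on each node with EAL's --socket-mem option, and for best performance the RX/WT lcores named in --pfc should sit on the same NUMA node as the NIC:
./build/qos_sched -c 0x1e -n 4 --socket-mem 4096,4096 -- --pfc "0,1,2,3,4" --cfg profile.cfg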
> Thanks,
> Uday
>
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Tuesday, January 05, 2016 3:40 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
> >
> > Thanks Jasvinder. I am running the below command:
> >
> > ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
> >
> > I bound two 1G physical ports to DPDK and ran the above command with the
> > default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> > not successful; I get the error below.
> >
> > APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680
> dma_addr=0xbf87a2680
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> > hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80
> dma_addr=0xbf8780e80
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> > hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > SCHED: Low level config for pipe profile 0:
> > Token bucket: period = 3277, credits per period = 8, size = 1000000
> > Traffic classes: period = 5000000, credits per period = [12207,
> > 12207, 12207, 12207]
> > Traffic class 3 oversubscription: weight = 0
> > WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> > EAL: Error - exiting with code: 1
> > Cause: Unable to config sched subport 0, err=-2
>
>
> In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps
> rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the
> NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports
> in your application or amend profile.cfg to work with a 1 Gbps port.
> Please refer to the DPDK QoS framework document for more details on the
> various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
>
>
> > -----Original Message-----
> > From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> > Sent: Monday, January 04, 2016 9:26 PM
> > To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> > dev@dpdk.org
> > Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
> >
> > Hi Uday,
> >
> >
> > > I have an issue running the qos_sched application in DPDK. Could
> > > someone tell me how to run the command and what each parameter does
> > > in the text below.
> > >
> > > Application mandatory parameters:
> > > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > > multiple pfc can be configured in command line
> >
> >
> > RX PORT - Specifies the port on which packets are received.
> > TX PORT - Specifies the port on which packets are transmitted.
> > RXCORE - Specifies the core used for the packet reception and classification
> > stage of the QoS application.
> > WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> > (QoS scheduling) and for subsequently transmitting the packets out.
> >
> > Multiple pfc entries can be specified, depending on the number of qos_sched
> > instances required in the application. For example, to run two instances,
> > the following can be used:
> >
> > ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6"
> > --cfg "profile.cfg"
> >
> > The first qos_sched instance receives packets from port 0 and transmits
> > them through port 1, while the second receives packets from port 2 and
> > transmits through port 3. For a single qos_sched instance, the following
> > can be used:
> >
> > ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
> >
> >
> > Thanks,
> > Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-05 10:09 ` Singh, Jasvinder
2016-01-06 12:40 ` ravulakollu.kumar
@ 2016-01-07 6:29 ` ravulakollu.kumar
2016-01-07 10:14 ` Singh, Jasvinder
1 sibling, 1 reply; 9+ messages in thread
From: ravulakollu.kumar @ 2016-01-07 6:29 UTC (permalink / raw)
To: jasvinder.singh; +Cc: dev
Hi Jasvinder,
Below is my system configuration:
Hugepages:
--------------
[root@qos_sched]# grep -i huge /proc/meminfo
AnonHugePages: 4096 kB
HugePages_Total: 8000
HugePages_Free: 7488
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
NUMA Nodes:
------------------
NUMA node0 CPU(s): 0,2,4,6,8,10
NUMA node1 CPU(s): 1,3,5,7,9,11
Ports:
--------
Two Ethernet 10G 2P X520 Adapter
Note: These two PCI devices are connected to NUMA socket 0
Below is the QoS scheduler command I am running:
./build/qos_sched -c 0x14 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4" --cfg ./profile.cfg
After running it, I get the below error:
APP: EAL core mask not configured properly, must be 16 instead of 14
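(0x14 = 0b10100 enables only lcores 2 and 4 - the RX and WT lcores named in --pfc - whereas 0x16 = 0b10110 adds lcore 1, presumably needed as the EAL master lcore.)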
So I changed the command line as below:
./build/qos_sched -c 0x16 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4" --cfg ./profile.cfg
After running it, I get a different error, shown below:
PANIC in rte_eth_dev_data_alloc():
Cannot allocate memzone for ethernet port data
10: [./build/qos_sched() [0x4039c9]]
9: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7fba5e95faf5]]
8: [./build/qos_sched(main+0x9) [0x403949]]
7: [./build/qos_sched(app_parse_args+0x2b) [0x4040eb]]
6: [/root/DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(rte_eal_init+0xac2) [0x7fba5f8ba452]]
5: [/root/DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(rte_eal_pci_probe+0x11d) [0x7fba5f8e767d]]
4: [/root /DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(+0x11cafc) [0x7fba5f982afc]]
3: [/root /DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(+0x11caa4) [0x7fba5f982aa4]]
2: [/root/DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(__rte_panic+0xcb) [0x7fba5f894438]]
1: [/root/DPDK/x86_64-ivshmem-linuxapp-gcc/lib/libintel_dpdk.so(rte_dump_stack+0x18) [0x7fba5f8f2128]]
Aborted
Could you help me run this app with my system configuration?
Thanks,
Uday
-----Original Message-----
From: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
Sent: Wednesday, January 06, 2016 6:10 PM
To: 'Singh, Jasvinder'
Cc: dev@dpdk.org
Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
Thanks Jasvinder,
Does this application work on systems with multiple NUMA nodes?
Thanks,
Uday
-----Original Message-----
From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
Sent: Tuesday, January 05, 2016 3:40 PM
To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
Cc: dev@dpdk.org
Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
Hi Uday,
>
> Thanks Jasvinder. I am running the below command:
>
> ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
>
> I bound two 1G physical ports to DPDK and ran the above command with the
> default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> not successful; I get the error below.
>
> APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680 dma_addr=0xbf87a2680
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80 dma_addr=0xbf8780e80
> PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> consider setting the TX WTHRESH value to 4, 8, or 16.
> PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> PMD: eth_igb_start(): <<
> done: Link Up - speed 1000 Mbps - full-duplex
> SCHED: Low level config for pipe profile 0:
> Token bucket: period = 3277, credits per period = 8, size = 1000000
> Traffic classes: period = 5000000, credits per period = [12207,
> 12207, 12207, 12207]
> Traffic class 3 oversubscription: weight = 0
> WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> EAL: Error - exiting with code: 1
> Cause: Unable to config sched subport 0, err=-2
In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports in your application or amend profile.cfg to work with a 1 Gbps port. Please refer to the DPDK QoS framework document for more details on the various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Monday, January 04, 2016 9:26 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
>
> > I have an issue running the qos_sched application in DPDK. Could
> > someone tell me how to run the command and what each parameter does
> > in the text below.
> >
> > Application mandatory parameters:
> > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > multiple pfc can be configured in command line
>
>
> RX PORT - Specifies the port on which packets are received.
> TX PORT - Specifies the port on which packets are transmitted.
> RXCORE - Specifies the core used for the packet reception and classification
> stage of the QoS application.
> WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> (QoS scheduling) and for subsequently transmitting the packets out.
>
> Multiple pfc entries can be specified, depending on the number of qos_sched
> instances required in the application. For example, to run two instances,
> the following can be used:
>
> ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6"
> --cfg "profile.cfg"
>
> The first qos_sched instance receives packets from port 0 and transmits
> them through port 1, while the second receives packets from port 2 and
> transmits through port 3. For a single qos_sched instance, the following
> can be used:
>
> ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
>
>
> Thanks,
> Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-07 6:29 ` ravulakollu.kumar
@ 2016-01-07 10:14 ` Singh, Jasvinder
2016-01-07 10:21 ` ravulakollu.kumar
0 siblings, 1 reply; 9+ messages in thread
From: Singh, Jasvinder @ 2016-01-07 10:14 UTC (permalink / raw)
To: ravulakollu.kumar; +Cc: dev
Hi Uday,
> -----Original Message-----
> From: ravulakollu.kumar@wipro.com [mailto:ravulakollu.kumar@wipro.com]
> Sent: Thursday, January 7, 2016 6:29 AM
> To: Singh, Jasvinder
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Jasvinder,
>
> Below is my system configuration:
>
> Hugepages:
> --------------
> [root@qos_sched]# grep -i huge /proc/meminfo
> AnonHugePages: 4096 kB
> HugePages_Total: 8000
> HugePages_Free: 7488
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
>
> NUMA Nodes:
> ------------------
> NUMA node0 CPU(s): 0,2,4,6,8,10
> NUMA node1 CPU(s): 1,3,5,7,9,11
>
> Ports:
> --------
> Two Ethernet 10G 2P X520 Adapter
>
> Note: These two PCI devices are connected to NUMA socket 0
>
> Below is the QoS scheduler command I am running:
>
> ./build/qos_sched -c 0x14 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4" --cfg
> ./profile.cfg
>
> After running it, I get the below error:
>
> APP: EAL core mask not configured properly, must be 16 instead of 14
>
> So I changed the command line as below:
>
> ./build/qos_sched -c 0x16 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4" --cfg
> ./profile.cfg
>
> After running it, I get a different error, shown below:
>
> PANIC in rte_eth_dev_data_alloc():
> Cannot allocate memzone for ethernet port data
> 10: [./build/qos_sched() [0x4039c9]]
> 9: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7fba5e95faf5]]
> 8: [./build/qos_sched(main+0x9) [0x403949]]
> 7: [./build/qos_sched(app_parse_args+0x2b) [0x4040eb]]
> 6: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_eal_init+0xac2) [0x7fba5f8ba452]]
> 5: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_eal_pci_probe+0x11d) [0x7fba5f8e767d]]
> 4: [/root /DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(+0x11cafc) [0x7fba5f982afc]]
> 3: [/root /DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(+0x11caa4) [0x7fba5f982aa4]]
> 2: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(__rte_panic+0xcb) [0x7fba5f894438]]
> 1: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_dump_stack+0x18) [0x7fba5f8f2128]] Aborted
>
> Could you help me run this app with my system configuration?
I guess you are reserving too little memory with --socket-mem. In qos_sched, each mbuf occupies (1528 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM) bytes and the mempool size is 2*1024*1024 mbufs, which altogether comes to approximately 4 GB. So try using the default memory allocated from hugepages or, if you want to reserve less than the default, try --socket-mem 5120,0.
Please refer to the source code as well to get an idea of the memory requirements of this application.
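As a back-of-the-envelope check, here is a minimal C sketch of that estimate. The 1528-byte data room and the 2M-mbuf pool size come from the thread above; the 128-byte struct rte_mbuf and 128-byte RTE_PKTMBUF_HEADROOM are assumed values typical of DPDK builds of this era, not taken from the thread:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* Per-mbuf footprint: data room + mbuf header + headroom. */
        uint64_t mbuf_bytes = 1528 + 128 /* assumed sizeof(struct rte_mbuf) */
                                   + 128 /* assumed RTE_PKTMBUF_HEADROOM */;
        uint64_t pool_mbufs = 2ULL * 1024 * 1024;  /* mempool size in qos_sched */
        printf("approx. %.2f GB per mempool\n",
               (double)(mbuf_bytes * pool_mbufs) / 1e9);  /* ~3.74 GB */
        return 0;
}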
> Thanks,
> Uday
>
> -----Original Message-----
> From: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
> Sent: Wednesday, January 06, 2016 6:10 PM
> To: 'Singh, Jasvinder'
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Thanks Jasvinder,
>
> Does this application work on systems with multiple NUMA nodes?
>
> Thanks,
> Uday
>
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Tuesday, January 05, 2016 3:40 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
> >
> > Thanks Jasvinder. I am running the below command:
> >
> > ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg ./profile.cfg
> >
> > I bound two 1G physical ports to DPDK and ran the above command with the
> > default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> > not successful; I get the error below.
> >
> > APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680
> dma_addr=0xbf87a2680
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> > hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80
> dma_addr=0xbf8780e80
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> > hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > SCHED: Low level config for pipe profile 0:
> > Token bucket: period = 3277, credits per period = 8, size = 1000000
> > Traffic classes: period = 5000000, credits per period = [12207,
> > 12207, 12207, 12207]
> > Traffic class 3 oversubscription: weight = 0
> > WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> > EAL: Error - exiting with code: 1
> > Cause: Unable to config sched subport 0, err=-2
>
>
> In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps
> rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the
> NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports
> in your application or amend profile.cfg to work with a 1 Gbps port.
> Please refer to the DPDK QoS framework document for more details on the
> various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
>
>
> > -----Original Message-----
> > From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> > Sent: Monday, January 04, 2016 9:26 PM
> > To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> > dev@dpdk.org
> > Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
> >
> > Hi Uday,
> >
> >
> > > I have an issue running the qos_sched application in DPDK. Could
> > > someone tell me how to run the command and what each parameter does
> > > in the text below.
> > >
> > > Application mandatory parameters:
> > > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > > multiple pfc can be configured in command line
> >
> >
> > RX PORT - Specifies the port on which packets are received.
> > TX PORT - Specifies the port on which packets are transmitted.
> > RXCORE - Specifies the core used for the packet reception and classification
> > stage of the QoS application.
> > WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> > (QoS scheduling) and for subsequently transmitting the packets out.
> >
> > Multiple pfc entries can be specified, depending on the number of qos_sched
> > instances required in the application. For example, to run two instances,
> > the following can be used:
> >
> > ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6"
> > --cfg "profile.cfg"
> >
> > The first qos_sched instance receives packets from port 0 and transmits
> > them through port 1, while the second receives packets from port 2 and
> > transmits through port 3. For a single qos_sched instance, the following
> > can be used:
> >
> > ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
> >
> >
> > Thanks,
> > Jasvinder
* Re: [dpdk-dev] Traffic scheduling in DPDK
2016-01-07 10:14 ` Singh, Jasvinder
@ 2016-01-07 10:21 ` ravulakollu.kumar
0 siblings, 0 replies; 9+ messages in thread
From: ravulakollu.kumar @ 2016-01-07 10:21 UTC (permalink / raw)
To: jasvinder.singh; +Cc: dev
Thanks Jasvinder for your quick response.
-----Original Message-----
From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
Sent: Thursday, January 07, 2016 3:44 PM
To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
Cc: dev@dpdk.org
Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
Hi Uday,
> -----Original Message-----
> From: ravulakollu.kumar@wipro.com [mailto:ravulakollu.kumar@wipro.com]
> Sent: Thursday, January 7, 2016 6:29 AM
> To: Singh, Jasvinder
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Jasvinder,
>
> Below is my system configuration:
>
> Hugepages:
> --------------
> [root@qos_sched]# grep -i huge /proc/meminfo
> AnonHugePages: 4096 kB
> HugePages_Total: 8000
> HugePages_Free: 7488
> HugePages_Rsvd: 0
> HugePages_Surp: 0
> Hugepagesize: 2048 kB
>
> NUMA Nodes:
> ------------------
> NUMA node0 CPU(s): 0,2,4,6,8,10
> NUMA node1 CPU(s): 1,3,5,7,9,11
>
> Ports:
> --------
> Two Ethernet 10G 2P X520 Adapter
>
> Note: These two PCI devices are connected to NUMA socket 0
>
> Below is the QoS scheduler command I am running:
>
> ./build/qos_sched -c 0x14 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4"
> --cfg ./profile.cfg
>
> After running it, I get the below error:
>
> APP: EAL core mask not configured properly, must be 16 instead
> of 14
>
> So I changed the command line as below:
>
> ./build/qos_sched -c 0x16 -n 1 --socket-mem 1024,0 -- --pfc "0,1,2,4"
> --cfg ./profile.cfg
>
> After running it, I get a different error, shown below:
>
> PANIC in rte_eth_dev_data_alloc():
> Cannot allocate memzone for ethernet port data
> 10: [./build/qos_sched() [0x4039c9]]
> 9: [/lib64/libc.so.6(__libc_start_main+0xf5) [0x7fba5e95faf5]]
> 8: [./build/qos_sched(main+0x9) [0x403949]]
> 7: [./build/qos_sched(app_parse_args+0x2b) [0x4040eb]]
> 6: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_eal_init+0xac2) [0x7fba5f8ba452]]
> 5: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_eal_pci_probe+0x11d) [0x7fba5f8e767d]]
> 4: [/root /DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(+0x11cafc) [0x7fba5f982afc]]
> 3: [/root /DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(+0x11caa4) [0x7fba5f982aa4]]
> 2: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(__rte_panic+0xcb) [0x7fba5f894438]]
> 1: [/root/DPDK/x86_64-ivshmem-linuxapp-
> gcc/lib/libintel_dpdk.so(rte_dump_stack+0x18) [0x7fba5f8f2128]]
> Aborted
>
> Could you help me run this app with my system configuration?
I guess you are reserving too little memory with --socket-mem. In qos_sched, each mbuf occupies (1528 + sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM) bytes and the mempool size is 2*1024*1024 mbufs, which altogether comes to approximately 4 GB. So try using the default memory allocated from hugepages or, if you want to reserve less than the default, try --socket-mem 5120,0.
Please refer to the source code as well to get an idea of the memory requirements of this application.
> Thanks,
> Uday
>
> -----Original Message-----
> From: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
> Sent: Wednesday, January 06, 2016 6:10 PM
> To: 'Singh, Jasvinder'
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Thanks Jasvinder,
>
> Does this application work on systems with multiple NUMA nodes?
>
> Thanks,
> Uday
>
> -----Original Message-----
> From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> Sent: Tuesday, January 05, 2016 3:40 PM
> To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service)
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
>
> Hi Uday,
>
> >
> > Thanks Jasvinder. I am running the below command:
> >
> > ./build/qos_sched -c 0xe -n 1 -- --pfc "0,1,3,2" --cfg
> > ./profile.cfg
> >
> > I bound two 1G physical ports to DPDK and ran the above command with the
> > default profile in profile.cfg, using lcores 3 and 2 for RX and TX. It was
> > not successful; I get the error below.
> >
> > APP: Initializing port 0... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20ba2240 hw_ring=0x7f5b20ba2680
> dma_addr=0xbf87a2680
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b910c0
> > hw_ring=0x7f5b20b92100 dma_addr=0xbf8792100
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > APP: Initializing port 1... PMD: eth_igb_rx_queue_setup():
> > sw_ring=0x7f5b20b80a40 hw_ring=0x7f5b20b80e80
> dma_addr=0xbf8780e80
> > PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
> > consider setting the TX WTHRESH value to 4, 8, or 16.
> > PMD: eth_igb_tx_queue_setup(): sw_ring=0x7f5b20b6f8c0
> > hw_ring=0x7f5b20b70900 dma_addr=0xbf8770900
> > PMD: eth_igb_start(): <<
> > done: Link Up - speed 1000 Mbps - full-duplex
> > SCHED: Low level config for pipe profile 0:
> > Token bucket: period = 3277, credits per period = 8, size = 1000000
> > Traffic classes: period = 5000000, credits per period = [12207,
> > 12207, 12207, 12207]
> > Traffic class 3 oversubscription: weight = 0
> > WRR cost: [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]
> > EAL: Error - exiting with code: 1
> > Cause: Unable to config sched subport 0, err=-2
>
>
> In the default profile.cfg, it is assumed that all the NIC ports have a 10 Gbps
> rate. The above error occurs when the subport's tb_rate (10 Gbps) exceeds the
> NIC port's capacity (1 Gbps). Therefore, you need to either use 10 Gbps ports
> in your application or amend profile.cfg to work with a 1 Gbps port.
> Please refer to the DPDK QoS framework document for more details on the
> various parameters - http://dpdk.org/doc/guides/prog_guide/qos_framework.html
>
>
> > -----Original Message-----
> > From: Singh, Jasvinder [mailto:jasvinder.singh@intel.com]
> > Sent: Monday, January 04, 2016 9:26 PM
> > To: Ravulakollu Udaya Kumar (WT01 - Product Engineering Service);
> > dev@dpdk.org
> > Subject: RE: [dpdk-dev] Traffic scheduling in DPDK
> >
> > Hi Uday,
> >
> >
> > > I have an issue running the qos_sched application in DPDK. Could
> > > someone tell me how to run the command and what each parameter does
> > > in the text below.
> > >
> > > Application mandatory parameters:
> > > --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE" : Packet flow configuration
> > > multiple pfc can be configured in command line
> >
> >
> > RX PORT - Specifies the port on which packets are received.
> > TX PORT - Specifies the port on which packets are transmitted.
> > RXCORE - Specifies the core used for the packet reception and classification
> > stage of the QoS application.
> > WTCORE - Specifies the core used for the packet enqueue/dequeue operations
> > (QoS scheduling) and for subsequently transmitting the packets out.
> >
> > Multiple pfc entries can be specified, depending on the number of qos_sched
> > instances required in the application. For example, to run two instances,
> > the following can be used:
> >
> > ./build/qos_sched -c 0x7e -n 4 -- --pfc "0,1,2,3,4" --pfc "2,3,5,6"
> > --cfg "profile.cfg"
> >
> > The first qos_sched instance receives packets from port 0 and transmits
> > them through port 1, while the second receives packets from port 2 and
> > transmits through port 3. For a single qos_sched instance, the following
> > can be used:
> >
> > ./build/qos_sched -c 0x1e -n 4 -- --pfc "0,1,2,3,4" --cfg "profile.cfg"
> >
> >
> > Thanks,
> > Jasvinder
Thread overview: 9 messages
2016-01-04 14:39 [dpdk-dev] Traffic scheduling in DPDK ravulakollu.kumar
2016-01-04 15:55 ` Singh, Jasvinder
2016-01-05 6:21 ` ravulakollu.kumar
2016-01-05 10:09 ` Singh, Jasvinder
2016-01-06 12:40 ` ravulakollu.kumar
2016-01-06 12:57 ` Singh, Jasvinder
2016-01-07 6:29 ` ravulakollu.kumar
2016-01-07 10:14 ` Singh, Jasvinder
2016-01-07 10:21 ` ravulakollu.kumar