DPDK usage discussions
* Re: [dpdk-users] issues with running ip_pipeline sample application
@ 2015-11-17 11:35 Eli Britstein
  0 siblings, 0 replies; 4+ messages in thread
From: Eli Britstein @ 2015-11-17 11:35 UTC (permalink / raw)
  To: users

I also encountered the same issue.
It seems the NIC does not report link-up fast enough for that code; for me it sometimes worked and sometimes did not.
As a workaround, I added a sleep before the link check (before the call to app_ports_check_link() in init.c).
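
For reference, the change is along these lines (a minimal, untested sketch against the DPDK 2.1 examples/ip_pipeline/init.c; the one-second delay is arbitrary):

#include <unistd.h> /* for sleep() */

/* in init.c, immediately before the existing link check: */
sleep(1); /* crude wait for the NICs to finish reporting link up */
app_ports_check_link(app);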


* Re: [dpdk-users] issues with running ip_pipeline sample application
       [not found]     ` <CAGZfBEj4j0Kx5co7La-RDGmdQEEWi_gCeLuWktJ3KHGTXz7bgw@mail.gmail.com>
@ 2015-11-17 21:06       ` Zhang, Roy Fan
  0 siblings, 0 replies; 4+ messages in thread
From: Zhang, Roy Fan @ 2015-11-17 21:06 UTC (permalink / raw)
  To: Grace Liu; +Cc: users

Hello,

Could you provide the output displayed?
After the pipeline(s) are initialized, you should see the following terminal output:

[APP] LINK0 (0) (10 Gbps) UP
[APP] LINK1 (1) (10 Gbps) UP
[APP] LINK2 (2) (10 Gbps) UP
[APP] LINK3 (3) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
[PIPELINE1] Pass-through

Here you should be able to type CLI commands such as "p 1 ping" to check whether the pipeline thread is alive. If it is alive, no message is shown; if it is not, an error message is displayed. The program exits only when you type the "quit" command. The best way to check whether the example is still working is to check whether traffic is flowing.
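
For example, at the prompt:

pipeline> p 1 ping
pipeline> quit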

Meanwhile, since ip_pipeline is a relatively large example, you can build your own DPDK application (basic pass-through, routing, or even more complex flow classification and Q-in-Q encapsulation) simply by providing your own configuration file. Currently ip_pipeline does not ship with many example configuration files (only one simple pass-through, I am afraid).

You may find useful information on how to build your configuration file at http://dpdk.org/doc/guides/sample_app_ug/ip_pipeline.html#.
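
For instance, a two-port pass-through configuration might look roughly like this (an untested sketch following the section syntax described in that guide; the core number and queue wiring are illustrative):

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0
pktq_out = TXQ1.0 TXQ0.0

Such a file would be passed with the -f option, together with a matching port mask such as -p 0x3.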

If you have any further questions, please do contact me.

Regards,
Fan

From: Grace Liu [mailto:guyue.liu@gmail.com]
Sent: Tuesday, November 17, 2015 5:23 PM
To: Zhang, Roy Fan <roy.fan.zhang@intel.com>
Subject: Re: [dpdk-users] issues with running ip_pipeline sample application

Hello,

Thanks for your reply. I've changed the port mask to 0x3 and added a sleep() call; now both LINKs come up, but then the program exits and my terminal freezes. Do you have any idea about this problem?

Thanks,
Grace


PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using simple tx path
[APP] Initializing LINK1 (1) (1 RXQ, 1 TXQ) ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=1, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using simple tx path
[APP] LINK0 (0) (10 Gbps) UP
[APP] LINK1 (1) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
Cannot find LINK2 for RXQ2.0

On Tue, Nov 17, 2015 at 11:40 AM, Zhang, Roy Fan <roy.fan.zhang@intel.com> wrote:
Hello,

Thanks for using ip_pipeline.
If l2fwd works on your host, standalone ip_pipeline should also work.
The problem is likely the port mask 0x11 in your command.
The port mask works this way:

If your board has X NIC ports, the port mask is an X-bit unsigned integer.
A "1" in the Nth bit indicates that the Nth NIC port will be used by the ip_pipeline application.
E.g., 0x3 means you want to use the 1st and 2nd NIC ports, while 0x11 means the 1st and 5th.

To map a port's sequence number to the actual NIC, please use the following command: ./tools/dpdk_nic_bind.py --status

The sample output is shown as follows:
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 1st port, port mask 0x01
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 2nd port, port mask 0x02
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 3rd port, port mask 0x04
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 4th port, port mask 0x08

To use all 4 ports, include the -p 0x0f option.
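
As an illustration, here is how a bit mask selects ports (a hypothetical standalone helper, not code from ip_pipeline):

#include <stdio.h>
#include <stdint.h>

/* Print the NIC ports enabled by a DPDK-style port mask. */
static void print_enabled_ports(uint64_t portmask, unsigned int n_ports)
{
        unsigned int i;

        for (i = 0; i < n_ports; i++)
                if (portmask & (1ULL << i))
                        printf("NIC port %u enabled\n", i);
}

int main(void)
{
        print_enabled_ports(0x11, 8); /* prints ports 0 and 4 */
        return 0;
}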

Hope this answers your question.

Best regards,
Fan

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Grace Liu
> Sent: Monday, November 16, 2015 11:03 PM
> To: users@dpdk.org
> Subject: [dpdk-users] issues with running ip_pipeline sample
> application
>
> Hello DPDK community,
>
> I ran into errors when running the ip_pipeline and test_ip_pipeline sample
> applications. I'm using dpdk-2.1.0 on an Ubuntu 14.04 host with kernel 3.16.
> My host machine has two 10G ports, and they work for the l2fwd
> sample application but not for ip_pipeline, so I'm wondering whether there
> is any specific NIC requirement for running this app. The command I use
> is: sudo ./build/ip_pipeline -p 0x11.
>
> The error message is as follows:
>
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb9d8200000 (size = 0x200000)
> EAL: Requesting 256 pages of size 2MB from socket 0
> EAL: Requesting 768 pages of size 2MB from socket 1
> EAL: TSC frequency is ~2666753 KHz
> EAL: Master lcore 0 is ready (tid=74659900;cpuset=[0])
> EAL: lcore 1 is ready (tid=d81ff700;cpuset=[1])
> EAL: PCI device 0000:05:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:07:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d71ff000
> EAL:   PCI memory mapped at 0x7fba7465d000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> EAL: PCI device 0000:07:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d69ff000
> EAL:   PCI memory mapped at 0x7fba7461e000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_dev_tx_queue_setup(): Using simple tx path
> [APP] Initializing LINK1 (4) (1 RXQ, 1 TXQ) ...
> PANIC in app_init_link():
> LINK1 (4): init error (-22)
> 6: [./build/ip_pipeline() [0x42dad3]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fba73776ec5]]
> 4: [./build/ip_pipeline(main+0x55) [0x42c715]]
> 3: [./build/ip_pipeline(app_init+0x1530) [0x439f10]]
> 2: [./build/ip_pipeline(__rte_panic+0xc1) [0x427bdb]]
> 1: [./build/ip_pipeline(rte_dump_stack+0x18) [0x4abe98]]
>
> Thanks,
>
> Grace



* Re: [dpdk-users] issues with running ip_pipeline sample application
       [not found] ` <B27915DBBA3421428155699D51E4CFE2023BDA5D@IRSMSX103.ger.corp.intel.com>
@ 2015-11-17 16:40   ` Zhang, Roy Fan
       [not found]     ` <CAGZfBEj4j0Kx5co7La-RDGmdQEEWi_gCeLuWktJ3KHGTXz7bgw@mail.gmail.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Zhang, Roy Fan @ 2015-11-17 16:40 UTC (permalink / raw)
  To: users

Hello, 

Thanks for using ip_pipeline. 
If l2fwd works on your host, standalone ip_pipeline should also work.
The problem is likely the port mask 0x11 in your command.
The port mask works this way:

If your board has X NIC ports, the port mask is an X-bit unsigned integer.
A "1" in the Nth bit indicates that the Nth NIC port will be used by the ip_pipeline application.
E.g., 0x3 means you want to use the 1st and 2nd NIC ports, while 0x11 means the 1st and 5th.

To map a port's sequence number to the actual NIC, please use the following command: ./tools/dpdk_nic_bind.py --status

The sample output is shown as follows:
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 1st port, port mask 0x01
0000:02:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 2nd port, port mask 0x02
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 3rd port, port mask 0x04
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=ixgbe  --> 4th port, port mask 0x08

To use all 4 ports, include the -p 0x0f option.

Hope this answers your question.

Best regards,
Fan

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Grace Liu
> Sent: Monday, November 16, 2015 11:03 PM
> To: users@dpdk.org
> Subject: [dpdk-users] issues with running ip_pipeline sample 
> application
> 
> Hello DPDK community,
> 
> I ran into errors when running the ip_pipeline and test_ip_pipeline sample
> applications. I'm using dpdk-2.1.0 on an Ubuntu 14.04 host with kernel 3.16.
> My host machine has two 10G ports, and they work for the l2fwd
> sample application but not for ip_pipeline, so I'm wondering whether there
> is any specific NIC requirement for running this app. The command I use
> is: sudo ./build/ip_pipeline -p 0x11.
> 
> The error message is as follows:
> 
> EAL: Ask a virtual area of 0x200000 bytes
> EAL: Virtual area found at 0x7fb9d8200000 (size = 0x200000)
> EAL: Requesting 256 pages of size 2MB from socket 0
> EAL: Requesting 768 pages of size 2MB from socket 1
> EAL: TSC frequency is ~2666753 KHz
> EAL: Master lcore 0 is ready (tid=74659900;cpuset=[0])
> EAL: lcore 1 is ready (tid=d81ff700;cpuset=[1])
> EAL: PCI device 0000:05:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10c9 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:07:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d71ff000
> EAL:   PCI memory mapped at 0x7fba7465d000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> EAL: PCI device 0000:07:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
> EAL:   PCI memory mapped at 0x7fb9d69ff000
> EAL:   PCI memory mapped at 0x7fba7461e000
> PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
> PMD: eth_i40e_dev_init(): Failed to stop lldp
> PMD: i40e_pf_parameter_init(): Max supported VSIs:66
> PMD: i40e_pf_parameter_init(): PF queue pairs:64
> PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
> PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
> PMD: i40e_dev_tx_queue_setup(): Using simple tx path
> [APP] Initializing LINK1 (4) (1 RXQ, 1 TXQ) ...
> PANIC in app_init_link():
> LINK1 (4): init error (-22)
> 6: [./build/ip_pipeline() [0x42dad3]]
> 5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fba73776ec5]]
> 4: [./build/ip_pipeline(main+0x55) [0x42c715]]
> 3: [./build/ip_pipeline(app_init+0x1530) [0x439f10]]
> 2: [./build/ip_pipeline(__rte_panic+0xc1) [0x427bdb]]
> 1: [./build/ip_pipeline(rte_dump_stack+0x18) [0x4abe98]]
>
> Thanks,
> 
> Grace


* [dpdk-users] issues with running ip_pipeline sample application
@ 2015-11-16 23:02 Grace Liu
       [not found] ` <B27915DBBA3421428155699D51E4CFE2023BDA5D@IRSMSX103.ger.corp.intel.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Grace Liu @ 2015-11-16 23:02 UTC (permalink / raw)
  To: users

Hello DPDK community,

I ran into errors when running the ip_pipeline and test_ip_pipeline sample
applications. I'm using dpdk-2.1.0 on an Ubuntu 14.04 host with kernel 3.16. My
host machine has two 10G ports, and they work for the l2fwd sample
application but not for ip_pipeline, so I'm wondering whether there is any
specific NIC requirement for running this app. The command I use is: sudo
./build/ip_pipeline -p 0x11.

The error message is as follows:

EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fb9d8200000 (size = 0x200000)
EAL: Requesting 256 pages of size 2MB from socket 0
EAL: Requesting 768 pages of size 2MB from socket 1
EAL: TSC frequency is ~2666753 KHz
EAL: Master lcore 0 is ready (tid=74659900;cpuset=[0])
EAL: lcore 1 is ready (tid=d81ff700;cpuset=[1])
EAL: PCI device 0000:05:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10c9 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:05:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10c9 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:07:00.0 on NUMA socket -1
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fb9d71ff000
EAL:   PCI memory mapped at 0x7fba7465d000
PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
PMD: eth_i40e_dev_init(): Failed to stop lldp
PMD: i40e_pf_parameter_init(): Max supported VSIs:66
PMD: i40e_pf_parameter_init(): PF queue pairs:64
PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
EAL: PCI device 0000:07:00.1 on NUMA socket -1
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fb9d69ff000
EAL:   PCI memory mapped at 0x7fba7461e000
PMD: eth_i40e_dev_init(): FW 4.22 API 1.2 NVM 04.02.05 eetrack 8000143f
PMD: eth_i40e_dev_init(): Failed to stop lldp
PMD: i40e_pf_parameter_init(): Max supported VSIs:66
PMD: i40e_pf_parameter_init(): PF queue pairs:64
PMD: i40e_pf_parameter_init(): Max VMDQ VSI num:63
PMD: i40e_pf_parameter_init(): VMDQ queue pairs:4
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: i40e_dev_tx_queue_setup(): Using simple tx path
[APP] Initializing LINK1 (4) (1 RXQ, 1 TXQ) ...
PANIC in app_init_link():
LINK1 (4): init error (-22)
6: [./build/ip_pipeline() [0x42dad3]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5) [0x7fba73776ec5]]
4: [./build/ip_pipeline(main+0x55) [0x42c715]]
3: [./build/ip_pipeline(app_init+0x1530) [0x439f10]]
2: [./build/ip_pipeline(__rte_panic+0xc1) [0x427bdb]]
1: [./build/ip_pipeline(rte_dump_stack+0x18) [0x4abe98]]


Thanks,

Grace



Thread overview: 4+ messages
2015-11-17 11:35 [dpdk-users] issues with running ip_pipeline sample application Eli Britstein
  -- strict thread matches above, loose matches on Subject: below --
2015-11-16 23:02 Grace Liu
     [not found] ` <B27915DBBA3421428155699D51E4CFE2023BDA5D@IRSMSX103.ger.corp.intel.com>
2015-11-17 16:40   ` Zhang, Roy Fan
     [not found]     ` <CAGZfBEj4j0Kx5co7La-RDGmdQEEWi_gCeLuWktJ3KHGTXz7bgw@mail.gmail.com>
2015-11-17 21:06       ` Zhang, Roy Fan
