DPDK usage discussions
* [dpdk-users] problem running ip pipeline application
@ 2016-04-06  0:42 Talukdar, Biju
  2016-04-06  9:22 ` Singh, Jasvinder
  0 siblings, 1 reply; 7+ messages in thread
From: Talukdar, Biju @ 2016-04-06  0:42 UTC
  To: users; +Cc: Talukdar, Biju

Hi,


I am getting the following error when trying to run the DPDK ip_pipeline example. Could someone please tell me what went wrong?


system configuration:

dpdk 2.2.0

ubuntu 14.04


network driver:

uml@uml:~/dpdk-2.2.0$ ./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=igb_uio
0000:05:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=igb_uio *Active*
0000:05:00.1 'I350 Gigabit Network Connection' if=eth2 drv=igb unused=igb_uio

Other network devices
=====================
0000:03:00.0 'Device 15ad' unused=igb_uio
0000:03:00.1 'Device 15ad' unused=igb_uio
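
Note that only 0000:04:00.0 is bound to igb_uio here, so the application will see exactly one DPDK port (LINK0). If the second 82599 port were needed as well, it would presumably be handed over with the same tool, roughly like this (hypothetical commands for this setup, with eth3/0000:04:00.1 taken from the status output above):

# take the kernel interface down, then rebind the device from ixgbe to igb_uio
sudo ifconfig eth3 down
sudo ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:04:00.1
# re-check: both 82599 ports should now be listed under the DPDK-compatible driver
./tools/dpdk_nic_bind.py --status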


Here is the dump:



~/dpdk-2.2.0/examples/ip_pipeline/ip_pipeline/x86_64-native-linuxapp-gcc/app$ sudo -E ./ip_pipeline -f /home/uml/dpdk-2.2.0/examples/ip_pipeline/config/l2fwd.cfg -p 0x01
[sudo] password for uml:
[APP] Initializing CPU core map ...
[APP] CPU core mask = 0x0000000000000003
[APP] Initializing EAL ...
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Detected lcore 7 as core 7 on socket 0
EAL: Detected lcore 8 as core 0 on socket 0
EAL: Detected lcore 9 as core 1 on socket 0
EAL: Detected lcore 10 as core 2 on socket 0
EAL: Detected lcore 11 as core 3 on socket 0
EAL: Detected lcore 12 as core 4 on socket 0
EAL: Detected lcore 13 as core 5 on socket 0
EAL: Detected lcore 14 as core 6 on socket 0
EAL: Detected lcore 15 as core 7 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f24c0000000 (size = 0x40000000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~1999998 KHz
EAL: Master lcore 0 is ready (tid=bc483940;cpuset=[0])
EAL: lcore 1 is ready (tid=bad86700;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f2500000000
EAL:   PCI memory mapped at 0x7f2500080000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f24fb128340 sw_sc_ring=0x7f24fb127e00 hw_ring=0x7f24fb128880 dma_addr=0xffb128880
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f24fb115c40 hw_ring=0x7f24fb117c80 dma_addr=0xffb117c80
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
[APP] LINK0 (0) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
Cannot find LINK1 for RXQ1.0
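
The failure above appears to be a mismatch between the config file and the port mask: l2fwd.cfg references RXQ1.0 (an RX queue on LINK1), but -p 0x01 enables only one port, so only LINK0 is created. A minimal single-port variant of the pass-through pipeline, sketched here on the assumption that the stock l2fwd.cfg follows the usual MASTER + PASS-THROUGH layout (this is a hypothetical trimmed config, not the shipped file), would look like:

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
; hypothetical single-port loopback: receive on LINK0 queue 0, send back out on LINK0 queue 0
pktq_in = RXQ0.0
pktq_out = TXQ0.0

Alternatively, binding as many ports to igb_uio as the config actually references and widening the port mask to match (e.g. -p 0x3 if it needs two links) should let the application find LINK1.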


Thread overview: 7+ messages
2016-04-06  0:42 [dpdk-users] problem running ip pipeline application Talukdar, Biju
2016-04-06  9:22 ` Singh, Jasvinder
2016-04-06 21:21   ` Talukdar, Biju
2016-04-06 23:36     ` Talukdar, Biju
2016-04-08 20:23       ` Singh, Jasvinder
2016-04-13  0:04         ` Talukdar, Biju
2016-04-15  8:48           ` Singh, Jasvinder
