DPDK usage discussions
From: "Singh, Jasvinder" <jasvinder.singh@intel.com>
To: "Talukdar, Biju" <Biju_Talukdar@student.uml.edu>, users <users@dpdk.org>
Subject: Re: [dpdk-users] problem running ip pipeline application
Date: Wed, 6 Apr 2016 09:22:03 +0000	[thread overview]
Message-ID: <54CBAA185211B4429112C315DA58FF6DDED8FD@IRSMSX103.ger.corp.intel.com> (raw)
In-Reply-To: <BLUPR02MB43942A451FFCE749441481EC89F0@BLUPR02MB439.namprd02.prod.outlook.com>

Hi,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Talukdar, Biju
> Sent: Wednesday, April 6, 2016 1:43 AM
> To: users <users@dpdk.org>
> Cc: Talukdar, Biju <Biju_Talukdar@student.uml.edu>
> Subject: [dpdk-users] problem running ip pipeline application
> 
> Hi,
> 
> 
> I am getting the following error when trying to run the DPDK ip_pipeline
> example. Could someone please tell me what went wrong?
> 
> 
> system configuration:
> 
> dpdk 2.2.0
> 
> ubuntu 14.04
> 
> 
> network driver:
> 
> /dpdk-2.2.0$ ./tools/dpdk_nic_bind.py --status
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio
> unused=
> 
> Network devices using kernel driver
> ===================================
> 0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3
> drv=ixgbe unused=igb_uio
> 0000:05:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb
> unused=igb_uio *Active*
> 0000:05:00.1 'I350 Gigabit Network Connection' if=eth2 drv=igb
> unused=igb_uio
> 
> Other network devices
> =====================
> 0000:03:00.0 'Device 15ad' unused=igb_uio
> 0000:03:00.1 'Device 15ad' unused=igb_uio
> 
> 
> Here is the dump -------->
> 
> 
> 
> ~/dpdk-2.2.0/examples/ip_pipeline/ip_pipeline/x86_64-native-linuxapp-gcc/app$
> sudo -E ./ip_pipeline -f /home/uml/dpdk-2.2.0/examples/ip_pipeline/config/l2fwd.cfg -p 0x01
> [sudo] password for uml:
> [APP] Initializing CPU core map ...
> [APP] CPU core mask = 0x0000000000000003
> [APP] Initializing EAL ...
> EAL: Detected lcore 0 as core 0 on socket 0
> EAL: Detected lcore 1 as core 1 on socket 0
> EAL: Detected lcore 2 as core 2 on socket 0
> EAL: Detected lcore 3 as core 3 on socket 0
> EAL: Detected lcore 4 as core 4 on socket 0
> EAL: Detected lcore 5 as core 5 on socket 0
> EAL: Detected lcore 6 as core 6 on socket 0
> EAL: Detected lcore 7 as core 7 on socket 0
> EAL: Detected lcore 8 as core 0 on socket 0
> EAL: Detected lcore 9 as core 1 on socket 0
> EAL: Detected lcore 10 as core 2 on socket 0
> EAL: Detected lcore 11 as core 3 on socket 0
> EAL: Detected lcore 12 as core 4 on socket 0
> EAL: Detected lcore 13 as core 5 on socket 0
> EAL: Detected lcore 14 as core 6 on socket 0
> EAL: Detected lcore 15 as core 7 on socket 0
> EAL: Support maximum 128 logical core(s) by configuration.
> EAL: Detected 16 lcore(s)
> EAL: VFIO modules not all loaded, skip VFIO support...
> EAL: Setting up physically contiguous memory...
> EAL: Ask a virtual area of 0x40000000 bytes
> EAL: Virtual area found at 0x7f24c0000000 (size = 0x40000000)
> EAL: Requesting 1 pages of size 1024MB from socket 0
> EAL: TSC frequency is ~1999998 KHz
> EAL: Master lcore 0 is ready (tid=bc483940;cpuset=[0])
> EAL: lcore 1 is ready (tid=bad86700;cpuset=[1])
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:03:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:04:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   PCI memory mapped at 0x7f2500000000
> EAL:   PCI memory mapped at 0x7f2500080000
> PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
> PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
> EAL: PCI device 0000:04:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> EAL: PCI device 0000:05:00.1 on NUMA socket 0
> EAL:   probe driver: 8086:1521 rte_igb_pmd
> EAL:   Not managed by a supported kernel driver, skipped
> [APP] Initializing MEMPOOL0 ...
> [APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f24fb128340
> sw_sc_ring=0x7f24fb127e00 hw_ring=0x7f24fb128880
> dma_addr=0xffb128880
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f24fb115c40
> hw_ring=0x7f24fb117c80 dma_addr=0xffb117c80
> PMD: ixgbe_set_tx_function(): Using simple tx code path
> PMD: ixgbe_set_tx_function(): Vector tx enabled.
> PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst
> size no less than 4 (port=0).
> [APP] LINK0 (0) (10 Gbps) UP
> [APP] Initializing MSGQ-REQ-PIPELINE0 ...
> [APP] Initializing MSGQ-RSP-PIPELINE0 ...
> [APP] Initializing MSGQ-REQ-CORE-s0c0 ...
> [APP] Initializing MSGQ-RSP-CORE-s0c0 ...
> [APP] Initializing MSGQ-REQ-PIPELINE1 ...
> [APP] Initializing MSGQ-RSP-PIPELINE1 ...
> [APP] Initializing MSGQ-REQ-CORE-s0c1 ...
> [APP] Initializing MSGQ-RSP-CORE-s0c1 ...
> [APP] Initializing PIPELINE0 ...
> pipeline> [APP] Initializing PIPELINE1 ...
> Cannot find LINK1 for RXQ1.0


It looks like you have fewer ports bound to DPDK than l2fwd.cfg specifies. Running ip_pipeline with the default l2fwd.cfg requires 4 ports, so you can either edit the configuration file to remove the three extra port queues (RXQ1.0, RXQ2.0, RXQ3.0 and TXQ1.0, TXQ2.0, TXQ3.0), or bind three more ports to DPDK and use the default l2fwd.cfg as-is.

Jasvinder
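
[Editor's note] To illustrate the first option above, here is a sketch of how the pipeline sections of l2fwd.cfg could be trimmed to a single port. The section names and queue layout below are assumptions based on the stock MASTER + PASS-THROUGH configuration shipped with DPDK 2.2; adjust them to match your actual copy of the file:

```ini
; Hypothetical single-port variant of examples/ip_pipeline/config/l2fwd.cfg
; (assumes the stock MASTER + PASS-THROUGH layout of DPDK 2.2)

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
; keep only the queues for the one port that is bound (LINK0);
; the default file lists RXQ0.0-RXQ3.0 and TXQ0.0-TXQ3.0 here
pktq_in = RXQ0.0
pktq_out = TXQ0.0
```

With only LINK0 present, the port mask `-p 0x01` from the original command line is consistent; if you instead bind all four ports and keep the default file, the mask would need one bit per port, i.e. `-p 0xf`.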

  reply	other threads:[~2016-04-06  9:22 UTC|newest]

Thread overview: 7+ messages
2016-04-06  0:42 Talukdar, Biju
2016-04-06  9:22 ` Singh, Jasvinder [this message]
2016-04-06 21:21   ` Talukdar, Biju
2016-04-06 23:36     ` Talukdar, Biju
2016-04-08 20:23       ` Singh, Jasvinder
2016-04-13  0:04         ` Talukdar, Biju
2016-04-15  8:48           ` Singh, Jasvinder
