DPDK usage discussions
* [dpdk-users] problem running ip pipeline application
@ 2016-04-06  0:42 Talukdar, Biju
  2016-04-06  9:22 ` Singh, Jasvinder
  0 siblings, 1 reply; 7+ messages in thread
From: Talukdar, Biju @ 2016-04-06  0:42 UTC (permalink / raw)
  To: users; +Cc: Talukdar, Biju

Hi,


I am getting the following error when trying to run the DPDK ip_pipeline example. Could someone please tell me what went wrong?


system configuration:

DPDK 2.2.0

Ubuntu 14.04


network driver:

/dpdk-2.2.0$ ./tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=igb_uio unused=

Network devices using kernel driver
===================================
0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=igb_uio
0000:05:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=igb_uio *Active*
0000:05:00.1 'I350 Gigabit Network Connection' if=eth2 drv=igb unused=igb_uio

Other network devices
=====================
0000:03:00.0 'Device 15ad' unused=igb_uio
0000:03:00.1 'Device 15ad' unused=igb_uio
uml@uml:~/dpdk-2.2.0$


Here is the dump -------->



~/dpdk-2.2.0/examples/ip_pipeline/ip_pipeline/x86_64-native-linuxapp-gcc/app$ sudo -E ./ip_pipeline -f /home/uml/dpdk-2.2.0/examples/ip_pipeline/config/l2fwd.cfg -p 0x01
[sudo] password for uml:
[APP] Initializing CPU core map ...
[APP] CPU core mask = 0x0000000000000003
[APP] Initializing EAL ...
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 6 on socket 0
EAL: Detected lcore 7 as core 7 on socket 0
EAL: Detected lcore 8 as core 0 on socket 0
EAL: Detected lcore 9 as core 1 on socket 0
EAL: Detected lcore 10 as core 2 on socket 0
EAL: Detected lcore 11 as core 3 on socket 0
EAL: Detected lcore 12 as core 4 on socket 0
EAL: Detected lcore 13 as core 5 on socket 0
EAL: Detected lcore 14 as core 6 on socket 0
EAL: Detected lcore 15 as core 7 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: VFIO modules not all loaded, skip VFIO support...
EAL: Setting up physically contiguous memory...
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7f24c0000000 (size = 0x40000000)
EAL: Requesting 1 pages of size 1024MB from socket 0
EAL: TSC frequency is ~1999998 KHz
EAL: Master lcore 0 is ready (tid=bc483940;cpuset=[0])
EAL: lcore 1 is ready (tid=bad86700;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:03:00.1 on NUMA socket 0
EAL:   probe driver: 8086:15ad rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7f2500000000
EAL:   PCI memory mapped at 0x7f2500080000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 18, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:05:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:05:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 rte_igb_pmd
EAL:   Not managed by a supported kernel driver, skipped
[APP] Initializing MEMPOOL0 ...
[APP] Initializing LINK0 (0) (1 RXQ, 1 TXQ) ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f24fb128340 sw_sc_ring=0x7f24fb127e00 hw_ring=0x7f24fb128880 dma_addr=0xffb128880
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f24fb115c40 hw_ring=0x7f24fb117c80 dma_addr=0xffb117c80
PMD: ixgbe_set_tx_function(): Using simple tx code path
PMD: ixgbe_set_tx_function(): Vector tx enabled.
PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
[APP] LINK0 (0) (10 Gbps) UP
[APP] Initializing MSGQ-REQ-PIPELINE0 ...
[APP] Initializing MSGQ-RSP-PIPELINE0 ...
[APP] Initializing MSGQ-REQ-CORE-s0c0 ...
[APP] Initializing MSGQ-RSP-CORE-s0c0 ...
[APP] Initializing MSGQ-REQ-PIPELINE1 ...
[APP] Initializing MSGQ-RSP-PIPELINE1 ...
[APP] Initializing MSGQ-REQ-CORE-s0c1 ...
[APP] Initializing MSGQ-RSP-CORE-s0c1 ...
[APP] Initializing PIPELINE0 ...
pipeline> [APP] Initializing PIPELINE1 ...
Cannot find LINK1 for RXQ1.0

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-06  0:42 [dpdk-users] problem running ip pipeline application Talukdar, Biju
@ 2016-04-06  9:22 ` Singh, Jasvinder
  2016-04-06 21:21   ` Talukdar, Biju
  0 siblings, 1 reply; 7+ messages in thread
From: Singh, Jasvinder @ 2016-04-06  9:22 UTC (permalink / raw)
  To: Talukdar, Biju, users

Hi,

> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of Talukdar, Biju
> Sent: Wednesday, April 6, 2016 1:43 AM
> To: users <users@dpdk.org>
> Cc: Talukdar, Biju <Biju_Talukdar@student.uml.edu>
> Subject: [dpdk-users] problem running ip pipeline application
> 
> <snip>
> 
> [APP] Initializing PIPELINE0 ...
> pipeline> [APP] Initializing PIPELINE1 ...
> Cannot find LINK1 for RXQ1.0


It seems you have fewer ports bound than are specified in l2fwd.cfg. To run ip_pipeline with the l2fwd.cfg configuration file, 4 ports are required. So you can either change the configuration file and remove the 3 extra ports (RXQ1.0, RXQ2.0, RXQ3.0 and TXQ1.0, TXQ2.0, TXQ3.0), or bind 3 more ports to DPDK and use the default l2fwd.cfg straightaway.
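[Editorial note] A single-port variant of the configuration would look roughly like the sketch below. This is assembled from the PASS-THROUGH excerpts quoted elsewhere in this thread, not the literal file shipped with DPDK 2.2.0; section names and keys should be checked against the real l2fwd.cfg.

```ini
[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0
pktq_out = TXQ0.0
```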

Jasvinder

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-06  9:22 ` Singh, Jasvinder
@ 2016-04-06 21:21   ` Talukdar, Biju
  2016-04-06 23:36     ` Talukdar, Biju
  0 siblings, 1 reply; 7+ messages in thread
From: Talukdar, Biju @ 2016-04-06 21:21 UTC (permalink / raw)
  To: Singh, Jasvinder, users


Hi Jasvinder,
Thank you very much. That worked fine. I am trying to change the source code of the pass-through pipeline.
I believe the statement
*port_out = port_in / p->n_ports_in;
in the function pipeline_passthrough_track(...) is what passes packets from the input port to the output port. My goal is to extract each packet's source IP and destination IP, apply my hash function, and save the hash value into an array. I did the following:

l2fwd_pktmbuf_pool = rte_mempool_create("mbuf_pool", NB_MBUF, MBUF_SIZE, 32,
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL,
        rte_pktmbuf_init, NULL,
        rte_socket_id(), 0);

if (l2fwd_pktmbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");

total_nb_rx = rte_eth_rx_burst((uint8_t) port_in, 0, pkts_burst,
        MAX_PKT_BURST);

before the statement
 *port_out = port_in / p->n_ports_in;

I know that this is wrong. The program is not even reaching that point: I added a printf and never saw its output.

Now my question is: could you please give me some pointers on where I should copy the packets (or just extract the src/dst IPs) before forwarding? I am really getting lost in the manual. Please help!


Thanks a ton 
________________________________________
From: Singh, Jasvinder <jasvinder.singh@intel.com>
Sent: Wednesday, April 6, 2016 5:22 AM
To: Talukdar, Biju; users
Subject: RE: problem running ip pipeline application

Hi,

> <snip>
> 
> pipeline> [APP] Initializing PIPELINE1 ...
> Cannot find LINK1 for RXQ1.0


It seems you have fewer ports bound than are specified in l2fwd.cfg. To run ip_pipeline with the l2fwd.cfg configuration file, 4 ports are required. So you can either change the configuration file and remove the 3 extra ports (RXQ1.0, RXQ2.0, RXQ3.0 and TXQ1.0, TXQ2.0, TXQ3.0), or bind 3 more ports to DPDK and use the default l2fwd.cfg straightaway.

Jasvinder

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-06 21:21   ` Talukdar, Biju
@ 2016-04-06 23:36     ` Talukdar, Biju
  2016-04-08 20:23       ` Singh, Jasvinder
  0 siblings, 1 reply; 7+ messages in thread
From: Talukdar, Biju @ 2016-04-06 23:36 UTC (permalink / raw)
  To: Singh, Jasvinder, users


Hi Jasvinder,
I have seen one of your archived mails, where you described how one can extract fields by modifying the configuration file. Here is a small excerpt:
 
[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
dma_size = 16
dma_dst_offset = 64
dma_src_offset = 150
dma_src_mask = 00FF0000FFFFFFFFFFFFFFFFFFFFFFFF
dma_hash_offset = 80

I am not able to understand what these fields mean. I want to extract the source IP and destination IP. What would my configuration file look like, and where will the fields be extracted? I need them as input to my function, where I will use the src/dst IPs to calculate a hash value and update my array.
Also, how do I print the values of my variables? Is there a log file where I can print them? How do I debug my program? Feeling helpless.
Regarding the CLI, I could not find any commands; I just found one, "p 1 ping". Could you please give some information?

please help !!

Regards
Biju
________________________________________
From: users <users-bounces@dpdk.org> on behalf of Talukdar, Biju <Biju_Talukdar@student.uml.edu>
Sent: Wednesday, April 6, 2016 5:21 PM
To: Singh, Jasvinder; users
Subject: Re: [dpdk-users] problem running ip pipeline application

<snip>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-06 23:36     ` Talukdar, Biju
@ 2016-04-08 20:23       ` Singh, Jasvinder
  2016-04-13  0:04         ` Talukdar, Biju
  0 siblings, 1 reply; 7+ messages in thread
From: Singh, Jasvinder @ 2016-04-08 20:23 UTC (permalink / raw)
  To: Talukdar, Biju, users

HI Biju,

> -----Original Message-----
> From: Talukdar, Biju [mailto:Biju_Talukdar@student.uml.edu]
> Sent: Thursday, April 7, 2016 12:36 AM
> To: Singh, Jasvinder <jasvinder.singh@intel.com>; users <users@dpdk.org>
> Subject: Re: problem running ip pipeline application
> 
> 
> Hi Jasvinder,
> I have seen one of your archive mail, where you described how one can
> extract fields by modifying the configuration file. here is a small excerpt:
> 
> [PIPELINE1]
> type = PASS-THROUGH
> core = 1
> pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
> pktq_out = TXQ0.0 TXQ1.0 TXQ2.0 TXQ3.0
> dma_size = 16
> dma_dst_offset = 64
> dma_src_offset = 150
> dma_src_mask = 00FF0000FFFFFFFFFFFFFFFFFFFFFFFF
> dma_hash_offset = 80
> 
> I am not able to understand what these fields mean. I want to extract the
> source IP and destination IP. What would my configuration file look like,
> and where will the fields be extracted? I need them as input to my
> function, where I will use the src/dst IPs to calculate a hash value and
> update my array.

<snip>
	/*Input ports*/
	for (i = 0; i < p->n_ports_in; i++) {
		struct rte_pipeline_port_in_params port_params = {
			.ops = pipeline_port_in_params_get_ops(
				&params->port_in[i]),
			.arg_create = pipeline_port_in_params_convert(
				&params->port_in[i]),
			.f_action = get_port_in_ah(p_pt),
			.arg_ah = p_pt,
			.burst_size = params->port_in[i].burst_size,
		};

In the static void *pipeline_passthrough_init() function, the actions applied to packets received at the input ports are defined during input-port initialization. These actions are implemented in the pkt_work (or pkt4_work) routines, where the mask (defined in the configuration file) is applied over the incoming packet header fields to extract the key. For example, let's assume we have the following configuration file:

[PIPELINE0]
type = MASTER
core = 0

[PIPELINE1]
type = PASS-THROUGH
core = 1
pktq_in = RXQ0.0 RXQ1.0 RXQ2.0 RXQ3.0
pktq_out = SWQ0 SWQ1 SWQ2 SWQ3
dma_size = 16
dma_dst_offset = 128; mbuf (128) + 64
dma_src_offset = 278; mbuf (128) + headroom (128) + ethernet header (14) + ttl offset within ip header (8) = 278
dma_src_mask = 00FF0000FFFFFFFFFFFFFFFFFFFFFFFF; ipv4 5-tuple
dma_hash_offset = 144; dma_dst_offset + dma_size

The above mask (16 bytes) is applied over the IPv4 packet header fields starting at dma_src_offset, extracting the 5-tuple (src/dst IP, src/dst TCP port, and protocol).
The extracted fields are saved at dma_dst_offset. An application developer can define his own mask at any offset (dma_src_offset). In the pkt_work/pkt4_work routines, hash
values are computed over the extracted 5-tuple and stored at dma_hash_offset for consumption by any other pipeline in the chain.

> Also, how do I print the values of my variables? Is there a log file where I
> can print values? How do I debug my program? I am feeling helpless.

You can introduce printf statements in the pkt_work/pkt4_work routines, or use gdb to inspect the computed parameter values.

> Regarding the CLI, I also could not find any commands. I just found one,
> "p 1 ping". Could you please give some information?
> 
Regarding CLI commands, the passthrough pipeline accepts p <pipeline id> ping, link up/down related commands, etc. Please have a look at examples/ip_pipeline/pipeline/pipeline_common_fe.c(h) for
the commands that the passthrough pipeline can accept.


Jasvinder

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-08 20:23       ` Singh, Jasvinder
@ 2016-04-13  0:04         ` Talukdar, Biju
  2016-04-15  8:48           ` Singh, Jasvinder
  0 siblings, 1 reply; 7+ messages in thread
From: Talukdar, Biju @ 2016-04-13  0:04 UTC (permalink / raw)
  To: Singh, Jasvinder, users

Hi Jasvinder,
Thanks a lot for your guidance. 

I could see the prints showing that pipeline initialization is complete. I put prints in the function pipeline_passthrough_init(), and I could see them. But I get no prints from the function pkt_work(), which processes all the packets. The PKT_WORK macro uses the pkt_work() function, and PKT_WORK in turn is used by the port_in_ah macro. I am showing the code snippets from the passthrough pipeline.

#define port_in_ah(dma_size, hash_enabled)            \
PKT_WORK(dma_size, hash_enabled)                \
PKT4_WORK(dma_size, hash_enabled)                \
PIPELINE_PORT_IN_AH(port_in_ah_size##dma_size##_hash##hash_enabled,\
    pkt_work_size##dma_size##_hash##hash_enabled,        \
    pkt4_work_size##dma_size##_hash##hash_enabled) 


Now, port_in_ah is called as shown below:

port_in_ah(8, 0)
port_in_ah(8, 1)
port_in_ah(16, 0)
port_in_ah(16, 1)
port_in_ah(24, 0)
port_in_ah(24, 1)
port_in_ah(32, 0)
port_in_ah(32, 1)
port_in_ah(40, 0)
port_in_ah(40, 1)
port_in_ah(48, 0)
port_in_ah(48, 1)
port_in_ah(56, 0)
port_in_ah(56, 1)
port_in_ah(64, 0)
port_in_ah(64, 1)

So eventually pkt_work() should get called through the above instantiations. But why am I not getting any prints from pkt_work()? I have no clue why I am not getting any test prints, or why the program execution is not reaching those printfs.
Do you think that I am not receiving any packets at all, and that is why pkt_work() is never called? In that case, where should I look? Remember, I am just running the sample application; there are no code changes apart from my prints. Logically it should work.


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [dpdk-users] problem running ip pipeline application
  2016-04-13  0:04         ` Talukdar, Biju
@ 2016-04-15  8:48           ` Singh, Jasvinder
  0 siblings, 0 replies; 7+ messages in thread
From: Singh, Jasvinder @ 2016-04-15  8:48 UTC (permalink / raw)
  To: Talukdar, Biju, users

Hi Biju,

The pkt_work() routine is invoked to process one packet at a time; when packets arrive continuously, pkt4_work() comes into play instead, processing packets in bulk (4 packets at a time) for performance reasons. Therefore, try sending a single packet to the pipeline and see whether the printf() introduced in pkt_work() fires, or insert a printf() in pkt4_work() as well.

Thanks,
Jasvinder

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2016-04-15  8:48 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-06  0:42 [dpdk-users] problem running ip pipeline application Talukdar, Biju
2016-04-06  9:22 ` Singh, Jasvinder
2016-04-06 21:21   ` Talukdar, Biju
2016-04-06 23:36     ` Talukdar, Biju
2016-04-08 20:23       ` Singh, Jasvinder
2016-04-13  0:04         ` Talukdar, Biju
2016-04-15  8:48           ` Singh, Jasvinder

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).