DPDK usage discussions
* [dpdk-users] Capture traffic with DPDK-dump
@ 2016-11-07 17:50 jose suarez
  2016-11-07 22:42 ` Wiles, Keith
  2016-11-10 10:56 ` Pattan, Reshma
  0 siblings, 2 replies; 6+ messages in thread
From: jose suarez @ 2016-11-07 17:50 UTC (permalink / raw)
  To: users

Hello everybody!

I am new to DPDK. I'm simply trying to capture traffic from a 10G 
physical NIC. I installed DPDK from source and enabled the following 
options in the config/common_base file:

CONFIG_RTE_LIBRTE_PMD_PCAP=y

CONFIG_RTE_LIBRTE_PDUMP=y

CONFIG_RTE_PORT_PCAP=y

Then I built the distribution using the dpdk-setup.sh script. I also 
added hugepages and checked that they are configured successfully:

AnonHugePages:      4096 kB
HugePages_Total:    2048
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
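
For reference, this is roughly how the pages can be reserved and mounted 
by hand (just a sketch; the setup script can also do this for you, and 
the mount point may differ on your system):

# reserve 2048 x 2MB pages and mount hugetlbfs (adjust to your system)
echo 2048 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

sudo mkdir -p /mnt/huge

sudo mount -t hugetlbfs nodev /mnt/huge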

To capture the traffic I guess I can use the dpdk-pdump application, but 
I don't know how to use it. First of all, does it work if I bind the 
interfaces using the uio_pci_generic driver? I guess that if I capture 
the traffic using the Linux kernel driver (ixgbe) I will lose a lot of 
packets.

To bind the NIC I write this command:

sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0


When I check the interfaces I can see that the NIC was bound 
successfully. I also checked that my NIC (Intel 82599) is compatible 
with DPDK.
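
For reference, this is the check I used (the script's status option):

sudo ./tools/dpdk-devbind.py --status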

Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=uio_pci_generic unused=ixgbe,vfio-pci


To capture packets, I read on the mailing list that it is necessary to 
run the testpmd application and then dpdk-pdump using different cores. 
So I used the following commands:

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i

sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump 
'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Did I miss any step? Is it necessary to execute any more commands when 
running the testpmd app in interactive mode?


When I execute the pdump application I get the following error:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in 
the kernel.
EAL:    This may cause issues with mapping memory into secondary processes
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
PMD: Initializing pmd_pcap for eth_pcap_rx_0
PMD: Creating pcap-backed ethdev on numa socket 0
Port 2 MAC: 00 00 00 01 02 03
PDUMP: client request for pdump enable/disable failed
PDUMP: client request for pdump enable/disable failed
EAL: Error - exiting with code: 1
   Cause: Unknown error -22


In the testpmd app I get the following info:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI device 0000:01:00.1 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: 00:E0:ED:FF:60:5C
Configuring Port 1 (socket 0)
Port 1: 00:E0:ED:FF:60:5D
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> PDUMP: failed to get potid for device id=01:00.0
PDUMP: failed to get potid for device id=01:00.0


Could you please help me?

Thank you!


* Re: [dpdk-users] Capture traffic with DPDK-dump
  2016-11-07 17:50 [dpdk-users] Capture traffic with DPDK-dump jose suarez
@ 2016-11-07 22:42 ` Wiles, Keith
  2016-11-10 10:56 ` Pattan, Reshma
  1 sibling, 0 replies; 6+ messages in thread
From: Wiles, Keith @ 2016-11-07 22:42 UTC (permalink / raw)
  To: jose suarez; +Cc: users


> On Nov 7, 2016, at 9:50 AM, jose suarez <jsuarezv@ac.upc.edu> wrote:
> 
> Hello everybody!
> 
> I am new to DPDK. I'm simply trying to capture traffic from a 10G physical NIC. I installed DPDK from source and enabled the following options in the config/common_base file:
> 
> CONFIG_RTE_LIBRTE_PMD_PCAP=y
> 
> CONFIG_RTE_LIBRTE_PDUMP=y
> 
> CONFIG_RTE_PORT_PCAP=y
> 
> Then I built the distribution using the dpdk-setup.sh script. I also added hugepages and checked that they are configured successfully:
> 
> AnonHugePages:      4096 kB
> HugePages_Total:    2048
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> 
> To capture the traffic I guess I can use the dpdk-pdump application, but I don't know how to use it. First of all, does it work if I bind the interfaces using the uio_pci_generic driver? I guess that if I capture the traffic using the Linux kernel driver (ixgbe) I will lose a lot of packets.
> 
> To bind the NIC I write this command:
> 
> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
> 
> 
> When I check the interfaces I can see that the NIC was bound successfully. I also checked that my NIC (Intel 82599) is compatible with DPDK.
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=uio_pci_generic unused=ixgbe,vfio-pci
> 
> 
> To capture packets, I read in the mailing list that it is necessary to run the testpmd application and then dpdk-pdump using different cores. So I used the following commands:
> 
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
> 
> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

I did notice that you used lcores 1-2 on testpmd and then used lcores 0-7 (0xff) on dpdk-pdump. Normally you need to use something like 0xf8 on pdump so that you do not end up with two threads on a single core.
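
For example, with 8 lcores, something like this keeps the two processes 
on separate cores (only the core masks change; everything else is as in 
your commands):

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i

sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 2 -- --pdump 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'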

Not sure this will fix your problem.

> 
> Did I miss any step? Is it necessary to execute any more commands when running the testpmd app in interactive mode?
> 
> 
> When I execute the pdump application I get the following error:
> 
> EAL: Detected 8 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: WARNING: Address Space Layout Randomization (ASLR) is enabled in the kernel.
> EAL:    This may cause issues with mapping memory into secondary processes
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI device 0000:01:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> PMD: Initializing pmd_pcap for eth_pcap_rx_0
> PMD: Creating pcap-backed ethdev on numa socket 0
> Port 2 MAC: 00 00 00 01 02 03
> PDUMP: client request for pdump enable/disable failed
> PDUMP: client request for pdump enable/disable failed
> EAL: Error - exiting with code: 1
>  Cause: Unknown error -22
> 
> 
> In the testpmd app I get the following info:
> 
> EAL: Detected 8 lcore(s)
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> EAL: PCI device 0000:01:00.1 on NUMA socket -1
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
> Configuring Port 0 (socket 0)
> Port 0: 00:E0:ED:FF:60:5C
> Configuring Port 1 (socket 0)
> Port 1: 00:E0:ED:FF:60:5D
> Checking link statuses...
> Port 0 Link Up - speed 10000 Mbps - full-duplex
> Port 1 Link Up - speed 10000 Mbps - full-duplex
> Done
> testpmd> PDUMP: failed to get potid for device id=01:00.0
> PDUMP: failed to get potid for device id=01:00.0
> 
> 
> Could you please help me?
> 
> Thank you!
> 
> 
> 

Regards,
Keith



* Re: [dpdk-users] Capture traffic with DPDK-dump
  2016-11-07 17:50 [dpdk-users] Capture traffic with DPDK-dump jose suarez
  2016-11-07 22:42 ` Wiles, Keith
@ 2016-11-10 10:56 ` Pattan, Reshma
  2016-11-10 12:32   ` jose suarez
  1 sibling, 1 reply; 6+ messages in thread
From: Pattan, Reshma @ 2016-11-10 10:56 UTC (permalink / raw)
  To: jose suarez; +Cc: users

Hi,

Comments below.

Thanks,
Reshma


> -----Original Message-----
> From: users [mailto:users-bounces@dpdk.org] On Behalf Of jose suarez
> Sent: Monday, November 7, 2016 5:50 PM
> To: users@dpdk.org
> Subject: [dpdk-users] Capture traffic with DPDK-dump
> 
> Hello everybody!
> 
> I am new to DPDK. I'm simply trying to capture traffic from a 10G physical
> NIC. I installed DPDK from source and enabled the following
> options in the config/common_base file:
> 
> CONFIG_RTE_LIBRTE_PMD_PCAP=y
> 
> CONFIG_RTE_LIBRTE_PDUMP=y
> 
> CONFIG_RTE_PORT_PCAP=y
> 
> Then I built the distribution using the dpdk-setup.sh script. I also added
> hugepages and checked that they are configured successfully:
> 
> AnonHugePages:      4096 kB
> HugePages_Total:    2048
> HugePages_Free:        0
> HugePages_Rsvd:        0
> HugePages_Surp:        0
> Hugepagesize:       2048 kB
> 
> To capture the traffic I guess I can use the dpdk-pdump application, but I
> don't know how to use it. First of all, does it work if I bind the interfaces
> using the uio_pci_generic driver? I guess that if I capture the traffic using the
> Linux kernel driver (ixgbe) I will lose a lot of packets.
> 
> To bind the NIC I write this command:
> 
> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
> 
> 
> When I check the interfaces I can see that the NIC was bound successfully.
> I also checked that my NIC (Intel 82599) is compatible with
> DPDK.
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=uio_pci_generic unused=ixgbe,vfio-pci
> 
> 
> To capture packets, I read in the mailing list that it is necessary to run the
> testpmd application and then dpdk-pdump using different cores.
> So I used the following commands:
> 
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
> 
> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump
> 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'

1) Please pass the full PCI id, i.e. "0000:01:00.0", in the command instead of "01:00.0".
In the latest DPDK 16.11 code the full PCI id is used by the EAL layer to identify the device.

2) Also note that you should not use the same core mask for both primary and secondary processes in a multi-process context.
e.g. -c 0x6 for testpmd and -c 0x2 for dpdk-pdump can be used.
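
Putting 1) and 2) together, a rough sketch of the two commands (these are
just the example masks; per Keith's note you may also prefer masks that do
not overlap at all, e.g. -c 0xf8 for dpdk-pdump):

sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i

sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0x2 -n 2 -- --pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'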

Please let me know how you are proceeding.


Thanks,
Reshma


* Re: [dpdk-users] Capture traffic with DPDK-dump
  2016-11-10 10:56 ` Pattan, Reshma
@ 2016-11-10 12:32   ` jose suarez
  2016-11-10 13:20     ` Pattan, Reshma
  0 siblings, 1 reply; 6+ messages in thread
From: jose suarez @ 2016-11-10 12:32 UTC (permalink / raw)
  To: Pattan, Reshma; +Cc: users

Hi,

Thank you very much for your response. I followed your comment about the 
full PCI id and now the PDUMP application is working fine :). It creates 
the pcap file.

My problem now is that the testpmd app doesn't receive any packets. 
Below are the commands that I use to run both apps:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type 
primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i 
--port-topology=chained

# sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 
--proc-type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- 
--pdump 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'

Before I execute these commands, I ensure that all the hugepages are 
free (sudo rm -R /dev/hugepages/*)

In this way I split up the hugepages (I have 2048 in total) between both 
processes, as Keith Wiles advised me. Also, the core masks used (0x06 
and 0xf8) do not overlap.
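
For reference, the hugepage counters can be re-checked at any time with 
something like:

grep Huge /proc/meminfo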

My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to 
a 10G link that receives traffic from a mirrored port. These are the 
network device settings related to this NIC:

Network devices using DPDK-compatible driver
============================================
0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' 
drv=igb_uio unused=ixgbe


When I run the testpmd app and check the port stats, I get the following 
output:

#sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 
--proc-type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- 
-i --port-topology=chained
EAL: Detected 8 lcore(s)
EAL: Auto-detected process type: PRIMARY
EAL: Probing VFIO support...
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> show port stats 0

   ######################## NIC statistics for port 0 
########################
   RX-packets: 0          RX-missed: 0          RX-bytes:  0
   RX-errors: 0
   RX-nombuf:  0
   TX-packets: 0          TX-errors: 0          TX-bytes:  0

   Throughput (since last show)
   Rx-pps:            0
   Tx-pps:            0
############################################################################

It doesn't receive any packets. Did I miss any step in the 
configuration of the testpmd app?


Thanks!

José.


On 10/11/16 at 11:56, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>
>> -----Original Message-----
>> From: users [mailto:users-bounces@dpdk.org] On Behalf Of jose suarez
>> Sent: Monday, November 7, 2016 5:50 PM
>> To: users@dpdk.org
>> Subject: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hello everybody!
>>
>> I am new in DPDK. I'm trying simply to capture traffic from a 10G physical
>> NIC. I installed the DPDK from source files and activated the following
>> modules in common-base file:
>>
>> CONFIG_RTE_LIBRTE_PMD_PCAP=y
>>
>> CONFIG_RTE_LIBRTE_PDUMP=y
>>
>> CONFIG_RTE_PORT_PCAP=y
>>
>> Then I built the distribution using the dpdk-setup.h script. Also I add
>> hugepages and check they are configured successfully:
>>
>> AnonHugePages:      4096 kB
>> HugePages_Total:    2048
>> HugePages_Free:        0
>> HugePages_Rsvd:        0
>> HugePages_Surp:        0
>> Hugepagesize:       2048 kB
>>
>> To capture the traffic I guess I can use the dpdk-pdump application, but I
>> don't know how to use it. First of all, does it work if I bind the interfaces
>> using the uio_pci_generic drive? I guess that if I capture the traffic using the
>> linux kernel driver (ixgbe) I will loose a lot of packets.
>>
>> To bind the NIC I write this command:
>>
>> sudo ./tools/dpdk-devbind.py --bind=uio_pci_generic eth0
>>
>>
>> When I check the interfaces I can see that the NIC was binded successfully.
>> Also I checked that mi NIC is compatible with DPDK (Intel
>> 8599)
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=uio_pci_generic unused=ixgbe,vfio-pci
>>
>>
>> To capture packets, I read in the mailing list that it is necessary to run the
>> testpmd application and then dpdk-pdump using different cores.
>> So I used the following commands:
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x6 -n 4 -- -i
>>
>> sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xff -n 2 -- --pdump
>> 'device_id=01:00.0,queue=*,rx-dev=/tmp/file.pcap'
> 1)Please pass on the full PCI id, i.e "0000:01:00.0" in the command instead of "01:00.0".
> In latest DPDK 16.11 code  full PCI id is used by eal layer to identify the device.
>
> 2)Also note that you should not use same core mask for both primary and secondary processes in multi process context.
> ex: -c0x6 for testpmd and -c0x2 for dpdk-pdump can be used.
>
> Please let me know how you are proceeding.
>
>
> Thanks,
> Reshma


* Re: [dpdk-users] Capture traffic with DPDK-dump
  2016-11-10 12:32   ` jose suarez
@ 2016-11-10 13:20     ` Pattan, Reshma
  2016-11-10 17:44       ` jose suarez
  0 siblings, 1 reply; 6+ messages in thread
From: Pattan, Reshma @ 2016-11-10 13:20 UTC (permalink / raw)
  To: jose suarez; +Cc: users

Hi,

Comments below.

Thanks,
Reshma

> -----Original Message-----
> From: jose suarez [mailto:jsuarezv@ac.upc.edu]
> Sent: Thursday, November 10, 2016 12:32 PM
> To: Pattan, Reshma <reshma.pattan@intel.com>
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Capture traffic with DPDK-dump
> 
> Hi,
> 
> Thank you very much for your response. I followed your comment about the
> full PCI id and now the PDUMP application is working fine :). It creates the
> pcap file.
> 
> My problem now is that I noticed that in the testpmd app I don't receive any
> packets. I write below the commands that I use to execute both apps:
> 
> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type
> primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
> topology=chained
> 
> # sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 --proc-
> type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- --pdump
> 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'
> 
> Before I execute these commands, I ensure that all the hugepages are free
> (sudo rm -R /dev/hugepages/*)
> 
> In this way I split up the hugepages (I have 2048 in total) between both
> processes, as Keith Wiles advised me. Also I don't overlap any core with the
> masks used (0x06 and 0xf8)
> 
> My NIC (Intel 82599ES), whose PCI id is 0000:01:00.0, is connected to a 10G
> link that receives traffic from a mirrored port. These are the network device
> settings related to this NIC:
> 
> Network devices using DPDK-compatible driver
> ============================================
> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
> drv=igb_uio unused=ixgbe
> 
> 
> When I run the testpmd app and check the port stats, I get the following
> output:
> 
> #sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-
> type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
> topology=chained
> EAL: Detected 8 lcore(s)
> EAL: Auto-detected process type: PRIMARY
> EAL: Probing VFIO support...
> PMD: bnxt_rte_pmd_init() called for (null)
> EAL: PCI device 0000:01:00.0 on NUMA socket 0
> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> Interactive-mode selected
> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176,
> socket=0
> Configuring Port 0 (socket 0)
> Port 0: XX:XX:XX:XX:XX:XX
> Checking link statuses...
> Port 0 Link Up - speed 10000 Mbps - full-duplex Done
> testpmd> show port stats 0


After testpmd comes up to its prompt, you need to execute the "start" command. This will start traffic forwarding.
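
For example:

testpmd> start
testpmd> show port stats 0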

>    ######################## NIC statistics for port 0
> ########################
>    RX-packets: 0          RX-missed: 0          RX-bytes:  0
>    RX-errors: 0
>    RX-nombuf:  0
>    TX-packets: 0          TX-errors: 0          TX-bytes:  0
> 
>    Throughput (since last show)
>    Rx-pps:            0
>    Tx-pps:            0
> ################################################################
> ############
> 
> It doesn't receive any packets. Did I miss any step in the configuration of
> the testpmd app?

I am wondering about that; you should at least see all packets hitting Rx-missed.
So I suggest you just stop everything and unbind the port back to Linux. Then check whether the port is receiving packets from the other end using tcpdump.
If not, then you may need to debug that issue first.
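
Something like this should do it (assuming the port comes back up as eth0 
once it is rebound to ixgbe):

sudo ./tools/dpdk-devbind.py --bind=ixgbe 0000:01:00.0

# eth0 is assumed; use whatever name the kernel gives the port
sudo tcpdump -i eth0 -c 20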

Once everything is fine, bind the port back to DPDK, run testpmd and see whether you receive packets or not.
If you are seeing packets against "Rx-missed", then run the start command at the testpmd prompt to start packet forwarding. After that you will be able to
see the packets in the capture file.
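
For instance, to bind it back to DPDK (igb_uio in your case):

sudo ./tools/dpdk-devbind.py --bind=igb_uio 0000:01:00.0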

Thanks,
Reshma



* Re: [dpdk-users] Capture traffic with DPDK-dump
  2016-11-10 13:20     ` Pattan, Reshma
@ 2016-11-10 17:44       ` jose suarez
  0 siblings, 0 replies; 6+ messages in thread
From: jose suarez @ 2016-11-10 17:44 UTC (permalink / raw)
  To: Pattan, Reshma; +Cc: users

Hi,

I ran a test using the Linux kernel to capture packets with the testpmd 
app. For this I used the following command:

# sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c '0xfc' -n 4 --vdev 
'eth_pcap0,rx_iface=eth0,tx_pcap=/tmp/file.pcap' -- --port-topology=chained


I show below the output in this case:

EAL: Detected 8 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
PMD: Initializing pmd_pcap for eth_pcap0
PMD: Creating pcap-backed ethdev on numa socket 0
PMD: bnxt_rte_pmd_init() called for (null)
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=187456, size=2176, 
socket=0
Configuring Port 0 (socket 0)
Port 0: XX:XX:XX:XX:XX:XX
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support 
disabled, MP over anonymous pages disabled
Logical Core 3 (socket 0) forwards packets on 1 streams:
   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

   io packet forwarding - CRC stripping disabled - packets/burst=32
   nb forwarding cores=1 - nb forwarding ports=1
   RX queues=1 - RX desc=128 - RX free threshold=0
   RX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX queues=1 - TX desc=512 - TX free threshold=0
   TX threshold registers: pthresh=0 hthresh=0 wthresh=0
   TX RS bit threshold=0 - TXQ flags=0x0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

   ---------------------- Forward statistics for port 0 
----------------------
   RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
   TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270
----------------------------------------------------------------------------

   +++++++++++++++ Accumulated forward statistics for all 
ports+++++++++++++++
   RX-packets: 1591270        RX-dropped: 0             RX-total: 1591270
   TX-packets: 1591270        TX-dropped: 0             TX-total: 1591270
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Shutting down port 0...
Stopping ports...
Done
Closing ports...
Done

Bye...

Once I interrupt the app I can see that packets were received and the 
pcap file was generated. So the traffic is arriving correctly. It seems 
that the problem happens when I bind the NIC to the uio_pci_generic 
driver.


Thanks a lot!

José.


On 10/11/16 at 14:20, Pattan, Reshma wrote:
> Hi,
>
> Comments below.
>
> Thanks,
> Reshma
>
>> -----Original Message-----
>> From: jose suarez [mailto:jsuarezv@ac.upc.edu]
>> Sent: Thursday, November 10, 2016 12:32 PM
>> To: Pattan, Reshma <reshma.pattan@intel.com>
>> Cc: users@dpdk.org
>> Subject: Re: [dpdk-users] Capture traffic with DPDK-dump
>>
>> Hi,
>>
>> Thank you very much for your response. I followed your comment about the
>> full PCI id and now the PDUMP application is working fine :). It creates the
>> pcap file.
>>
>> My problem now is that I noticed that in the testpmd app I don't receive any
>> packets. I write below the commands that I use to execute both apps:
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-type
>> primary --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
>> topology=chained
>>
>> # sudo ./x86_64-native-linuxapp-gcc/app/dpdk-pdump -c 0xf8 -n 4 --proc-
>> type auto --socket-mem 1000 --file-prefix pg2 -w 0000:01:00.0 -- --pdump
>> 'device_id=0000:01:00.0,queue=*,rx-dev=/tmp/file.pcap'
>>
>> Before I execute these commands, I ensure that all the hugepages are free
>> (sudo rm -R /dev/hugepages/*)
>>
>> In this way I split up the hugepages (I have 2048 in total) between both
>> processes, as Keith Wiles advised me. Also I don't overlap any core with the
>> masks used (0x06 and 0xf8)
>>
>> My NIC (Intel 82599ES), which PCI id is 0000:01:00.0, it is connected to a 10G
>> link that receives traffic from a mirrored port. I show you the internet device
>> settings related to this NIC:
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection'
>> drv=igb_uio unused=ixgbe
>>
>>
>> When I run the testpmd app and check the port stats, I get the following
>> output:
>>
>> #sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x06 -n 4 --proc-
>> type=auto --socket-mem 1000 --file-prefix pg1 -w 0000:01:00.0 -- -i --port-
>> topology=chained
>> EAL: Detected 8 lcore(s)
>> EAL: Auto-detected process type: PRIMARY
>> EAL: Probing VFIO support...
>> PMD: bnxt_rte_pmd_init() called for (null)
>> EAL: PCI device 0000:01:00.0 on NUMA socket 0
>> EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
>> Interactive-mode selected
>> USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
>> size=2176,
>> socket=0
>> Configuring Port 0 (socket 0)
>> Port 0: XX:XX:XX:XX:XX:XX
>> Checking link statuses...
>> Port 0 Link Up - speed 10000 Mbps - full-duplex Done
>> testpmd> show port stats 0
>
> After testpmd comes to the prompt, you need to execute command "start". This will start traffic forwarding.
>
>>     ######################## NIC statistics for port 0
>> ########################
>>     RX-packets: 0          RX-missed: 0          RX-bytes:  0
>>     RX-errors: 0
>>     RX-nombuf:  0
>>     TX-packets: 0          TX-errors: 0          TX-bytes:  0
>>
>>     Throughput (since last show)
>>     Rx-pps:            0
>>     Tx-pps:            0
>> ################################################################
>> ############
>>
>> It doesn't receive any packet. Did I missed any step in the configuration of
>> the testpmd app?
> I am wondering , you should see all packets hitting Rx-missed.
> So I suggest just stop everything. Unbind port back to Linux. Then check if the port is receiving packets from other end using tcpdump.
> If not then you may need to debug the issue.
>
> After everything is fine, bind back the port to dpdk, run testpmd and see if you could receive packets or not.
> If you are seeing packets against "Rx-missed", then run start command in testpmd prompt to start packet forwarding. After that you will be able to
> See the packets in the capture file.
>
> Thanks,
> Reshma
>


Thread overview: 6+ messages
2016-11-07 17:50 [dpdk-users] Capture traffic with DPDK-dump jose suarez
2016-11-07 22:42 ` Wiles, Keith
2016-11-10 10:56 ` Pattan, Reshma
2016-11-10 12:32   ` jose suarez
2016-11-10 13:20     ` Pattan, Reshma
2016-11-10 17:44       ` jose suarez
