DPDK patches and discussions
* [dpdk-dev] DPDK fails to initialize on VMXNet3
@ 2019-08-13  4:09 vikram T
  2019-08-13  7:25 ` vikram T
  0 siblings, 1 reply; 2+ messages in thread
From: vikram T @ 2019-08-13  4:09 UTC (permalink / raw)
  To: dev

Hi,
When initializing, DPDK fails with the error below on VMXNet3:

Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: Probing VFIO support...
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: PCI device 0000:02:00.0 on NUMA socket -1
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   Invalid NUMA socket, default to 0
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   probe driver: 8086:100f net_e1000_em
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: PCI device 0000:03:00.0 on NUMA socket 0
Aug  9 14:05:34 vprobe kernel: igb_uio 0000:03:00.0: uio device registered with irq 58
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   probe driver: 15ad:7b0 net_vmxnet3
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: PANIC in rte_eth_dev_shared_data_prepare():
Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: Cannot allocate ethdev shared data
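For context, this panic fires when rte_eth_dev_shared_data_prepare() cannot reserve the memzone backing the shared ethdev data, which in practice usually means EAL found no usable hugepage memory (the testpmd log further down also reports "No free hugepages" for 2048kB pages). A minimal sketch of checking the HugePages_* counters from /proc/meminfo for that condition — parse_hugepages is my own hypothetical helper, not a DPDK API:

```python
def parse_hugepages(meminfo_text: str) -> dict:
    """Extract the HugePages_* counters from /proc/meminfo-style text."""
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            key, _, value = line.partition(":")
            counters[key.strip()] = int(value.strip())
    return counters

# Demo on a fabricated /proc/meminfo snippet; on a real box, read the file.
sample = "HugePages_Total:     512\nHugePages_Free:        0\nHugepagesize:       2048 kB"
print(parse_hugepages(sample))  # {'HugePages_Total': 512, 'HugePages_Free': 0}
```

If HugePages_Free is 0 for every page size, EAL's memzone reservation is the first thing to fail.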

With the BackTrace pointing to :

(gdb) bt
#0  0x00007ffff54612c7 in raise () from /lib64/libc.so.6
#1  0x00007ffff54629b8 in abort () from /lib64/libc.so.6
#2  0x00000000004eab34 in __rte_panic ()
#3  0x000000000050cbf8 in rte_eth_dev_shared_data_prepare ()
#4  0x000000000050de1c in rte_eth_dev_allocate ()
#5  0x0000000000667025 in eth_vmxnet3_pci_probe ()
#6  0x00000000005b4178 in pci_probe_all_drivers ()
#7  0x00000000005b42bc in rte_pci_probe ()
#8  0x000000000053642c in rte_bus_probe ()
#9  0x00000000005242ee in rte_eal_init ()
#10 0x00000000006c24c7 in rat::dpdk::init (cfg=...) at ../../rat/src/sniffer/dpdk_utils.cc:71

The sample application testpmd was running successfully:

[root@vprobe test-pmd]# ./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
EAL: Detected 16 lcore(s)
EAL: Detected 4 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-2048kB
EAL: No free hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
Port 0: 00:0C:29:36:B2:F1
Checking link statuses...
Done
testpmd> start
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

Additionally, I observed that on this virtual machine the file
"/sys/bus/pci/devices/0000:03:00.0/numa_node" is set to -1, and when the
sample applications run, they detect 4 NUMA nodes. On physical machines
this file is set to the appropriate numa_node.
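The EAL log above shows how that -1 is handled: the device's NUMA socket is treated as unknown and defaulted to 0 ("Invalid NUMA socket, default to 0"). A tiny sketch of that defaulting rule, as my own illustration rather than DPDK code:

```python
def effective_numa_node(sysfs_value: int) -> int:
    # Mirrors the behaviour in the EAL log: a negative numa_node read from
    # sysfs is treated as "unknown" and defaulted to socket 0.
    return 0 if sysfs_value < 0 else sysfs_value

print(effective_numa_node(-1))  # 0
print(effective_numa_node(2))   # 2
```

So the -1 by itself should not be fatal; it only affects which socket's memory the device's structures are allocated from.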


Any pointers on why the initialization fails here would be of great help.

Regards
Vikram


* Re: [dpdk-dev] DPDK fails to initialize on VMXNet3
  2019-08-13  4:09 [dpdk-dev] DPDK fails to initialize on VMXNet3 vikram T
@ 2019-08-13  7:25 ` vikram T
  0 siblings, 0 replies; 2+ messages in thread
From: vikram T @ 2019-08-13  7:25 UTC (permalink / raw)
  To: dev

Additionally, dpdk-devbind.py shows the following:
[root@vprobe mnt]# /var/cache/ocsm/dpdk/dpdk-18.11/usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3

Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens32 drv=e1000 unused=igb_uio *Active*
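The same binding state can be double-checked without dpdk-devbind.py by reading the driver symlink under sysfs. A sketch, where bound_driver and is_dpdk_bound are hypothetical helpers and the driver set is the usual trio of DPDK-compatible kernel modules:

```python
import os
from typing import Optional

# Assumption: the standard DPDK-compatible kernel modules.
DPDK_KERNEL_DRIVERS = {"igb_uio", "vfio-pci", "uio_pci_generic"}

def bound_driver(pci_addr: str, sysfs_root: str = "/sys/bus/pci/devices") -> Optional[str]:
    """Return the kernel driver a PCI device is bound to, or None if unbound."""
    link = os.path.join(sysfs_root, pci_addr, "driver")
    if not os.path.islink(link):
        return None
    # The symlink points at .../bus/pci/drivers/<name>; basename is the name.
    return os.path.basename(os.readlink(link))

def is_dpdk_bound(driver: Optional[str]) -> bool:
    return driver in DPDK_KERNEL_DRIVERS
```

For the output above, bound_driver("0000:03:00.0") would report igb_uio, i.e. DPDK-compatible, so the binding itself looks correct.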

Any pointers would be very helpful.
Thanks in advance.

Regards
Vikram



end of thread, other threads:[~2019-08-13  7:25 UTC | newest]

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-13  4:09 [dpdk-dev] DPDK fails to initialize on VMXNet3 vikram T
2019-08-13  7:25 ` vikram T
