DPDK patches and discussions
From: vikram T <vikramet@gmail.com>
To: dev@dpdk.org
Subject: Re: [dpdk-dev] DPDK fails to initialize on VMXNET3
Date: Tue, 13 Aug 2019 12:55:10 +0530
Message-ID: <CANxYRjyitnJ6pu7yfgWjvp9GtcY1zrOij75-GHovUcRJ+MJPhA@mail.gmail.com>
In-Reply-To: <CANxYRjyCZ=KZj=JvnRZTd9Pc4Bf88zR7+nxT-7tOWM6qYWP5Sg@mail.gmail.com>

Additionally, dpdk-devbind.py shows the following status:
[root@vprobe mnt]# /var/cache/ocsm/dpdk/dpdk-18.11/usertools/dpdk-devbind.py -s

Network devices using DPDK-compatible driver
============================================
0000:03:00.0 'VMXNET3 Ethernet Controller 07b0' drv=igb_uio unused=vmxnet3

Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper) 100f' if=ens32 drv=e1000 unused=igb_uio *Active*

Any pointers would be very helpful.
Thanks in advance.

Regards
Vikram

On Tue, Aug 13, 2019 at 9:39 AM vikram T <vikramet@gmail.com> wrote:

> Hi,
> When initializing, DPDK failed with the below error on VMXNET3:
>
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: Probing VFIO support...
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: PCI device 0000:02:00.0 on NUMA socket -1
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   Invalid NUMA socket, default to 0
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   probe driver: 8086:100f net_e1000_em
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL: PCI device 0000:03:00.0 on NUMA socket 0
> Aug  9 14:05:34 vprobe kernel: igb_uio 0000:03:00.0: uio device registered with irq 58
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: EAL:   probe driver: 15ad:7b0 net_vmxnet3
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: PANIC in rte_eth_dev_shared_data_prepare():
> Aug  9 14:05:34 vprobe rat_dpdk_sniffer[10768]: Cannot allocate ethdev shared data
>
> With the backtrace pointing to:
>
> (gdb) bt
> #0  0x00007ffff54612c7 in raise () from /lib64/libc.so.6
> #1  0x00007ffff54629b8 in abort () from /lib64/libc.so.6
> #2  0x00000000004eab34 in __rte_panic ()
> #3  0x000000000050cbf8 in rte_eth_dev_shared_data_prepare ()
> #4  0x000000000050de1c in rte_eth_dev_allocate ()
> #5  0x0000000000667025 in eth_vmxnet3_pci_probe ()
> #6  0x00000000005b4178 in pci_probe_all_drivers ()
> #7  0x00000000005b42bc in rte_pci_probe ()
> #8  0x000000000053642c in rte_bus_probe ()
> #9  0x00000000005242ee in rte_eal_init ()
> #10 0x00000000006c24c7 in rat::dpdk::init (cfg=...) at ../../rat/src/sniffer/dpdk_utils.cc:71
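
As the backtrace shows, the panic is raised from inside rte_eal_init() itself, while the vmxnet3 PCI device is being probed, so the application never regains control. For reference, here is a minimal sketch of the kind of EAL initialization involved; it is illustrative only and is not the actual rat::dpdk::init code from dpdk_utils.cc:

#include <stdio.h>
#include <stdlib.h>

#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
	/* rte_eal_init() scans and probes the PCI bus; in this case
	 * eth_vmxnet3_pci_probe() -> rte_eth_dev_allocate() ->
	 * rte_eth_dev_shared_data_prepare() panics before this call
	 * ever returns. */
	int ret = rte_eal_init(argc, argv);
	if (ret < 0)
		rte_exit(EXIT_FAILURE, "rte_eal_init failed: %s\n",
			 rte_strerror(rte_errno));

	/* If initialization succeeds, the vmxnet3 port is expected to
	 * show up as an available ethdev. */
	printf("%u ports available\n",
	       (unsigned int)rte_eth_dev_count_avail());

	rte_eal_cleanup();
	return 0;
}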
>
> The sample application testpmd was running successfully:
>
> [root@vprobe test-pmd]# ./testpmd -l 0-3 -n 4 -- -i --portmask=0x1 --nb-cores=2
> EAL: Detected 16 lcore(s)
> EAL: Detected 4 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-2048kB
> EAL: No free hugepages reported in hugepages-2048kB
> EAL: Probing VFIO support...
> EAL: PCI device 0000:02:00.0 on NUMA socket -1
> EAL:   Invalid NUMA socket, default to 0
> EAL:   probe driver: 8086:100f net_e1000_em
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
> EAL:   probe driver: 15ad:7b0 net_vmxnet3
> Interactive-mode selected
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
> Configuring Port 0 (socket 0)
> Port 0: 00:0C:29:36:B2:F1
> Checking link statuses...
> Done
> testpmd> start
> io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>   RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
>
> Additionally, I observed that on this virtual machine the file
> "/sys/bus/pci/devices/0000:03:00.0/numa_node" contains -1, and when the
> sample application is run it detects 4 NUMA nodes. On any physical
> machine, this file is set to the appropriate NUMA node.
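
That numa_node file is the sysfs attribute EAL's PCI scan consults when it reports each device's NUMA socket. A quick standalone check of the value the kernel exposes for the vmxnet3 device could look like the following; this is an illustrative sketch only, not DPDK code:

#include <stdio.h>

int main(void)
{
	/* NUMA hint the kernel exposes for the vmxnet3 device; on this
	 * VM it reads -1 (no locality reported), and DPDK falls back
	 * to socket 0 for devices that report a negative value. */
	const char *path = "/sys/bus/pci/devices/0000:03:00.0/numa_node";
	FILE *f = fopen(path, "r");
	int node;

	if (f == NULL) {
		perror("fopen");
		return 1;
	}
	if (fscanf(f, "%d", &node) != 1)
		node = -1;
	fclose(f);

	printf("numa_node for 0000:03:00.0 = %d\n", node);
	return 0;
}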
>
>
> It would be of great help to get some pointers on why the initialization
> fails here.
>
> Regards
> Vikram
>
