DPDK usage discussions
From: "Zhang, Helin" <helin.zhang@intel.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>,
	Thomas Monjalon <thomas.monjalon@6wind.com>,
	"Xu, Qian Q" <qian.q.xu@intel.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
Date: Sun, 24 Jan 2016 03:16:42 +0000
Message-ID: <F35DEAC7BCE34641BA9FAC6BCA4A12E70A98CDC4@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <CAMnwyJ0BVWJthc9v-xWDK-6b2ezQtHg3giwaVpAAncAQHE0Cyg@mail.gmail.com>

Hi Saurabh

Were you talking about assigning the NIC PF to the VMware guest?
I remember there is an issue with assigning a PF to a KVM guest; in that case rombar=0 is needed for grub.
Thank you for the issue report!
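
As an illustration of that rombar=0 workaround (an assumption about the usual
QEMU/KVM setup, not a confirmed fix for this VMware report), the option is
normally set on the passed-through device itself, e.g.:

    qemu-system-x86_64 ... \
        -device vfio-pci,host=0000:07:00.0,rombar=0   # PF address here is only an example

With libvirt, the equivalent is a <rom bar='off'/> element inside the
corresponding <hostdev> definition.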

Qian, can you give the details of that?

Regards,
Helin

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com]
Sent: Saturday, January 23, 2016 1:30 AM
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: users@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Mcnamara, John <john.mcnamara@intel.com>
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi,

Thanks, Thomas. So this is the error we see with i40e PCI passthrough of the whole NIC:


Secondary Process:

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Analysing 1024 files
EAL: Mapped segment 0 of size 0x22800000
EAL: Mapped segment 1 of size 0x200000
EAL: Mapped segment 2 of size 0x200000
EAL: Mapped segment 3 of size 0x57600000
EAL: Mapped segment 4 of size 0x400000
EAL: Mapped segment 5 of size 0x400000
EAL: Mapped segment 6 of size 0x400000
EAL: Mapped segment 7 of size 0x200000
EAL: Mapped segment 8 of size 0x2200000
EAL: Mapped segment 9 of size 0x200000
EAL: Mapped segment 10 of size 0x800000
EAL: Mapped segment 11 of size 0x600000
EAL: Mapped segment 12 of size 0x800000
EAL: Mapped segment 13 of size 0xa00000
EAL: Mapped segment 14 of size 0x400000
EAL: Mapped segment 15 of size 0x200000
EAL: Mapped segment 16 of size 0x200000
EAL: Mapped segment 17 of size 0x200000
EAL: Mapped segment 18 of size 0x200000
EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_log_history> already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78c0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used


# ./dpdk-2.2.0/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:1b:00.0 'Device 1572' drv=uio_pci_generic unused=i40e

Network devices using kernel driver
===================================
0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3 unused=uio_pci_generic *Active*

Other network devices
=====================
<none>

# grep Huge /proc/meminfo
AnonHugePages:    118784 kB
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
#


Primary Process:

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x22800000 bytes
EAL: Virtual area found at 0x7fffd0a00000 (size = 0x22800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0200000 (size = 0x200000)
EAL: Ask a virtual area of 0x57600000 bytes
EAL: Virtual area found at 0x7fff78a00000 (size = 0x57600000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff78400000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77800000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff77400000 (size = 0x200000)
EAL: Ask a virtual area of 0x2200000 bytes
EAL: Virtual area found at 0x7fff75000000 (size = 0x2200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff74c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff74200000 (size = 0x800000)
EAL: Ask a virtual area of 0x600000 bytes
EAL: Virtual area found at 0x7fff73a00000 (size = 0x600000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff73000000 (size = 0x800000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7fff72400000 (size = 0xa00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff71e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff70e00000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78a0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7fac000
#

On Fri, Jan 22, 2016 at 2:12 AM, Thomas Monjalon <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
Hi,

Thanks for asking.
There are a couple of requests hidden in this message.
The maintainer of i40e is Helin (CC'ed).

2016-01-21 14:54, Saurabh Mishra:
> Hi,
>
> We have noticed that with i40e, if we do PCI passthrough of the whole NIC to a VM
> on ESXi 6.0, DPDK exits abruptly in rte_eal_init().

This looks to be a bug report :)

> We are passing following parameters:
>
>     char *eal_argv[] = {"fakeelf",
>
>                         "-c2",
>
>                         "-n4",
>
>                         "--proc-type=primary",};
>
>
> int ret = rte_eal_init(4, eal_argv);
>
> The code works with the Intel '82599ES 10-Gigabit SFI/SFP+' adapter in
> PCI-passthrough or SR-IOV mode; with i40e, however, it does not work.
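
For reference, a self-contained sketch of the initialization quoted above (the
function name init_eal and the error print are illustrative additions, assuming
a DPDK 2.2-era EAL API; the options themselves are the ones from the mail):

    #include <stdio.h>
    #include <rte_eal.h>

    static char *eal_argv[] = {
            "fakeelf",              /* argv[0]: program name placeholder */
            "-c2",                  /* coremask 0x2: master lcore 1, as in the logs */
            "-n4",                  /* number of memory channels */
            "--proc-type=primary",  /* start as the primary process */
    };

    int init_eal(void)
    {
            int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);
            int ret = rte_eal_init(eal_argc, eal_argv);

            if (ret < 0)
                    fprintf(stderr, "rte_eal_init() failed (ret=%d)\n", ret);
            return ret;
    }

Note that the abrupt exit reported here happens inside rte_eal_init() itself
(the EAL prints "Error - exiting with code: 1" and terminates when the PCI
probe fails), so the return-value check only covers cases where the call
actually returns.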

You probably have the feeling that i40e does not work like other PMDs,
and may be wondering what the special extended PCI configs are.

> [root@localhost:~] esxcfg-nics -l
>
> [.]
>
> vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
>
> vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
>
>
> We have turned on following config in DPDK:
>
> CONFIG_RTE_PCI_CONFIG=y
>
> CONFIG_RTE_PCI_EXTENDED_TAG="on"
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
>
>
> Is there any special handling in DPDK for the i40e adapter in terms of config?

So you got no help from the code comments nor from the documentation.
Indeed, the only documentation about i40e is the SR-IOV VF page:
        http://dpdk.org/doc/guides/nics/intel_vf.html

Helin, please check the issue and the lack of documentation.
Thanks


Thread overview: 10+ messages
2016-01-21 22:54 Saurabh Mishra
2016-01-22 10:12 ` Thomas Monjalon
2016-01-22 17:29   ` Saurabh Mishra
2016-01-24  3:16     ` Zhang, Helin [this message]
2016-01-25  1:17       ` Xu, Qian Q
2016-01-25 17:20         ` Saurabh Mishra
2016-01-26  1:09           ` Xu, Qian Q
2016-01-26  1:36             ` Saurabh Mishra
2016-01-26  1:40               ` Xu, Qian Q
2016-01-26  1:48                 ` Saurabh Mishra
