DPDK usage discussions
 help / color / mirror / Atom feed
* [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
@ 2016-01-21 22:54 Saurabh Mishra
  2016-01-22 10:12 ` Thomas Monjalon
  0 siblings, 1 reply; 10+ messages in thread
From: Saurabh Mishra @ 2016-01-21 22:54 UTC (permalink / raw)
  To: users

Hi,

We have noticed that, with i40e, if we do PCI passthrough of the whole NIC to
a VM on ESXi 6.0, DPDK exits abruptly in rte_eal_init().

We are passing the following parameters:

    char *eal_argv[] = {"fakeelf",
                        "-c2",
                        "-n4",
                        "--proc-type=primary",};

    int ret = rte_eal_init(4, eal_argv);

The code works with the Intel 82599ES 10-Gigabit SFI/SFP+ adapter in
PCI-passthrough or SR-IOV mode; with i40e, however, it does not.

[root@localhost:~] esxcfg-nics -l

[.]

vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+


We have turned on the following config options in DPDK:

CONFIG_RTE_PCI_CONFIG=y
CONFIG_RTE_PCI_EXTENDED_TAG="on"
CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y


Is there any special handling in DPDK for the i40e adapter in terms of config?

Thanks,

/Saurabh

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-21 22:54 [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init() Saurabh Mishra
@ 2016-01-22 10:12 ` Thomas Monjalon
  2016-01-22 17:29   ` Saurabh Mishra
  0 siblings, 1 reply; 10+ messages in thread
From: Thomas Monjalon @ 2016-01-22 10:12 UTC (permalink / raw)
  To: Saurabh Mishra; +Cc: Helin Zhang, users

Hi,

Thanks for asking.
There are a couple of requests hidden in this message.
The maintainer of i40e is Helin (CC'ed).

2016-01-21 14:54, Saurabh Mishra:
> Hi,
> 
> We have noticed that i40e if we do PCI-pass through of whole NIC to VM on
> ESXi 6.0, the DPDK exits abruptly in rte_eal_init()?

This looks to be a bug report :)

> We are passing following parameters:
> 
>     char *eal_argv[] = {"fakeelf",
> 
>                         "-c2",
> 
>                         "-n4",
> 
>                         "--proc-type=primary",};
> 
> 
> int ret = rte_eal_init(4, eal_argv);
> 
> The code works with Intel '82599ES 10-Gigabit SFI/SFP+ ' adapter in
> PCI-passthrough or SR-IOV mode however i40e it does not work.

You probably have the feeling that i40e does not work like other PMDs,
and may be wondering what the special extended PCI configs are.

> [root@localhost:~] esxcfg-nics -l
> 
> [.]
> 
> vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> 
> vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> 
> 
> We have turned on following config in DPDK:
> 
> CONFIG_RTE_PCI_CONFIG=y
> 
> CONFIG_RTE_PCI_EXTENDED_TAG="on"
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> 
> 
> Is there any special handling in DPDK for i40e adapter in terms of config?

So you found no help in the code comments nor in the doc.
Indeed the only doc about i40e is the SR-IOV VF page:
	http://dpdk.org/doc/guides/nics/intel_vf.html

Please Helin, check the issue and the lack of documentation.
Thanks

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-22 10:12 ` Thomas Monjalon
@ 2016-01-22 17:29   ` Saurabh Mishra
  2016-01-24  3:16     ` Zhang, Helin
  0 siblings, 1 reply; 10+ messages in thread
From: Saurabh Mishra @ 2016-01-22 17:29 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: Helin Zhang, users

Hi,

Thanks, Thomas. This is the error we see with i40e PCI passthrough of the
whole NIC:


*Secondary Process:*

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Analysing 1024 files
EAL: Mapped segment 0 of size 0x22800000
EAL: Mapped segment 1 of size 0x200000
EAL: Mapped segment 2 of size 0x200000
EAL: Mapped segment 3 of size 0x57600000
EAL: Mapped segment 4 of size 0x400000
EAL: Mapped segment 5 of size 0x400000
EAL: Mapped segment 6 of size 0x400000
EAL: Mapped segment 7 of size 0x200000
EAL: Mapped segment 8 of size 0x2200000
EAL: Mapped segment 9 of size 0x200000
EAL: Mapped segment 10 of size 0x800000
EAL: Mapped segment 11 of size 0x600000
EAL: Mapped segment 12 of size 0x800000
EAL: Mapped segment 13 of size 0xa00000
EAL: Mapped segment 14 of size 0x400000
EAL: Mapped segment 15 of size 0x200000
EAL: Mapped segment 16 of size 0x200000
EAL: Mapped segment 17 of size 0x200000
EAL: Mapped segment 18 of size 0x200000
EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_log_history> already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78c0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used



# ./dpdk-2.2.0/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:1b:00.0 'Device 1572' drv=uio_pci_generic unused=i40e

Network devices using kernel driver
===================================
0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3 unused=uio_pci_generic *Active*

Other network devices
=====================
<none>

# grep Huge /proc/meminfo
AnonHugePages:    118784 kB
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
#
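[Editor's note, not in the original thread: the failing mmap is on resource3, and the status output shows the port bound to uio_pci_generic. One hedged thing to try is rebinding to vfio-pci (or igb_uio) before launching; vfio-pci tends to handle BAR mapping more robustly, but needs an IOMMU visible to the guest. Commands are a sketch only; the PCI address comes from the report, and the igb_uio path assumes a default DPDK 2.2 build tree:]

```shell
# Load vfio-pci if an IOMMU is available in the guest; otherwise fall
# back to the igb_uio module built with DPDK (path is an assumption).
modprobe vfio-pci || insmod dpdk-2.2.0/build/kmod/igb_uio.ko

# Rebind the passed-through X710 port, then confirm the binding.
./dpdk-2.2.0/tools/dpdk_nic_bind.py --bind=vfio-pci 0000:1b:00.0
./dpdk-2.2.0/tools/dpdk_nic_bind.py --status
```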



*Primary Process:*


EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x22800000 bytes
EAL: Virtual area found at 0x7fffd0a00000 (size = 0x22800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0200000 (size = 0x200000)
EAL: Ask a virtual area of 0x57600000 bytes
EAL: Virtual area found at 0x7fff78a00000 (size = 0x57600000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff78400000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77800000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff77400000 (size = 0x200000)
EAL: Ask a virtual area of 0x2200000 bytes
EAL: Virtual area found at 0x7fff75000000 (size = 0x2200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff74c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff74200000 (size = 0x800000)
EAL: Ask a virtual area of 0x600000 bytes
EAL: Virtual area found at 0x7fff73a00000 (size = 0x600000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff73000000 (size = 0x800000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7fff72400000 (size = 0xa00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff71e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff70e00000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78a0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7fac000
#

On Fri, Jan 22, 2016 at 2:12 AM, Thomas Monjalon <thomas.monjalon@6wind.com>
wrote:

> [...]

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-22 17:29   ` Saurabh Mishra
@ 2016-01-24  3:16     ` Zhang, Helin
  2016-01-25  1:17       ` Xu, Qian Q
  0 siblings, 1 reply; 10+ messages in thread
From: Zhang, Helin @ 2016-01-24  3:16 UTC (permalink / raw)
  To: Saurabh Mishra, Thomas Monjalon, Xu, Qian Q; +Cc: users

Hi Saurabh

Were you talking about assigning the NIC PF to the VMware guest?
I remember that there is an issue when assigning a PF to a KVM guest; rombar=0 is then needed in grub.
Thank you for the issue report!

Can Qian share the details of that?

Regards,
Helin

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com]
Sent: Saturday, January 23, 2016 1:30 AM
To: Thomas Monjalon <thomas.monjalon@6wind.com>
Cc: users@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Mcnamara, John <john.mcnamara@intel.com>
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

[quoted message trimmed]


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-24  3:16     ` Zhang, Helin
@ 2016-01-25  1:17       ` Xu, Qian Q
  2016-01-25 17:20         ` Saurabh Mishra
  0 siblings, 1 reply; 10+ messages in thread
From: Xu, Qian Q @ 2016-01-25  1:17 UTC (permalink / raw)
  To: Zhang, Helin, Saurabh Mishra, Thomas Monjalon; +Cc: users

We have tried the i40e driver with FVL4 on ESXi in our 2.2 testing; PCI passthrough can work, but SR-IOV has issues. I didn't see Mishra's error. Right now we are stuck setting up the ESXi environment and need VMware's support to set up the env again.
Mishra, could you help answer the questions below?

1. What's your i40e VMware driver on the host?

2. What's the firmware of the FVL? Is your FVL 4x10G or 2x10G?

3. Could you tell us your detailed test steps? I only launch one DPDK app, with no secondary process, but your case has two processes. When passing through the PCI device, I just pass through two ports to the VM and start the VM.

Helin, rombar=0 seems to be for KVM only; on VMware we did not see that issue.


Thanks
Qian

From: Zhang, Helin
Sent: Sunday, January 24, 2016 11:17 AM
To: Saurabh Mishra; Thomas Monjalon; Xu, Qian Q
Cc: users@dpdk.org; Mcnamara, John
Subject: RE: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

[quoted messages trimmed]


^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-25  1:17       ` Xu, Qian Q
@ 2016-01-25 17:20         ` Saurabh Mishra
  2016-01-26  1:09           ` Xu, Qian Q
  0 siblings, 1 reply; 10+ messages in thread
From: Saurabh Mishra @ 2016-01-25 17:20 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: Zhang, Helin, users

Hi Qian --

>Mishra, could you help answer below questions?

>What’s your i40e’s vmware driver on host?


We are doing PCI-passthrough of i40e.


vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+

[root@localhost:~] vmkload_mod -s i40e |grep Version
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015

[root@localhost:~] vmkload_mod -s i40e
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/i40e
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015
 License: GPL
 Required name-spaces:
  com.vmware.driverAPI#9.2.2.0
  com.vmware.vmkapi#v2_2_0_0
 Parameters:
  skb_mpool_max: int
    Maximum attainable private socket buffer memory pool size for the driver.
  skb_mpool_initial: int
    Driver's minimum private socket buffer memory pool size.
  heap_max: int
    Maximum attainable heap size for the driver.
  heap_initial: int
    Initial heap size allocated for the driver.
  debug: int
    Debug level (0=none,...,16=all)
  RSS: array of int
    Number of Receive-Side Scaling Descriptor Queues: 0 = disable/default, 1-4 = enable (number of cpus)
  VMDQ: array of int
    Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default =I40E_ESX_DEFAULT_VMDQ)
  max_vfs: array of int
    Number of Virtual Functions: 0 = disable (default), 1-128 = enable this many VFs
[root@localhost:~]

>What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?

Hmm... how do I check that? It has two ports.
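[Editor's note, not part of the original reply: firmware version is usually easy to read from the host. Both commands below are standard, but the interface/vmnic names are illustrative and taken from earlier in the thread:]

```shell
# On the ESXi host (vmnic name from the esxcfg-nics output above):
esxcli network nic get -n vmnic6 | grep -i firmware

# On Linux, while the port is still bound to the i40e kernel driver
# (interface name illustrative):
ethtool -i eth1 | grep firmware-version
```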


>Could you tell us your detailed test step? I only launch one dpdk app, no
secondary process, but your case have 2 processes. When pass through the
PCI device, I just pass through 2 ports to the VM and start the VM in my
case.

So I was passing one port to the VM. I do launch a primary and a secondary
process.

This is the error we got. It seems we were not able to map the resource at
the required virtual address. So I moved the rte_eal_init() and mbuf
allocation into a separate process, and now it seems to work with one
primary and six secondary processes.

EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used
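[Editor's note, not in the original reply: the failed mmap to a fixed address in a secondary process is a classic symptom of the primary and secondary not sharing the same address-space layout. The DPDK multi-process guide recommends disabling ASLR while debugging this; a sketch, requiring root:]

```shell
# Secondary processes must map hugepage segments and device BARs at the
# same virtual addresses the primary used; ASLR commonly breaks this.
sysctl -w kernel.randomize_va_space=0    # set back to 2 when done

# Start the primary first, and only then the secondaries, e.g.:
#   ./app -c2 -n4 --proc-type=primary &
#   ./app -c4 -n4 --proc-type=secondary
```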



Thanks,
/Saurabh


On Sun, Jan 24, 2016 at 5:17 PM, Xu, Qian Q <qian.q.xu@intel.com> wrote:

> [...]
>
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>
> EAL:   Not managed by a supported kernel driver, skipped
>
> EAL: PCI device 0000:1b:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
>
> EAL:   PCI memory mapped at 0x7fff6f1f3000
>
> EAL:   PCI memory mapped at 0x7ffff7faa000
>
> EAL: Cannot mmap device resource file
> /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
>
> EAL: Error - exiting with code: 1
>
>   Cause: Requested device 0000:1b:00.0 cannot be used
>
>
>
>
>
> # ./dpdk-2.2.0/tools/dpdk_nic_bind.py --status
>
>
>
> Network devices using DPDK-compatible driver
>
> ============================================
>
> 0000:1b:00.0 'Device 1572' drv=uio_pci_generic unused=i40e
>
>
>
> Network devices using kernel driver
>
> ===================================
>
> 0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3
> unused=uio_pci_generic *Active*
>
>
>
> Other network devices
>
> =====================
>
> <none>
>
>
>
> # grep Huge /proc/meminfo
>
> AnonHugePages:    118784 kB
>
> HugePages_Total:    1024
>
> HugePages_Free:        0
>
> HugePages_Rsvd:        0
>
> HugePages_Surp:        0
>
> Hugepagesize:       2048 kB
>
> #
>
>
>
>
>
> *Primary Process:*
>
>
>
> EAL: Detected lcore 0 as core 0 on socket 0
>
> EAL: Detected lcore 1 as core 0 on socket 0
>
> EAL: Support maximum 128 logical core(s) by configuration.
>
> EAL: Detected 2 lcore(s)
>
> EAL: Setting up physically contiguous memory...
>
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in
> socket_id 0
>
> EAL: Ask a virtual area of 0x22800000 bytes
>
> EAL: Virtual area found at 0x7fffd0a00000 (size = 0x22800000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fffd0600000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fffd0200000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x57600000 bytes
>
> EAL: Virtual area found at 0x7fff78a00000 (size = 0x57600000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff78400000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff77e00000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff77800000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff77400000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x2200000 bytes
>
> EAL: Virtual area found at 0x7fff75000000 (size = 0x2200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff74c00000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x800000 bytes
>
> EAL: Virtual area found at 0x7fff74200000 (size = 0x800000)
>
> EAL: Ask a virtual area of 0x600000 bytes
>
> EAL: Virtual area found at 0x7fff73a00000 (size = 0x600000)
>
> EAL: Ask a virtual area of 0x800000 bytes
>
> EAL: Virtual area found at 0x7fff73000000 (size = 0x800000)
>
> EAL: Ask a virtual area of 0xa00000 bytes
>
> EAL: Virtual area found at 0x7fff72400000 (size = 0xa00000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff71e00000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71a00000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71600000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71200000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff70e00000 (size = 0x200000)
>
> EAL: Requesting 1024 pages of size 2MB from socket 0
>
> EAL: TSC frequency is ~1799997 KHz
>
> EAL: Master lcore 1 is ready (tid=f7fe78a0;cpuset=[1])
>
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
>
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>
> EAL:   Not managed by a supported kernel driver, skipped
>
> EAL: PCI device 0000:1b:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
>
> EAL:   PCI memory mapped at 0x7fff6f1f3000
>
> EAL:   PCI memory mapped at 0x7ffff7fac000
>
> #
>
>
>
> On Fri, Jan 22, 2016 at 2:12 AM, Thomas Monjalon <
> thomas.monjalon@6wind.com> wrote:
>
> Hi,
>
> Thanks for asking.
> There are a couple of requests hidden in this message.
> The maintainer of i40e is Helin (CC'ed).
>
> 2016-01-21 14:54, Saurabh Mishra:
> > Hi,
> >
> > We have noticed that i40e if we do PCI-pass through of whole NIC to VM on
> > ESXi 6.0, the DPDK exits abruptly in rte_eal_init()?
>
> This looks to be a bug report :)
>
> > We are passing following parameters:
> >
> >     char *eal_argv[] = {"fakeelf",
> >
> >                         "-c2",
> >
> >                         "-n4",
> >
> >                         "--proc-type=primary",};
> >
> >
> > int ret = rte_eal_init(4, eal_argv);
> >
> > The code works with Intel '82599ES 10-Gigabit SFI/SFP+ ' adapter in
> > PCI-passthrough or SR-IOV mode however i40e it does not work.
>
> You probably have the feeling that i40e does not work as other PMDs,
> maybe wondering what are the special extended PCI configs.
>
> > [root@localhost:~] esxcfg-nics -l
> >
> > [.]
> >
> > vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
> > 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> >
> > vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> > 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> >
> >
> > We have turned on following config in DPDK:
> >
> > CONFIG_RTE_PCI_CONFIG=y
> >
> > CONFIG_RTE_PCI_EXTENDED_TAG="on"
> > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> >
> >
> > Is there any special handling in DPDK for i40e adapter in terms of
> config?
>
> So you had no help when reading the code comments neither in the doc.
> Indeed the only doc about i40e is the SR-IOV VF page:
>         http://dpdk.org/doc/guides/nics/intel_vf.html
>
> Please Helin, check the issue and the lack of documentation.
> Thanks
>
>
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-25 17:20         ` Saurabh Mishra
@ 2016-01-26  1:09           ` Xu, Qian Q
  2016-01-26  1:36             ` Saurabh Mishra
  0 siblings, 1 reply; 10+ messages in thread
From: Xu, Qian Q @ 2016-01-26  1:09 UTC (permalink / raw)
  To: Saurabh Mishra; +Cc: Zhang, Helin, users

Mishra
Could you tell me the exact commands you use to run the primary and secondary processes? Do you change the code when running the dpdk app? Thx.


Thanks
Qian

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com]
Sent: Tuesday, January 26, 2016 1:20 AM
To: Xu, Qian Q
Cc: Zhang, Helin; Thomas Monjalon; users@dpdk.org; Mcnamara, John
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi Qian --

>Mishra, could you help answer below questions?
> What’s your i40e’s vmware driver on host?

We are doing PCI-passthrough of i40e.

vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
[root@localhost:~] vmkload_mod -s i40e |grep Version
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015
[root@localhost:~] vmkload_mod -s i40e
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/i40e
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015
 License: GPL
 Required name-spaces:
  com.vmware.driverAPI#9.2.2.0
  com.vmware.vmkapi#v2_2_0_0
 Parameters:
  skb_mpool_max: int
    Maximum attainable private socket buffer memory pool size for the driver.
  skb_mpool_initial: int
    Driver's minimum private socket buffer memory pool size.
  heap_max: int
    Maximum attainable heap size for the driver.
  heap_initial: int
    Initial heap size allocated for the driver.
  debug: int
    Debug level (0=none,...,16=all)
  RSS: array of int
    Number of Receive-Side Scaling Descriptor Queues: 0 = disable/default, 1-4 = enable (number of cpus)
  VMDQ: array of int
    Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default =I40E_ESX_DEFAULT_VMDQ)
  max_vfs: array of int
    Number of Virtual Functions: 0 = disable (default), 1-128 = enable this many VFs
[root@localhost:~]
>What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?
Hmm..how do I check that? It has two ports.

>Could you tell us your detailed test step? I only launch one dpdk app, no secondary process, but your case have 2 processes. When pass through the PCI device, I just pass through 2 ports to the VM and start the VM in my case.

So I was passing one port to the VM. I do launch a primary and a secondary process.

This is the error we got; it seems we were not able to map the device resource at the required virtual address (VA). So I moved rte_eal_init() and the mbuf allocation into a separate process, and it now works with one primary and six secondary processes.

EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used


Thanks,
/Saurabh




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-26  1:09           ` Xu, Qian Q
@ 2016-01-26  1:36             ` Saurabh Mishra
  2016-01-26  1:40               ` Xu, Qian Q
  0 siblings, 1 reply; 10+ messages in thread
From: Saurabh Mishra @ 2016-01-26  1:36 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: Zhang, Helin, users

Hi Qian --

>Mishra
>Could you tell me your exact command when running primary and secondary commands?
>Do you change the code when running dpdk app? Thx.

We are running our app, which is based on the symmetric_mp example code.
Here are the arguments:

    char *eal_argv[] = {"fakeelf",
                        "-c2",
                        "-m2048",
                        "-n4",
                        "--proc-type=primary",
                        "--",
                        "-p 3",
                        "--num-procs=2",
                        "--proc-id=0",};

    if (core_id != 0) {
        snprintf(core_arg, sizeof(core_arg), "-c%x", (1 << core_id));
        eal_argv[1] = core_arg;
    }

    /* note: this snprintf was originally passing sizeof(core_arg) as the
     * size of port_arg, which can truncate the port mask */
    snprintf(port_arg, sizeof(port_arg), "-p %x", (1 << num_cores) - 1);
    eal_argv[6] = port_arg;

    snprintf(ncores, sizeof(ncores), "--num-procs=%d", num_cores);
    eal_argv[7] = ncores;

    snprintf(pid, sizeof(pid), "--proc-id=%d", core_id);
    eal_argv[8] = pid;


We have one primary and seven secondary processes.

What we have seen is that when our primary process is our agent process,
the secondary processes don't attach. The agent process sets everything up
and then goes away (exits).


If we use your sample application, we don't see this problem even after I
Ctrl+C the primary symmetric_mp program.


Thanks,

/Saurabh

> Indeed the only doc about i40e is the SR-IOV VF page:
>         http://dpdk.org/doc/guides/nics/intel_vf.html
>
> Please Helin, check the issue and the lack of documentation.
> Thanks
>
>
>
>
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-26  1:36             ` Saurabh Mishra
@ 2016-01-26  1:40               ` Xu, Qian Q
  2016-01-26  1:48                 ` Saurabh Mishra
  0 siblings, 1 reply; 10+ messages in thread
From: Xu, Qian Q @ 2016-01-26  1:40 UTC (permalink / raw)
  To: Saurabh Mishra; +Cc: Zhang, Helin, users

Does it work on the host without PCI device pass-through?

Thanks
Qian

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com]
Sent: Tuesday, January 26, 2016 9:36 AM
To: Xu, Qian Q
Cc: Zhang, Helin; Thomas Monjalon; users@dpdk.org; Mcnamara, John
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi Qian --

>Mishra
>Could you tell me your exact command when running primary and secondary commands?
>Do you change the code when running dpdk app? Thx.

We are running our app, which is based on the symmetric_mp example code. Here are the arguments:

    char *eal_argv[] = {"fakeelf",
                        "-c2",
                        "-m2048",
                        "-n4",
                        "--proc-type=primary",
                        "--",
                        "-p 3",
                        "--num-procs=2",
                        "--proc-id=0",};

    if (core_id != 0) {
        snprintf(core_arg, sizeof(core_arg), "-c%x", (1 << core_id));
        eal_argv[1] = core_arg;
    }

    snprintf(port_arg, sizeof(port_arg), "-p %x", (1 << num_cores) - 1);
    eal_argv[6] = port_arg;

    snprintf(ncores, sizeof(ncores), "--num-procs=%d", num_cores);
    eal_argv[7] = ncores;
    snprintf(pid, sizeof(pid), "--proc-id=%d", core_id);
    eal_argv[8] = pid;

We have one primary and seven secondary processes.
What we have seen is that when our primary process is our agent process, the secondary processes don't attach. The agent process sets everything up and then goes away (exits).

If we use your sample application, we don't see this problem, even if I Ctrl+C the primary symmetric_mp program.

Thanks,
/Saurabh

On Mon, Jan 25, 2016 at 5:09 PM, Xu, Qian Q <qian.q.xu@intel.com<mailto:qian.q.xu@intel.com>> wrote:
Mishra
Could you tell me your exact commands when running the primary and secondary processes? Do you change the code when running the DPDK app? Thanks.


Thanks
Qian

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com<mailto:saurabh.globe@gmail.com>]
Sent: Tuesday, January 26, 2016 1:20 AM
To: Xu, Qian Q
Cc: Zhang, Helin; Thomas Monjalon; users@dpdk.org<mailto:users@dpdk.org>; Mcnamara, John

Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi Qian --

>Mishra, could you help answer below questions?
> What’s your i40e’s vmware driver on host?

We are doing PCI-passthrough of i40e.

vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
[root@localhost:~] vmkload_mod -s i40e |grep Version
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015
[root@localhost:~] vmkload_mod -s i40e
vmkload_mod module information
 input file: /usr/lib/vmware/vmkmod/i40e
 Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5 2015
 License: GPL
 Required name-spaces:
  com.vmware.driverAPI#9.2.2.0
  com.vmware.vmkapi#v2_2_0_0
 Parameters:
  skb_mpool_max: int
    Maximum attainable private socket buffer memory pool size for the driver.
  skb_mpool_initial: int
    Driver's minimum private socket buffer memory pool size.
  heap_max: int
    Maximum attainable heap size for the driver.
  heap_initial: int
    Initial heap size allocated for the driver.
  debug: int
    Debug level (0=none,...,16=all)
  RSS: array of int
    Number of Receive-Side Scaling Descriptor Queues: 0 = disable/default, 1-4 = enable (number of cpus)
  VMDQ: array of int
    Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default =I40E_ESX_DEFAULT_VMDQ)
  max_vfs: array of int
    Number of Virtual Functions: 0 = disable (default), 1-128 = enable this many VFs
[root@localhost:~]
>What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?
Hmm, how do I check that? It has two ports.

>Could you tell us your detailed test steps? I only launch one dpdk app, no secondary process, but your case has two processes. When passing through the PCI device, I just pass two ports through to the VM and start the VM in my case.

I was passing one port through to the VM. I launch a primary and a secondary process.

This is the error we got. It seems we were not able to map the VA. So I moved the rte_eal_init() and mbuf allocation to a separate process, and now it seems to work with one primary and six secondary processes.

EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used
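As an aside, one commonly suggested workaround for this kind of secondary-process mapping conflict (not verified on this ESXi setup) is to pin the EAL mapping base in both processes with the standard EAL option --base-virtaddr, so the secondary can reproduce the primary's layout. The binary names and the address value below are purely illustrative:

```shell
# Hypothetical invocation sketch; "primary_app"/"secondary_app" stand in
# for the real binaries, and the address is only an example.
./primary_app   -c2 -n4 --proc-type=primary   --base-virtaddr=0x7f0000000000
./secondary_app -c4 -n4 --proc-type=secondary --base-virtaddr=0x7f0000000000
```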


Thanks,
/Saurabh


On Sun, Jan 24, 2016 at 5:17 PM, Xu, Qian Q <qian.q.xu@intel.com<mailto:qian.q.xu@intel.com>> wrote:
We have tried the i40e driver with FVL4 on ESXi in the 2.2 release testing: PCI pass-through works, but SR-IOV has issues. I didn't see Mishra's error. Now we are stuck setting up the ESXi environment and need VMware's support to set up the env again.
Mishra, could you help answer below questions?

1.       What’s your i40e’s vmware driver on host?

2.       What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?

3.       Could you tell us your detailed test steps? I only launch one dpdk app, no secondary process, but your case has two processes. When passing through the PCI device, I just pass two ports through to the VM and start the VM in my case.

Helin, rombar=0 seems to be for KVM only; on VMware we do not see the issue.


Thanks
Qian

From: Zhang, Helin
Sent: Sunday, January 24, 2016 11:17 AM
To: Saurabh Mishra; Thomas Monjalon; Xu, Qian Q
Cc: users@dpdk.org<mailto:users@dpdk.org>; Mcnamara, John
Subject: RE: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi Saurabh

Were you talking about assigning the NIC PF to the VMware guest?
I remember that there is an issue with assigning a PF to a KVM guest; then rombar=0 is needed for grub.
Thank you for the issue report!

Can Qian tell the details of that?

Regards,
Helin

From: Saurabh Mishra [mailto:saurabh.globe@gmail.com]
Sent: Saturday, January 23, 2016 1:30 AM
To: Thomas Monjalon <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>>
Cc: users@dpdk.org<mailto:users@dpdk.org>; Zhang, Helin <helin.zhang@intel.com<mailto:helin.zhang@intel.com>>; Mcnamara, John <john.mcnamara@intel.com<mailto:john.mcnamara@intel.com>>
Subject: Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()

Hi,

Thanks, Thomas. So this is the error we see with i40e PCI pass-through of the whole NIC:


Secondary Process:

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: Analysing 1024 files
EAL: Mapped segment 0 of size 0x22800000
EAL: Mapped segment 1 of size 0x200000
EAL: Mapped segment 2 of size 0x200000
EAL: Mapped segment 3 of size 0x57600000
EAL: Mapped segment 4 of size 0x400000
EAL: Mapped segment 5 of size 0x400000
EAL: Mapped segment 6 of size 0x400000
EAL: Mapped segment 7 of size 0x200000
EAL: Mapped segment 8 of size 0x2200000
EAL: Mapped segment 9 of size 0x200000
EAL: Mapped segment 10 of size 0x800000
EAL: Mapped segment 11 of size 0x600000
EAL: Mapped segment 12 of size 0x800000
EAL: Mapped segment 13 of size 0xa00000
EAL: Mapped segment 14 of size 0x400000
EAL: Mapped segment 15 of size 0x200000
EAL: Mapped segment 16 of size 0x200000
EAL: Mapped segment 17 of size 0x200000
EAL: Mapped segment 18 of size 0x200000
EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_log_history> already exists
RING: Cannot reserve memory
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78c0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7faa000
EAL: Cannot mmap device resource file /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
EAL: Error - exiting with code: 1
  Cause: Requested device 0000:1b:00.0 cannot be used


# ./dpdk-2.2.0/tools/dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:1b:00.0 'Device 1572' drv=uio_pci_generic unused=i40e

Network devices using kernel driver
===================================
0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3 unused=uio_pci_generic *Active*

Other network devices
=====================
<none>

# grep Huge /proc/meminfo
AnonHugePages:    118784 kB
HugePages_Total:    1024
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
#


Primary Process:

EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 0 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 2 lcore(s)
EAL: Setting up physically contiguous memory...
EAL: cannot open /proc/self/numa_maps, consider that all memory is in socket_id 0
EAL: Ask a virtual area of 0x22800000 bytes
EAL: Virtual area found at 0x7fffd0a00000 (size = 0x22800000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fffd0200000 (size = 0x200000)
EAL: Ask a virtual area of 0x57600000 bytes
EAL: Virtual area found at 0x7fff78a00000 (size = 0x57600000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff78400000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff77800000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff77400000 (size = 0x200000)
EAL: Ask a virtual area of 0x2200000 bytes
EAL: Virtual area found at 0x7fff75000000 (size = 0x2200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff74c00000 (size = 0x200000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff74200000 (size = 0x800000)
EAL: Ask a virtual area of 0x600000 bytes
EAL: Virtual area found at 0x7fff73a00000 (size = 0x600000)
EAL: Ask a virtual area of 0x800000 bytes
EAL: Virtual area found at 0x7fff73000000 (size = 0x800000)
EAL: Ask a virtual area of 0xa00000 bytes
EAL: Virtual area found at 0x7fff72400000 (size = 0xa00000)
EAL: Ask a virtual area of 0x400000 bytes
EAL: Virtual area found at 0x7fff71e00000 (size = 0x400000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71a00000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71600000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff71200000 (size = 0x200000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7fff70e00000 (size = 0x200000)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~1799997 KHz
EAL: Master lcore 1 is ready (tid=f7fe78a0;cpuset=[1])
EAL: PCI device 0000:03:00.0 on NUMA socket 0
EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
EAL:   Not managed by a supported kernel driver, skipped
EAL: PCI device 0000:1b:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1572 rte_i40e_pmd
EAL:   PCI memory mapped at 0x7fff6f1f3000
EAL:   PCI memory mapped at 0x7ffff7fac000
#

On Fri, Jan 22, 2016 at 2:12 AM, Thomas Monjalon <thomas.monjalon@6wind.com<mailto:thomas.monjalon@6wind.com>> wrote:
Hi,

Thanks for asking.
There are a couple of requests hidden in this message.
The maintainer of i40e is Helin (CC'ed).

2016-01-21 14:54, Saurabh Mishra:
> Hi,
>
> We have noticed that i40e if we do PCI-pass through of whole NIC to VM on
> ESXi 6.0, the DPDK exits abruptly in rte_eal_init()?

This looks to be a bug report :)

> We are passing following parameters:
>
>     char *eal_argv[] = {"fakeelf",
>
>                         "-c2",
>
>                         "-n4",
>
>                         "--proc-type=primary",};
>
>
> int ret = rte_eal_init(4, eal_argv);
>
> The code works with Intel '82599ES 10-Gigabit SFI/SFP+ ' adapter in
> PCI-passthrough or SR-IOV mode however i40e it does not work.

You probably have the feeling that i40e does not work as other PMDs,
maybe wondering what are the special extended PCI configs.

> [root@localhost:~] esxcfg-nics -l
>
> [.]
>
> vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
>
> vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
>
>
> We have turned on following config in DPDK:
>
> CONFIG_RTE_PCI_CONFIG=y
>
> CONFIG_RTE_PCI_EXTENDED_TAG="on"
> CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
>
>
> Is there any special handling in DPDK for i40e adapter in terms of config?

So you had no help when reading the code comments neither in the doc.
Indeed the only doc about i40e is the SR-IOV VF page:
        http://dpdk.org/doc/guides/nics/intel_vf.html

Please Helin, check the issue and the lack of documentation.
Thanks




^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init()
  2016-01-26  1:40               ` Xu, Qian Q
@ 2016-01-26  1:48                 ` Saurabh Mishra
  0 siblings, 0 replies; 10+ messages in thread
From: Saurabh Mishra @ 2016-01-26  1:48 UTC (permalink / raw)
  To: Xu, Qian Q; +Cc: Zhang, Helin, users

I haven't tried it on the host without PCI pass-through, and I haven't tried
i40e VF SR-IOV either.

I just tried i40e pass-through, and we hit this failure when the primary
process (our agent process) initializes DPDK and then exits.

If we keep our primary process alive, the secondary processes are able to
attach.
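For context, DPDK secondary processes attach to state the primary set up (the shared runtime configuration and hugepage mappings), so the primary has to stay resident for the whole run. A hedged sketch of the launch order with the stock symmetric_mp example follows; the build path, masks, and sleep are assumptions, not taken from this thread:

```shell
# Start the primary and keep it running in the background.
./examples/symmetric_mp/build/symmetric_mp -c2 -n4 --proc-type=primary \
    -- -p 3 --num-procs=2 --proc-id=0 &
sleep 3   # let the primary finish rte_eal_init() before attaching

# The secondary maps the memory the primary set up; if the primary
# exits, that state is torn down and later secondaries cannot attach.
./examples/symmetric_mp/build/symmetric_mp -c4 -n4 --proc-type=secondary \
    -- -p 3 --num-procs=2 --proc-id=1
```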

Thanks,
/Saurabh

On Mon, Jan 25, 2016 at 5:40 PM, Xu, Qian Q <qian.q.xu@intel.com> wrote:

> Does it work on the host without PCI device pass-through?
>
>
>
> Thanks
>
> Qian
>
>
>
> *From:* Saurabh Mishra [mailto:saurabh.globe@gmail.com]
> *Sent:* Tuesday, January 26, 2016 9:36 AM
>
> *To:* Xu, Qian Q
> *Cc:* Zhang, Helin; Thomas Monjalon; users@dpdk.org; Mcnamara, John
> *Subject:* Re: [dpdk-users] i40e with DPDK exits abruptly in
> rte_eal_init()
>
>
>
> Hi Qian --
>
>
>
> >Mishra
>
> >Could you tell me your exact command when running primary and secondary
> commands?
>
> >Do you change the code when running dpdk app? Thx.
>
>
>
> We are running our apps which is based on the symmetric_mp example code.
> But here are the arguments:
>
>
>
>     char *eal_argv[] = {"fakeelf",
>                         "-c2",
>                         "-m2048",
>                         "-n4",
>                         "--proc-type=primary",
>                         "--",
>                         "-p 3",
>                         "--num-procs=2",
>                         "--proc-id=0",};
>
>     if (core_id != 0) {
>         snprintf(core_arg, sizeof(core_arg), "-c%x", (1 << core_id));
>         eal_argv[1] = core_arg;
>     }
>
>     snprintf(port_arg, sizeof(port_arg), "-p %x", (1 << num_cores) - 1);
>     eal_argv[6] = port_arg;
>
>     snprintf(ncores, sizeof(ncores), "--num-procs=%d", num_cores);
>     eal_argv[7] = ncores;
>     snprintf(pid, sizeof(pid), "--proc-id=%d", core_id);
>     eal_argv[8] = pid;
>
>
>
> We have one primary and seven secondary processes.
>
> What we have seen is that when our primary process is our agent process,
> the secondary processes don't attach. The agent process sets everything
> up and then goes away (exits).
>
>
>
> If we use your sample application, we don't see this problem, even if I
> Ctrl+C the primary symmetric_mp program.
>
>
>
> Thanks,
>
> /Saurabh
>
>
>
> On Mon, Jan 25, 2016 at 5:09 PM, Xu, Qian Q <qian.q.xu@intel.com> wrote:
>
> Mishra
>
> Could you tell me your exact command when running primary and secondary
> commands? Do you change the code when running dpdk app? Thx.
>
>
>
>
>
> Thanks
>
> Qian
>
>
>
> *From:* Saurabh Mishra [mailto:saurabh.globe@gmail.com]
> *Sent:* Tuesday, January 26, 2016 1:20 AM
> *To:* Xu, Qian Q
> *Cc:* Zhang, Helin; Thomas Monjalon; users@dpdk.org; Mcnamara, John
>
>
> *Subject:* Re: [dpdk-users] i40e with DPDK exits abruptly in
> rte_eal_init()
>
>
>
> Hi Qian --
>
>
>
> >Mishra, could you help answer below questions?
>
> *>* What’s your i40e’s vmware driver on host?
>
>
>
> We are doing PCI-passthrough of i40e.
>
>
>
> vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
>
> [root@localhost:~] vmkload_mod -s i40e |grep Version
>
>  Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5
> 2015
>
> [root@localhost:~] vmkload_mod -s i40e
>
> vmkload_mod module information
>
>  input file: /usr/lib/vmware/vmkmod/i40e
>
>  Version: Version 1.3.38, Build: 1331820, Interface: 9.2 Built on: Aug  5
> 2015
>
>  License: GPL
>
>  Required name-spaces:
>
>   com.vmware.driverAPI#9.2.2.0
>
>   com.vmware.vmkapi#v2_2_0_0
>
>  Parameters:
>
>   skb_mpool_max: int
>
>     Maximum attainable private socket buffer memory pool size for the
> driver.
>
>   skb_mpool_initial: int
>
>     Driver's minimum private socket buffer memory pool size.
>
>   heap_max: int
>
>     Maximum attainable heap size for the driver.
>
>   heap_initial: int
>
>     Initial heap size allocated for the driver.
>
>   debug: int
>
>     Debug level (0=none,...,16=all)
>
>   RSS: array of int
>
>     Number of Receive-Side Scaling Descriptor Queues: 0 = disable/default,
> 1-4 = enable (number of cpus)
>
>   VMDQ: array of int
>
>     Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable
> (default =I40E_ESX_DEFAULT_VMDQ)
>
>   max_vfs: array of int
>
>     Number of Virtual Functions: 0 = disable (default), 1-128 = enable
> this many VFs
>
> [root@localhost:~]
>
> >What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?
>
> Hmm..how do I check that? It has two ports.
>
>
>
> >Could you tell us your detailed test steps? I only launch one dpdk app,
> no secondary process, but your case has two processes. When passing
> through the PCI device, I just pass two ports through to the VM and
> start the VM in my case.
>
>
>
> I was passing one port through to the VM. I launch a primary and a
> secondary process.
>
>
>
> This is the error we got. It seems we were not able to map the VA. So I
> moved the rte_eal_init() and mbuf allocation to a separate process, and
> now it seems to work with one primary and six secondary processes.
>
>
>
> EAL:   PCI memory mapped at 0x7ffff7faa000
>
> EAL: Cannot mmap device resource file
> /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
>
> EAL: Error - exiting with code: 1
>
>   Cause: Requested device 0000:1b:00.0 cannot be used
>
>
>
>
>
> Thanks,
>
> /Saurabh
>
>
>
>
>
> On Sun, Jan 24, 2016 at 5:17 PM, Xu, Qian Q <qian.q.xu@intel.com> wrote:
>
> We have tried the i40e driver with FVL4 on ESXi in the 2.2 release
> testing: PCI pass-through works, but SR-IOV has issues. I didn't see
> Mishra's error. Now we are stuck setting up the ESXi environment and
> need VMware's support to set up the env again.
>
> Mishra, could you help answer below questions?
>
> 1.       What’s your i40e’s vmware driver on host?
>
> 2.       What’s the firmware of FVL? Is your FVL 4X10G or 2x10G?
>
> 3.       Could you tell us your detailed test step? I only launch one
> dpdk app, no secondary process, but your case have 2 processes. When pass
> through the PCI device, I just pass through 2 ports to the VM and start the
> VM in my case.
>
>
>
> Helin, rombar=0 seems to be for KVM only; on VMware we do not see the issue.
>
>
>
>
>
> Thanks
>
> Qian
>
>
>
> *From:* Zhang, Helin
> *Sent:* Sunday, January 24, 2016 11:17 AM
> *To:* Saurabh Mishra; Thomas Monjalon; Xu, Qian Q
> *Cc:* users@dpdk.org; Mcnamara, John
> *Subject:* RE: [dpdk-users] i40e with DPDK exits abruptly in
> rte_eal_init()
>
>
>
> Hi Saurabh
>
>
>
> Were you talking about assigning NIC PF to the VMWARE guest?
>
> I remember that there is an issue of assigning PF to KVM guest, then
> rombar=0 is needed for grub.
>
> Thank you for the issue report!
>
>
>
> Can Qian tell the details of that?
>
>
>
> Regards,
>
> Helin
>
>
>
> *From:* Saurabh Mishra [mailto:saurabh.globe@gmail.com
> <saurabh.globe@gmail.com>]
> *Sent:* Saturday, January 23, 2016 1:30 AM
> *To:* Thomas Monjalon <thomas.monjalon@6wind.com>
> *Cc:* users@dpdk.org; Zhang, Helin <helin.zhang@intel.com>; Mcnamara,
> John <john.mcnamara@intel.com>
> *Subject:* Re: [dpdk-users] i40e with DPDK exits abruptly in
> rte_eal_init()
>
>
>
> Hi,
>
>
>
> Thanks, Thomas. So this is the error we see with i40e PCI pass-through
> of the whole NIC:
>
>
>
>
>
> *Secondary Process:*
>
>
>
> EAL: Detected lcore 0 as core 0 on socket 0
>
> EAL: Detected lcore 1 as core 0 on socket 0
>
> EAL: Support maximum 128 logical core(s) by configuration.
>
> EAL: Detected 2 lcore(s)
>
> EAL: Setting up physically contiguous memory...
>
> EAL: Analysing 1024 files
>
> EAL: Mapped segment 0 of size 0x22800000
>
> EAL: Mapped segment 1 of size 0x200000
>
> EAL: Mapped segment 2 of size 0x200000
>
> EAL: Mapped segment 3 of size 0x57600000
>
> EAL: Mapped segment 4 of size 0x400000
>
> EAL: Mapped segment 5 of size 0x400000
>
> EAL: Mapped segment 6 of size 0x400000
>
> EAL: Mapped segment 7 of size 0x200000
>
> EAL: Mapped segment 8 of size 0x2200000
>
> EAL: Mapped segment 9 of size 0x200000
>
> EAL: Mapped segment 10 of size 0x800000
>
> EAL: Mapped segment 11 of size 0x600000
>
> EAL: Mapped segment 12 of size 0x800000
>
> EAL: Mapped segment 13 of size 0xa00000
>
> EAL: Mapped segment 14 of size 0x400000
>
> EAL: Mapped segment 15 of size 0x200000
>
> EAL: Mapped segment 16 of size 0x200000
>
> EAL: Mapped segment 17 of size 0x200000
>
> EAL: Mapped segment 18 of size 0x200000
>
> EAL: memzone_reserve_aligned_thread_unsafe(): memzone <RG_MP_log_history>
> already exists
>
> RING: Cannot reserve memory
>
> EAL: TSC frequency is ~1799997 KHz
>
> EAL: Master lcore 1 is ready (tid=f7fe78c0;cpuset=[1])
>
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
>
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>
> EAL:   Not managed by a supported kernel driver, skipped
>
> EAL: PCI device 0000:1b:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
>
> EAL:   PCI memory mapped at 0x7fff6f1f3000
>
> EAL:   PCI memory mapped at 0x7ffff7faa000
>
> EAL: Cannot mmap device resource file
> /sys/bus/pci/devices/0000:1b:00.0/resource3 to address: 0x7ffff7fac000
>
> EAL: Error - exiting with code: 1
>
>   Cause: Requested device 0000:1b:00.0 cannot be used
>
>
>
>
>
> # ./dpdk-2.2.0/tools/dpdk_nic_bind.py --status
>
>
>
> Network devices using DPDK-compatible driver
>
> ============================================
>
> 0000:1b:00.0 'Device 1572' drv=uio_pci_generic unused=i40e
>
>
>
> Network devices using kernel driver
>
> ===================================
>
> 0000:03:00.0 'VMXNET3 Ethernet Controller' if=eth0 drv=vmxnet3
> unused=uio_pci_generic *Active*
>
>
>
> Other network devices
>
> =====================
>
> <none>
>
>
>
> # grep Huge /proc/meminfo
>
> AnonHugePages:    118784 kB
>
> HugePages_Total:    1024
>
> HugePages_Free:        0
>
> HugePages_Rsvd:        0
>
> HugePages_Surp:        0
>
> Hugepagesize:       2048 kB
>
> #
>
>
>
>
>
> *Primary Process:*
>
>
>
> EAL: Detected lcore 0 as core 0 on socket 0
>
> EAL: Detected lcore 1 as core 0 on socket 0
>
> EAL: Support maximum 128 logical core(s) by configuration.
>
> EAL: Detected 2 lcore(s)
>
> EAL: Setting up physically contiguous memory...
>
> EAL: cannot open /proc/self/numa_maps, consider that all memory is in
> socket_id 0
>
> EAL: Ask a virtual area of 0x22800000 bytes
>
> EAL: Virtual area found at 0x7fffd0a00000 (size = 0x22800000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fffd0600000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fffd0200000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x57600000 bytes
>
> EAL: Virtual area found at 0x7fff78a00000 (size = 0x57600000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff78400000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff77e00000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff77800000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff77400000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x2200000 bytes
>
> EAL: Virtual area found at 0x7fff75000000 (size = 0x2200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff74c00000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x800000 bytes
>
> EAL: Virtual area found at 0x7fff74200000 (size = 0x800000)
>
> EAL: Ask a virtual area of 0x600000 bytes
>
> EAL: Virtual area found at 0x7fff73a00000 (size = 0x600000)
>
> EAL: Ask a virtual area of 0x800000 bytes
>
> EAL: Virtual area found at 0x7fff73000000 (size = 0x800000)
>
> EAL: Ask a virtual area of 0xa00000 bytes
>
> EAL: Virtual area found at 0x7fff72400000 (size = 0xa00000)
>
> EAL: Ask a virtual area of 0x400000 bytes
>
> EAL: Virtual area found at 0x7fff71e00000 (size = 0x400000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71a00000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71600000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff71200000 (size = 0x200000)
>
> EAL: Ask a virtual area of 0x200000 bytes
>
> EAL: Virtual area found at 0x7fff70e00000 (size = 0x200000)
>
> EAL: Requesting 1024 pages of size 2MB from socket 0
>
> EAL: TSC frequency is ~1799997 KHz
>
> EAL: Master lcore 1 is ready (tid=f7fe78a0;cpuset=[1])
>
> EAL: PCI device 0000:03:00.0 on NUMA socket 0
>
> EAL:   probe driver: 15ad:7b0 rte_vmxnet3_pmd
>
> EAL:   Not managed by a supported kernel driver, skipped
>
> EAL: PCI device 0000:1b:00.0 on NUMA socket 0
>
> EAL:   probe driver: 8086:1572 rte_i40e_pmd
>
> EAL:   PCI memory mapped at 0x7fff6f1f3000
>
> EAL:   PCI memory mapped at 0x7ffff7fac000
>
> #
>
>
>
> On Fri, Jan 22, 2016 at 2:12 AM, Thomas Monjalon <
> thomas.monjalon@6wind.com> wrote:
>
> Hi,
>
> Thanks for asking.
> There are a couple of requests hidden in this message.
> The maintainer of i40e is Helin (CC'ed).
>
> 2016-01-21 14:54, Saurabh Mishra:
> > Hi,
> >
> > We have noticed that i40e if we do PCI-pass through of whole NIC to VM on
> > ESXi 6.0, the DPDK exits abruptly in rte_eal_init()?
>
> This looks to be a bug report :)
>
> > We are passing following parameters:
> >
> >     char *eal_argv[] = {"fakeelf",
> >
> >                         "-c2",
> >
> >                         "-n4",
> >
> >                         "--proc-type=primary",};
> >
> >
> > int ret = rte_eal_init(4, eal_argv);
> >
> > The code works with Intel '82599ES 10-Gigabit SFI/SFP+ ' adapter in
> > PCI-passthrough or SR-IOV mode however i40e it does not work.
>
> You probably have the feeling that i40e does not work as other PMDs,
> maybe wondering what are the special extended PCI configs.
>
> > [root@localhost:~] esxcfg-nics -l
> >
> > [.]
> >
> > vmnic6  0000:07:00.0 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c0
> > 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> >
> > vmnic7  0000:07:00.1 i40e        Up   10000Mbps Full   3c:fd:fe:04:11:c2
> > 1500   Intel Corporation Ethernet Controller X710 for 10GbE SFP+
> >
> >
> > We have turned on following config in DPDK:
> >
> > CONFIG_RTE_PCI_CONFIG=y
> >
> > CONFIG_RTE_PCI_EXTENDED_TAG="on"
> > CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y
> >
> >
> > Is there any special handling in DPDK for i40e adapter in terms of
> config?
>
> So you had no help when reading the code comments neither in the doc.
> Indeed the only doc about i40e is the SR-IOV VF page:
>         http://dpdk.org/doc/guides/nics/intel_vf.html
>
> Please Helin, check the issue and the lack of documentation.
> Thanks
>
>
>
>
>
>
>

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2016-01-26  1:48 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-21 22:54 [dpdk-users] i40e with DPDK exits abruptly in rte_eal_init() Saurabh Mishra
2016-01-22 10:12 ` Thomas Monjalon
2016-01-22 17:29   ` Saurabh Mishra
2016-01-24  3:16     ` Zhang, Helin
2016-01-25  1:17       ` Xu, Qian Q
2016-01-25 17:20         ` Saurabh Mishra
2016-01-26  1:09           ` Xu, Qian Q
2016-01-26  1:36             ` Saurabh Mishra
2016-01-26  1:40               ` Xu, Qian Q
2016-01-26  1:48                 ` Saurabh Mishra

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).