DPDK patches and discussions
* [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
@ 2015-09-02 22:18 Ale Mansoor
  2015-09-02 22:31 ` Stephen Hemminger
  0 siblings, 1 reply; 3+ messages in thread
From: Ale Mansoor @ 2015-09-02 22:18 UTC (permalink / raw)
  To: dev

 I am getting less than 100 packets per second of throughput between VFs in my Fedora FC20 VM running under ESXi 6.0 with the DPDK l2fwd example (invoked as ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1).
 
Questions:
---------------
 
Q1) With DPDK + SR-IOV under ESXi, should the Linux guest OS use the igb_uio driver or the vfio-pci driver?
Q2) What is the expected l2fwd performance when running DPDK in a Linux guest OS under ESXi with SR-IOV?
Q3) Any idea what may be preventing the vfio-pci driver from binding to the VFs inside the guest instance?
Q4) Why is igb_uio performing so poorly?
 
 
 
My hardware setup:
---------------------------
HP DL380-G8 host with dual Xeon-2670 CPUs
64 GB RAM
Intel 82599 NIC connected to a 10G switch
HP onboard bnx2 NICs
 
VMware Hypervisor ESXi 6.0:
---------------------------------------
On the above hardware I am running the ESXi 6.0 hypervisor with the VMware Intel ixgbe driver (version 3.21.4.3iov).
 
On the ESXi side, SR-IOV is set up as follows: "ixgbe_enabled = 1 options = max_vfs=0,8,8,0"
(16 VFs configured in total)
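
For reference, a typical way to set this from the ESXi shell is via the module parameters; a sketch of the standard esxcli call (the host needs a reboot afterwards for the new VF count to take effect):

esxcli system module parameters set -m ixgbe -p "max_vfs=0,8,8,0"
esxcli system module parameters list -m ixgbe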
 
The VFs are passed through to the two guest instances.
 
The first four VFs go to one VM instance and the next two VFs to the second VM instance; the remaining VFs are unused. I have also tried a single guest instance with four VFs (the rest unused), but it made no difference.
 
 
My VMware guest instances:
----------------------------------------
 
I have tried two different guest OSes, each set up with 8 cores and 8 GB of memory (the memory is reserved/locked for the VM to help with IOMMU pass-through):
 
 
1) Fedora FC20 OS (Linux kernel 3.19.8-100.fc20) 
2) Ubuntu 15.04 Server (Linux kernel 3.19.0-15)
 
On both guest systems the Linux kernel boot line includes "intel_iommu=on iommu=pt".
 
The kernel command line for the Fedora guest is as follows:
 
 
$cat /proc/cmdline
---------------------------
BOOT_IMAGE=/vmlinuz-3.19.8-100.fc20.x86_64 root=UUID=f0cde7fc-d835-4d90-b086-82bf88f58f88 ro console=tty0 console=ttyS0,9600 rhgb quiet net.ifnames=0 biosdevname=0 intel_iommu=on iommu=pt
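
For reference, these options are typically made persistent on Fedora by appending them to GRUB_CMDLINE_LINUX and regenerating the GRUB config; a sketch (the grub.cfg path assumes a BIOS-booted install):

# in /etc/default/grub, append to GRUB_CMDLINE_LINUX:  intel_iommu=on iommu=pt
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot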
 
My dmesg shows the message:
 
"Intel-IOMMU: enabled"
 
Kernel IOMMU config:
------------------------------
 
 cat /boot/config-3.19.8-100.fc20.x86_64 |grep -i iommu
# CONFIG_GART_IOMMU is not set
# CONFIG_CALGARY_IOMMU is not set
CONFIG_IOMMU_HELPER=y
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IOMMU_STRESS is not set
 
 cat /boot/config-3.19.8-100.fc20.x86_64 |grep -i VIRTUAL
CONFIG_FB_VIRTUAL=m
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_VIRTUALIZATION=y


 
 ./dpdk_nic_bind.py --status
----------------------------------------
Network devices using DPDK-compatible driver
============================================
0000:03:00.0 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=ixgbevf,vfio-pci
0000:0b:00.0 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=ixgbevf,vfio-pci
0000:13:00.0 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=ixgbevf,vfio-pci
0000:1b:00.0 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=ixgbevf,vfio-pci
Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth4 drv=e1000 unused=igb_uio,vfio-pci *Active*
Other network devices
=====================
<none>
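
For reference, the manual steps to reach the binding state above would be roughly the following; a sketch, assuming the igb_uio module was built under the default x86_64-native-linuxapp-gcc target:

modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0 0000:0b:00.0 0000:13:00.0 0000:1b:00.0
./dpdk_nic_bind.py --status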
 
The l2fwd program is run with:   ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1

[Using the default igb_uio driver shipped with Fedora FC20]

I get around 70 packets per second with these settings.

When I tried using the vfio-pci driver (again, the default vfio-pci driver from Fedora FC20), I get the following error:
 
./dpdk_nic_bind.py --bind vfio-pci 0000:03:00.0
Error: bind failed for 0000:03:00.0 - Cannot bind to driver vfio-pci
 
dmesg shows:
[  882.685134] vfio-pci: probe of 0000:03:00.0 failed with error -22
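
Side note: a quick way to check whether the guest exposes an IOMMU that vfio-pci could use is to look at the IOMMU groups and the DMAR messages; a sketch:

ls /sys/kernel/iommu_groups/          # empty output means no IOMMU groups, so vfio-pci cannot attach
dmesg | grep -i -e DMAR -e IOMMU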
 
My guest side lspci shows: 
-----------------------------------
 
03:00.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
0b:00.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
13:00.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
1b:00.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
 
My ESXi-side lspci shows:
----------------------------------------------
 
0000:09:00.0 Network controller: Intel Corporation 82599 10 Gigabit Dual Port Network Connection [vmnic4]
0000:09:00.1 Network controller: Intel Corporation 82599 10 Gigabit Dual Port Network Connection [vmnic5]
0000:09:10.1 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_0]
0000:09:10.3 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_1]
0000:09:10.5 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_2]
0000:09:10.7 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_3]
0000:09:11.1 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_4]
0000:09:11.3 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_5]
0000:09:11.5 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_6]
0000:09:11.7 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.9.1_VF_7]
0000:0c:00.0 Network controller: Intel Corporation 82599 10 Gigabit Dual Port Network Connection [vmnic6]
0000:0c:00.1 Network controller: Intel Corporation 82599 10 Gigabit Dual Port Network Connection [vmnic7]
0000:0c:10.0 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_0]
0000:0c:10.2 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_1]
0000:0c:10.4 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_2]
0000:0c:10.6 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_3]
0000:0c:11.0 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_4]
0000:0c:11.2 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_5]
0000:0c:11.4 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_6]
0000:0c:11.6 Network controller: Intel Corporation 82599 Ethernet Controller Virtual Function [PF_0.12.0_VF_7]
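
For reference, the VF-to-uplink mapping can also be listed from the ESXi shell; a sketch, assuming the esxcli sriovnic namespace available in ESXi 6.0 (vmnic4 is the example uplink name from the listing above):

esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic4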
 
---
Regards
 
Ale

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
  2015-09-02 22:18 [dpdk-dev] Poor SRIOV performance with ESXi Linux guest Ale Mansoor
@ 2015-09-02 22:31 ` Stephen Hemminger
  2015-09-03  1:02   ` Ale Mansoor
  0 siblings, 1 reply; 3+ messages in thread
From: Stephen Hemminger @ 2015-09-02 22:31 UTC (permalink / raw)
  To: Ale Mansoor; +Cc: dev

On Wed, 2 Sep 2015 22:18:27 +0000
Ale Mansoor <mansooraa@hotmail.com> wrote:

> I am getting less than 100 packets per second of throughput between VFs in my Fedora FC20 VM running under ESXi 6.0 with the DPDK l2fwd example (invoked as ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1).

That is many orders of magnitude less than expected.

 
> Questions:
> ---------------
>  
> Q1) With DPDK + SR-IOV under ESXi, should the Linux guest OS use the igb_uio driver or the vfio-pci driver?

You have to use igb_uio; there is no emulated IOMMU in ESX.

> Q2) What is the expected l2fwd performance when running DPDK in a Linux guest OS under ESXi with SR-IOV?

That depends on many things. With SR-IOV you should reach 10 Mpps or more.
Did you try running Linux on bare metal on the same hardware first?

> Q3) Any idea what may be preventing the vfio-pci driver from binding to the VFs inside the guest instance?

vfio-pci needs an IOMMU, which is not available in the guest.

> Q4) Why is igb_uio performing so poorly?

Don't blame igb_uio. It is probably something in the system or in VMware.

^ permalink raw reply	[flat|nested] 3+ messages in thread

* Re: [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
  2015-09-02 22:31 ` Stephen Hemminger
@ 2015-09-03  1:02   ` Ale Mansoor
  0 siblings, 0 replies; 3+ messages in thread
From: Ale Mansoor @ 2015-09-03  1:02 UTC (permalink / raw)
  To: Stephen Hemminger, dev

Thank you for your input. Earlier, on this same ESXi server, another similar guest was able to run the kernel-mode ixgbevf driver and push several hundred thousand packets per second through it.

I am trying to get my hands on a second, similar physical system for comparison, but the hardware and the ESXi SR-IOV VF mapping into the guest are most likely not the issue.

Are there any kernel options or DPDK options that could influence performance this significantly under ESXi?

I used the tools/setup.sh script from DPDK version 2.1 to build and set up my DPDK environment.
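
For reference, the manual equivalent of the build and hugepage setup that the script performs is roughly the following; a sketch, assuming the default x86_64-native-linuxapp-gcc target and 2 MB hugepages:

make install T=x86_64-native-linuxapp-gcc
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge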
 
My kernel UIO settings are below:
----------------------------------------------
 
cat /boot/config-3.19.8-100.fc20.x86_64 |grep -i uio
CONFIG_HID_HUION=m
CONFIG_UIO=m
CONFIG_UIO_CIF=m
# CONFIG_UIO_PDRV_GENIRQ is not set
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_MF624 is not set
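
Since CONFIG_UIO is built as a module (=m) here, the uio module has to be loaded before igb_uio can be inserted; a quick sanity check (a sketch):

lsmod | grep -e '^uio' -e igb_uio
dmesg | grep -i igb_uio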
 
--

 
Regards
Ale
 

 
> Date: Wed, 2 Sep 2015 15:31:04 -0700
> From: stephen@networkplumber.org
> To: mansooraa@hotmail.com
> CC: dev@dpdk.org
> Subject: Re: [dpdk-dev] Poor SRIOV performance with ESXi Linux guest
> 
> On Wed, 2 Sep 2015 22:18:27 +0000
> Ale Mansoor <mansooraa@hotmail.com> wrote:
> 
> > I am getting less than 100 packets per second of throughput between VFs in my Fedora FC20 VM running under ESXi 6.0 with the DPDK l2fwd example (invoked as ./l2fwd -c 0xf -n 4 -- -p 0x3 -T 1).
> 
> That is many orders of magnitude less than expected.
> 
>  
> > Questions:
> > ---------------
> >  
> > Q1) With DPDK + SR-IOV under ESXi, should the Linux guest OS use the igb_uio driver or the vfio-pci driver?
> 
> You have to use igb_uio; there is no emulated IOMMU in ESX.
> 
> > Q2) What is the expected l2fwd performance when running DPDK in a Linux guest OS under ESXi with SR-IOV?
> 
> That depends on many things. With SR-IOV you should reach 10 Mpps or more.
> Did you try running Linux on bare metal on the same hardware first?
> 
> > Q3) Any idea what may be preventing the vfio-pci driver from binding to the VFs inside the guest instance?
> 
> vfio-pci needs an IOMMU, which is not available in the guest.
> 
> > Q4) Why is igb_uio performing so poorly?
> 
> Don't blame igb_uio. It is probably something in the system or in VMware.
> 

^ permalink raw reply	[flat|nested] 3+ messages in thread

end of thread, other threads:[~2015-09-03  1:02 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-02 22:18 [dpdk-dev] Poor SRIOV performance with ESXi Linux guest Ale Mansoor
2015-09-02 22:31 ` Stephen Hemminger
2015-09-03  1:02   ` Ale Mansoor
