DPDK patches and discussions
* [dpdk-dev] unable to bind to vfio_pci inside RHEL virtual machine.
@ 2016-05-23 13:39 DING, TAO
  2016-05-24 10:32 ` Sergio Gonzalez Monroy
  0 siblings, 1 reply; 2+ messages in thread
From: DING, TAO @ 2016-05-23 13:39 UTC (permalink / raw)
  To: dev


Hello dpdk dev,

Do you know whether vfio_pci can be bound to a network interface from within a Red Hat virtual machine? I read in the docs that igb_uio should not be used because it is not stable (http://people.redhat.com/~pmatilai/dpdk-guide/index.html), yet I cannot get the vfio_pci driver to work from inside the VM.

I am currently working on a project to migrate a network packet capture application into virtual machines so that it can be hosted in the cloud. My intent is to use SR-IOV so that data moves from the physical NIC to the vNIC at line rate, and to use DPDK inside the VM to read data from the vNIC with good performance, because libpcap does not perform well inside a VM.

Following the DPDK instructions, I was able to set up SR-IOV on the host and bind vfio-pci to the Virtual Functions. Once the VM starts, the Virtual Functions assigned to it bind to vfio-pci on the host automatically. The following is the output from the host; a sketch of the commands follows the listing.
Option: 22


Network devices using DPDK-compatible driver
============================================
0000:04:10.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:10.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:11.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:11.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em1 drv=tg3 unused=vfio-pci
0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em2 drv=tg3 unused=vfio-pci
0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em3 drv=tg3 unused=vfio-pci
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em4 drv=tg3 unused=vfio-pci
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p1 drv=ixgbe unused=vfio-pci *Active*
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p2 drv=ixgbe unused=vfio-pci
0000:04:10.0 'X540 Ethernet Controller Virtual Function' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:04:10.2 'X540 Ethernet Controller Virtual Function' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:04:11.0 'X540 Ethernet Controller Virtual Function' if=p3p1_4 drv=ixgbevf unused=vfio-pci
0000:04:11.2 'X540 Ethernet Controller Virtual Function' if=p3p1_5 drv=ixgbevf unused=vfio-pci
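
For reference, the host-side sequence looks roughly like this. This is only a sketch: the interface name p3p1, the VF count, and the PCI addresses are from my setup, and dpdk_nic_bind.py is the bind script shipped in the dpdk-16.04 tools/ directory.

# create 8 Virtual Functions on the first X540 port (ixgbe PF)
echo 8 > /sys/class/net/p3p1/device/sriov_numvfs
# load vfio-pci and bind four of the new VFs to it
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:04:10.4 0000:04:10.6 0000:04:11.4 0000:04:11.6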

I repeated the same setup within the VM, which has 4 Virtual Functions assigned to it, but I could not successfully bind any of the network devices to vfio-pci. I followed various suggestions from the web, with no luck. (I was, however, able to bind the UIO driver to the network devices inside the VM.)
One difference I noticed between the VM and the host is the IOMMU state: on the host, /sys/kernel/iommu_groups/ is NOT empty, but on the VM it is empty. I rebooted the VM several times, with no luck.
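
For what it's worth, this is roughly how I compared the two (a sketch; the group numbers on the host will vary):

# on the host: groups are populated and the kernel was booted with the IOMMU on
ls /sys/kernel/iommu_groups/
grep -o 'intel_iommu=[^ ]*' /proc/cmdline
# inside the VM: the same directory is empty
ls /sys/kernel/iommu_groups/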

The following is the output from inside the VM. The vfio_pci module shows up in lsmod, but driverctl fails to bind devices to it.

[root@hn14vm3 tools]# driverctl -v list-devices | grep -i net
0000:00:03.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:08.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:09.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0b.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0c.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
0000:00:0d.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
[root@hn14vm3 tools]# lsmod |grep uio
igb_uio                13224  0
uio                    19259  1 igb_uio
[root@hn14vm3 tools]# lsmod |grep vfio
vfio_pci               36735  0
vfio_iommu_type1       17632  0
vfio                   25291  2 vfio_iommu_type1,vfio_pci
[root@hn14vm3 tools]# driverctl set-override 0000:00:03.0 vfio_pci
driverctl: failed to bind device 0000:00:03.0 to driver vfio_pci
[root@hn14vm3 tools]# driverctl set-override 0000:00:03.0 igb_uio
[root@hn14vm3 tools]# driverctl -v list-devices | grep -i net
0000:00:03.0 igb_uio [*] (X540 Ethernet Controller Virtual Function)
0000:00:08.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:09.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0b.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0c.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
0000:00:0d.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
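
Among the web suggestions I tried was binding by hand through sysfs instead of driverctl; it fails the same way. A rough sketch of that sequence, assuming the X540 Virtual Function vendor/device ID 8086:1515:

modprobe vfio-pci
# allow vfio-pci to claim X540 Virtual Function devices
echo 8086 1515 > /sys/bus/pci/drivers/vfio-pci/new_id
# detach the VF from ixgbevf and hand it to vfio-pci
echo 0000:00:03.0 > /sys/bus/pci/drivers/ixgbevf/unbind
echo 0000:00:03.0 > /sys/bus/pci/drivers/vfio-pci/bind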

Here is some information about the host and the VM.
OS: Red Hat Enterprise Linux 7.2 on both host and VM. Hardware: Dell R630 with Intel 10 GbE NICs (and one 1 GbE interface).
DPDK version: dpdk-16.04.
The VM has 4 cores and 10 GB of RAM.
The VM was created with virt-manager using the default settings, except that "IDE Disk 1 / Advanced options / Performance options / Cache mode" was set to writeback.

Any pointers would be appreciated.
Thanks a lot for your time.

Tao Ding


* Re: [dpdk-dev] unable to bind to vfio_pci inside RHEL virtual machine.
  2016-05-23 13:39 [dpdk-dev] unable to bind to vfio_pci inside RHEL virtual machine DING, TAO
@ 2016-05-24 10:32 ` Sergio Gonzalez Monroy
  0 siblings, 0 replies; 2+ messages in thread
From: Sergio Gonzalez Monroy @ 2016-05-24 10:32 UTC (permalink / raw)
  To: DING, TAO, dev

Hi Tao,

On 23/05/2016 14:39, DING, TAO wrote:
> Hello dpdk dev,
>
> Do you know whether vfio_pci can be bound to a network interface from within a Red Hat virtual machine? I read in the docs that igb_uio should not be used because it is not stable (http://people.redhat.com/~pmatilai/dpdk-guide/index.html), yet I cannot get the vfio_pci driver to work from inside the VM.
>
> [project background, SR-IOV setup, and host device listings snipped]
>
> I repeated the same setup within the VM, which has 4 Virtual Functions assigned to it, but I could not successfully bind any of the network devices to vfio-pci. I followed various suggestions from the web, with no luck. (I was, however, able to bind the UIO driver to the network devices inside the VM.)
> One difference I noticed between the VM and the host is the IOMMU state: on the host, /sys/kernel/iommu_groups/ is NOT empty, but on the VM it is empty. I rebooted the VM several times, with no luck.

AFAIK VFIO is not supported in a guest.
https://lists.gnu.org/archive/html/qemu-devel/2015-11/msg04284.html

So you are left with two options, VFIO no-IOMMU or igb_uio, neither of
them safe.
If you have Linux kernel 4.5+ and DPDK 16.04+, you could use VFIO
no-IOMMU inside the VM. Otherwise, you are left with igb_uio.
IMHO the main difference is that igb_uio is an out-of-tree kernel module.
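
An untested sketch of the no-IOMMU route inside the VM (the module
parameter comes from the kernel 4.5 VFIO changes):

# guest kernel 4.5+, DPDK 16.04+
modprobe vfio enable_unsafe_noiommu_mode=1
modprobe vfio-pci
./tools/dpdk_nic_bind.py --bind=vfio-pci 0000:00:03.0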

Sergio

