DPDK patches and discussions
From: "DING, TAO" <td559h@att.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Subject: [dpdk-dev] unable to bind to vfio_pci inside RHEL virtual machine.
Date: Mon, 23 May 2016 13:39:20 +0000	[thread overview]
Message-ID: <7BC361771D33CB489E50B2EC34311AA001913829@MISOUT7MSGUSRCB.ITServices.sbc.com> (raw)


Hello dpdk dev,

Do you know if vfio_pci can be bound to a network interface from within a Red Hat virtual machine? I read in the documentation that igb_uio should not be used because it is not stable (http://people.redhat.com/~pmatilai/dpdk-guide/index.html); however, I cannot use the vfio_pci driver from inside the VM.

Currently I am working on a project migrating a network packet capture application into virtual machines so that it can be hosted on a cloud. My intent is to use SR-IOV to ensure data is sent from the physical NIC to the vNIC at line speed, and to use DPDK inside the VM to read from the vNIC with good performance, because libpcap does not perform well inside a VM.
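For reference, the VFs on the host were created through the standard SR-IOV sysfs interface. This is a guarded sketch, not the exact commands used; the interface name p3p1 is taken from the host listing further down, and may differ on other machines:

```shell
#!/bin/sh
# Sketch: create 4 VFs on the ixgbe PF via the standard sriov_numvfs sysfs file.
# "p3p1" is the PF interface name from the host output; adjust as needed.
NUMVFS=/sys/class/net/p3p1/device/sriov_numvfs
if [ -w "$NUMVFS" ]; then
    echo 4 > "$NUMVFS"
    echo "created $(cat "$NUMVFS") VFs"
else
    echo "sriov_numvfs not available on this machine"
fi
```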

Following the DPDK instructions, I was able to set up SR-IOV and bind vfio_pci to the Virtual Functions on the host. Once the VM starts, the Virtual Functions bind to vfio-pci automatically on the host. The following is the output from the host.
Option: 22


Network devices using DPDK-compatible driver
============================================
0000:04:10.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:10.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:11.4 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=
0000:04:11.6 'X540 Ethernet Controller Virtual Function' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em1 drv=tg3 unused=vfio-pci
0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em2 drv=tg3 unused=vfio-pci
0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em3 drv=tg3 unused=vfio-pci
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe' if=em4 drv=tg3 unused=vfio-pci
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p1 drv=ixgbe unused=vfio-pci *Active*
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=p3p2 drv=ixgbe unused=vfio-pci
0000:04:10.0 'X540 Ethernet Controller Virtual Function' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:04:10.2 'X540 Ethernet Controller Virtual Function' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:04:11.0 'X540 Ethernet Controller Virtual Function' if=p3p1_4 drv=ixgbevf unused=vfio-pci
0000:04:11.2 'X540 Ethernet Controller Virtual Function' if=p3p1_5 drv=ixgbevf unused=vfio-pci
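The host-side bind above was driven through the setup menu (option 22); in dpdk-16.04 the underlying script is tools/dpdk_nic_bind.py. The following is a guarded sketch of the equivalent direct invocation, with one VF address taken from the listing above; the relative path to the script is an assumption:

```shell
#!/bin/sh
# Sketch: bind one VF to vfio-pci with DPDK 16.04's bind script.
# 0000:04:10.4 is a VF address from the host listing above.
BIND=./tools/dpdk_nic_bind.py
if [ -x "$BIND" ] && [ -d /sys/bus/pci/drivers/vfio-pci ]; then
    "$BIND" --bind=vfio-pci 0000:04:10.4
    "$BIND" --status
else
    echo "dpdk_nic_bind.py or vfio-pci not available; skipping"
fi
```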

I repeated the same setup within the VM, which has 4 Virtual Functions assigned to it, but I could not successfully bind any of the network devices to vfio-pci. I followed different suggestions from the web, with no luck. (However, I was able to bind the UIO driver to the network devices inside the VM.)
One difference I noticed between the VM and the host is the IOMMU state: on the host, /sys/kernel/iommu_groups/ is NOT empty, but on the VM it is empty. I rebooted the VM several times; still no luck.
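A quick way to compare the two environments is to count the IOMMU groups the kernel has populated, since vfio-pci will not bind a device that does not belong to an IOMMU group. This is a diagnostic sketch, not part of my original setup:

```shell
#!/bin/sh
# Count populated IOMMU groups; an empty /sys/kernel/iommu_groups is
# consistent with vfio-pci refusing to bind. Also show whether intel_iommu
# was passed on the kernel command line.
groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d 2>/dev/null | wc -l)
echo "iommu groups: $groups"
grep -o 'intel_iommu=[a-z]*' /proc/cmdline || echo "intel_iommu not set on kernel cmdline"
```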

The following is the output from inside the VM. The vfio_pci driver is visible to lsmod, but not to driverctl.

[root@hn14vm3 tools]# driverctl -v list-devices | grep -i net
0000:00:03.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:08.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:09.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0b.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0c.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
0000:00:0d.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
[root@hn14vm3 tools]# lsmod |grep uio
igb_uio                13224  0
uio                    19259  1 igb_uio
[root@hn14vm3 tools]# lsmod |grep vfio
vfio_pci               36735  0
vfio_iommu_type1       17632  0
vfio                   25291  2 vfio_iommu_type1,vfio_pci
[root@hn14vm3 tools]# driverctl set-override 0000:00:03.0 vfio_pci
driverctl: failed to bind device 0000:00:03.0 to driver vfio_pci
[root@hn14vm3 tools]# driverctl set-override 0000:00:03.0 igb_uio
[root@hn14vm3 tools]# driverctl -v list-devices | grep -i net
0000:00:03.0 igb_uio [*] (X540 Ethernet Controller Virtual Function)
0000:00:08.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:09.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0b.0 ixgbevf (X540 Ethernet Controller Virtual Function)
0000:00:0c.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
0000:00:0d.0 e1000 (82540EM Gigabit Ethernet Controller (QEMU Virtual Machine))
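Since driverctl only reports a generic failure, one can also try the manual sysfs equivalent of its override (driverctl itself works via the PCI driver_override file) to surface the kernel's actual error in dmesg. This is a guarded sketch using the first VF address from the VM listing above:

```shell
#!/bin/sh
# Sketch: manual sysfs bind to vfio-pci (what driverctl does underneath),
# so the kernel's error message can be inspected with dmesg.
DEV=0000:00:03.0
if [ -e "/sys/bus/pci/devices/$DEV" ] && [ -d /sys/bus/pci/drivers/vfio-pci ]; then
    echo vfio-pci > "/sys/bus/pci/devices/$DEV/driver_override"
    echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind" 2>/dev/null
    echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/bind
    dmesg | tail -n 5
else
    echo "device or vfio-pci driver not present; skipping"
fi
```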

Here is some information about the host and the VM:
OS -- both host and VM run Red Hat Enterprise Linux 7.2, on a Dell R630 with Intel 10G NICs (and one 1G interface).
DPDK version -- dpdk-16.04.
The VM has 4 cores and 10G of RAM.
The VM was created with virt-manager using the default settings, except that "IDE Disk 1/Advanced options/Performance options/Cache mode" was set to "writeback".

Any pointers would be appreciated.
Thanks a lot for your time.





Tao Ding


Thread overview: 2+ messages
2016-05-23 13:39 DING, TAO [this message]
2016-05-24 10:32 ` Sergio Gonzalez Monroy
