DPDK usage discussions
From: "Raman, Sandeep" <sandeepr@hpe.com>
To: Rami Rosen <roszenrami@gmail.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] vfio-pci: probe of 0000:00:07.0 failed with error -22-KVM guest
Date: Thu, 23 Aug 2018 03:05:32 +0000
Message-ID: <TU4PR8401MB13263D40BB2CC9F6975C3254D2370@TU4PR8401MB1326.NAMPRD84.PROD.OUTLOOK.COM>
In-Reply-To: <CAKoUArkPVeNJP33aPx6=tTpQ+BmzF2HREpUCMituoVi+uqULOw@mail.gmail.com>

Hi Rami,

This is now solved after enabling no-IOMMU mode for vfio in the KVM guest, as below. The -22 is -EINVAL: vfio-pci refuses to probe a device that is not part of an IOMMU group, and since the guest has no vIOMMU there are no groups; unsafe no-IOMMU mode bypasses that check:

modprobe vfio enable_unsafe_noiommu_mode=Y   # allow vfio to work without an IOMMU (taints the kernel; no DMA isolation)
modprobe vfio-pci
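
You can confirm the option took effect by reading it back from sysfs (parameter path on kernels with no-IOMMU support; the successful modprobe above implies the RHEL 7.5 guest kernel has it):

cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode   # prints Y when enabled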

I launch the guest via virt-manager, and the VFs are attached via the 'Add PCI Hardware' option.
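
To make the setting persist across reboots, a modprobe options file should work; a minimal sketch, assuming an arbitrary file name vfio-noiommu.conf (any .conf under /etc/modprobe.d is read):

echo "options vfio enable_unsafe_noiommu_mode=1" > /etc/modprobe.d/vfio-noiommu.conf

With the option active the VFs bind cleanly via dpdk-devbind -b vfio-pci 00:07.0 00:08.0, and the devices appear as /dev/vfio/noiommu-* rather than numbered IOMMU groups.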

Regards,
Sandeep.

-----Original Message-----
From: Rami Rosen [mailto:roszenrami@gmail.com] 
Sent: Wednesday, August 22, 2018 6:50 PM
To: Raman, Sandeep <sandeepr@hpe.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] vfio-pci: probe of 0000:00:07.0 failed with error -22-KVM guest

Hi Sandeep,
How do you launch the guest? If it is by qemu, can you post the full command you are using?

Regards,
Rami Rosen

On 16 August 2018 at 12:50, Raman, Sandeep <sandeepr@hpe.com> wrote:
> Hi,
>
> I am trying to bind SR-IOV VFs in a KVM guest with the vfio-pci module. The DPDK version is 17.11. Both the host and guest OS are RHEL 7.5.
>
> On KVM guest:
> [root@rh75vm ~]# cat /proc/cmdline
> BOOT_IMAGE=/vmlinuz-3.10.0-862.11.6.el7.x86_64 
> root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root 
> rd.lvm.lv=rhel/swap LANG=en_US.UTF-8 default_hugepagesz=1GB 
> hugepagesz=1G hugepages=8 console=ttyS0,115200
>
> [root@rh75vm ~]# dpdk-devbind --status-dev net
>
> Network devices using DPDK-compatible driver 
> ============================================
> <none>
>
> Network devices using kernel driver
> ===================================
> 0000:00:03.0 'Virtio network device 1000' if=eth0 drv=virtio-pci unused= *Active*
> 0000:00:07.0 'Ethernet Virtual Function 700 Series 154c' if=ens7 drv=i40evf unused=
> 0000:00:08.0 'Ethernet Virtual Function 700 Series 154c' if=ens8 drv=i40evf unused=
>
> Other Network devices
> =====================
> <none>
>
> [root@rh75vm ~]# modprobe vfio-pci
> [root@rh75vm ~]# lsmod |grep vfio
> vfio_pci               41312  0
> vfio_iommu_type1       22300  0
> vfio                   32695  2 vfio_iommu_type1,vfio_pci
> irqbypass              13503  1 vfio_pci
> [root@rh75vm ~]# dpdk-devbind -b vfio-pci 00:07.0 00:08.0
> Error: bind failed for 0000:00:07.0 - Cannot bind to driver vfio-pci
> Error: unbind failed for 0000:00:07.0 - Cannot open /sys/bus/pci/drivers//unbind
>
> [root@rh75vm ~]# tailf -n3 /var/log/messages
> Aug 16 05:35:44 rh75vm systemd: Starting Session 1 of user root.
> Aug 16 05:36:55 rh75vm kernel: VFIO - User Level meta-driver version: 0.3
> Aug 16 05:37:09 rh75vm kernel: vfio-pci: probe of 0000:00:07.0 failed with error -22
>
> On host:
>
> /proc/cmdline:
>
> BOOT_IMAGE=/vmlinuz-3.10.0-862.11.6.el7.x86_64 
> root=/dev/mapper/rhel-root ro crashkernel=auto rd.lvm.lv=rhel/root 
> rd.lvm.lv=rhel/swap rhgb quiet LANG=en_US.UTF-8 default_hugepagesz=1GB 
> hugepagesz=1G hugepages=20 isolcpus=1-15,17-31 rcu_nocbs=1-15,17-31 
> nohz_full=1-15,17-31 intel_iommu=on iommu=pt selinux=0 enforcing=0 
> processor.max_cstate=0 intel_pstate=disable hpet=disable nosoftlockup 
> intel_idle.max_cstate=0 mce=ignore_ce audit=0
>
> dpdk-devbind --status-dev net
>
> Network devices using DPDK-compatible driver 
> ============================================
> 0000:86:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=
> 0000:86:0a.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=
>
> Network devices using kernel driver
> ===================================
> 0000:02:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno1 drv=tg3 unused=vfio-pci
> 0000:02:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno2 drv=tg3 unused=vfio-pci
> 0000:02:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno3 drv=tg3 unused=vfio-pci
> 0000:02:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe 1657' if=eno4 drv=tg3 unused=vfio-pci
> 0000:86:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=ens5f0 drv=i40e unused=vfio-pci
> 0000:86:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=ens5f1 drv=i40e unused=vfio-pci
>
> Other Network devices
> =====================
>
> dmesg:
>
> [    0.000000] DMAR: IOMMU enabled
> [    2.020878] DMAR: Hardware identity mapping for device 0000:86:00.0
> [    2.020881] DMAR: Hardware identity mapping for device 0000:86:00.1
> [    2.025617] DMAR: Intel(R) Virtualization Technology for Directed I/O
> [    2.048734] iommu: Adding device 0000:86:00.0 to group 59
> [    2.048788] iommu: Adding device 0000:86:00.1 to group 60
>
> [    2.136560] pci 0000:86:00.0: Signaling PME through PCIe PME interrupt
> [    2.136562] pci 0000:86:00.1: Signaling PME through PCIe PME interrupt
> [    2.181458] DMAR: 32bit 0000:01:00.4 uses non-identity mapping
> [    2.181839] DMAR: Setting identity map for device 0000:01:00.4 [0x8a688000 - 0x8a688fff]
> [    2.810190] DMAR: 32bit 0000:5c:00.0 uses non-identity mapping
> [    2.841274] i40e 0000:86:00.0: fw 6.70.48807 api 1.7 nvm 6.00 0x800036cb 1.1747.0
> [    3.082766] i40e 0000:86:00.0: MAC address: 48:df:37:36:1b:44
> [    3.092871] i40e 0000:86:00.0 eth3: NIC Link is Up, 25 Gbps Full Duplex, Requested FEC: None, FEC: None, Autoneg: False, Flow Control: RX/TX
> [    3.102039] i40e 0000:86:00.0: PCI-Express: Speed 8.0GT/s Width x8
> [    3.110709] i40e 0000:86:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 32 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
> [    3.136506] i40e 0000:86:00.1: fw 6.70.48807 api 1.7 nvm 6.00 0x800036cb 1.1747.0
> [    3.373367] i40e 0000:86:00.1: MAC address: 48:df:37:36:1b:45
>
> [    1.510851] pci 0000:86:00.0: [8086:158b] type 00 class 0x020000
> [    1.510871] pci 0000:86:00.0: reg 0x10: [mem 0xf0000000-0xf0ffffff 64bit pref]
> [    1.510889] pci 0000:86:00.0: reg 0x1c: [mem 0xf2000000-0xf2007fff 64bit pref]
> [    1.510903] pci 0000:86:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
> [    1.510955] pci 0000:86:00.0: PME# supported from D0 D3hot D3cold
> [    1.510980] pci 0000:86:00.0: reg 0x184: [mem 0xd7fffa00000-0xd7fffa0ffff 64bit pref]
> [    1.510984] pci 0000:86:00.0: VF(n) BAR0 space: [mem 0xd7fffa00000-0xd7fffdfffff 64bit pref] (contains BAR0 for 64 VFs)
> [    1.510998] pci 0000:86:00.0: reg 0x190: [mem 0xd7ffff00000-0xd7ffff03fff 64bit pref]
> [    1.511000] pci 0000:86:00.0: VF(n) BAR3 space: [mem 0xd7ffff00000-0xd7fffffffff 64bit pref] (contains BAR3 for 64 VFs)
> [    1.511193] pci 0000:86:00.1: [8086:158b] type 00 class 0x020000
> [    1.511212] pci 0000:86:00.1: reg 0x10: [mem 0xf1000000-0xf1ffffff 64bit pref]
> [    1.511229] pci 0000:86:00.1: reg 0x1c: [mem 0xf2008000-0xf200ffff 64bit pref]
> [    1.511243] pci 0000:86:00.1: reg 0x30: [mem 0x00000000-0x0007ffff pref]
> [    1.511296] pci 0000:86:00.1: PME# supported from D0 D3hot D3cold
> [    1.511317] pci 0000:86:00.1: reg 0x184: [mem 0xd7fff600000-0xd7fff60ffff 64bit pref]
> [    1.511320] pci 0000:86:00.1: VF(n) BAR0 space: [mem 0xd7fff600000-0xd7fff9fffff 64bit pref] (contains BAR0 for 64 VFs)
> [    1.511334] pci 0000:86:00.1: reg 0x190: [mem 0xd7fffe00000-0xd7fffe03fff 64bit pref]
> [    1.511337] pci 0000:86:00.1: VF(n) BAR3 space: [mem 0xd7fffe00000-0xd7fffefffff 64bit pref] (contains BAR3 for 64 VFs)
> [    1.538878] pci 0000:86:00.0: BAR 6: assigned [mem 0xf2080000-0xf20fffff pref]
> [    1.538881] pci 0000:86:00.1: BAR 6: no space for [mem size 0x00080000 pref]
> [    1.538884] pci 0000:86:00.1: BAR 6: failed to assign [mem size 0x00080000 pref]
> [    1.538907] pci_bus 0000:86: resource 1 [mem 0xf0000000-0xf20fffff]
> [    1.538910] pci_bus 0000:86: resource 2 [mem 0xd7fff600000-0xd7fffffffff 64bit pref]
>
> I found a few threads reporting this error, where the solution was to add intel_iommu=on and iommu=pt to the kernel command line; as shown above, I have already added these on the host.
> Any pointers on what is causing the -22 error and how to solve it?
>
> http://mails.dpdk.org/archives/dev/2014-December/010455.html
> http://mails.dpdk.org/archives/users/2017-February/001544.html
> http://mails.dpdk.org/archives/users/2017-September/002433.html
> https://software.intel.com/en-us/forums/networking/topic/600159
>
> Thanks,
> Sandeep.
