From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Ravi Kerur <rkerur@gmail.com>
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] IXGBE, IOMMU DMAR DRHD handling fault issue
Date: Mon, 12 Feb 2018 10:13:58 +0000 [thread overview]
Message-ID: <d0682205-c335-14c4-0b0f-8ebbf58bb9f7@intel.com> (raw)
In-Reply-To: <CAFb4SLAyxYm+iD6wYQBw1fPfnMT7JcjGsR6H+kb7SBYJcFWDKQ@mail.gmail.com>
On 10-Feb-18 5:53 PM, Ravi Kerur wrote:
>
>
> On Sat, Feb 10, 2018 at 2:58 AM, Burakov, Anatoly
> <anatoly.burakov@intel.com> wrote:
>
> On 29-Jan-18 10:35 PM, Ravi Kerur wrote:
>
> Hi Burakov,
>
> When using vfio-pci on the host, both VF and PF interfaces work fine
> with dpdk, i.e. I don't see DMAR fault messages anymore. However,
> when I attach a VF interface to a VM and start DPDK with
> vfio-pci inside the VM, I still see DMAR fault messages on the
> host. Both host and VM are booted with 'intel_iommu=on' in GRUB.
> Ping from the VM with DPDK/vfio-pci doesn't work (I think that's
> expected because of the DMAR faults); however, when the VF
> interface uses the ixgbevf driver, ping works.
>
> Following are some details
>
> /*****************On VM***************/
> dpdk-devbind -s
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:00:07.0 '82599 Ethernet Controller Virtual Function'
> drv=vfio-pci unused=ixgbevf
>
> Network devices using kernel driver
> ===================================
> 0000:03:00.0 'Device 1041' if=eth0 drv=virtio-pci
> unused=vfio-pci *Active*
> 0000:04:00.0 'Device 1041' if=eth1 drv=virtio-pci unused=vfio-pci
> 0000:05:00.0 'Device 1041' if=eth2 drv=virtio-pci unused=vfio-pci
>
> Other network devices
> =====================
> <none>
>
> Crypto devices using DPDK-compatible driver
> ===========================================
> <none>
>
> Crypto devices using kernel driver
> ==================================
> <none>
>
> Other crypto devices
> ====================
> <none>
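>
> For completeness, the VF above was bound to vfio-pci inside the VM
> with the usual steps -- a sketch only, the exact dpdk-devbind path
> and invocation may differ:
>
> modprobe vfio-pci
> dpdk-devbind --bind=vfio-pci 0000:00:07.0   # BDF from the listing above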
>
>
> 00:07.0 Ethernet controller: Intel Corporation 82599 Ethernet
> Controller Virtual Function (rev 01)
> Subsystem: Intel Corporation 82599 Ethernet Controller
> Virtual Function
> Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV-
> VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast
> >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
> Region 0: Memory at fda00000 (64-bit, prefetchable)
> [size=16K]
> Region 3: Memory at fda04000 (64-bit, prefetchable)
> [size=16K]
> Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
> Vector table: BAR=3 offset=00000000
> PBA: BAR=3 offset=00002000
> Capabilities: [a0] Express (v1) Root Complex
> Integrated Endpoint, MSI 00
> DevCap: MaxPayload 128 bytes, PhantFunc 0
> ExtTag- RBE-
> DevCtl: Report errors: Correctable- Non-Fatal-
> Fatal- Unsupported-
> RlxdOrd- ExtTag- PhantFunc- AuxPwr-
> NoSnoop-
> MaxPayload 128 bytes, MaxReadReq 128 bytes
> DevSta: CorrErr- UncorrErr- FatalErr-
> UnsuppReq- AuxPwr- TransPend-
> Capabilities: [100 v1] Advanced Error Reporting
> UESta: DLP- SDES- TLP- FCP- CmpltTO-
> CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> UEMsk: DLP- SDES- TLP- FCP- CmpltTO-
> CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> UESvrt: DLP- SDES- TLP- FCP- CmpltTO-
> CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> CESta: RxErr- BadTLP- BadDLLP- Rollover-
> Timeout- NonFatalErr-
> CEMsk: RxErr- BadTLP- BadDLLP- Rollover-
> Timeout- NonFatalErr-
> AERCap: First Error Pointer: 00, GenCap-
> CGenEn- ChkCap- ChkEn-
> Kernel driver in use: vfio-pci
> Kernel modules: ixgbevf
>
> /***************on Host*************/
> dmesg | grep DMAR
> ...
> [ 978.268143] DMAR: DRHD: handling fault status reg 2
> [ 978.268147] DMAR: [DMA Read] *Request device [04:10.0]* fault
> addr 33a128000 [fault reason 06] PTE Read access is not set
> [ 1286.677726] DMAR: DRHD: handling fault status reg 102
> [ 1286.677730] DMAR: [DMA Read] Request device [04:10.0] fault
> addr fb663000 [fault reason 06] PTE Read access is not set
> [ 1676.436145] DMAR: DRHD: handling fault status reg 202
> [ 1676.436149] DMAR: [DMA Read] Request device [04:10.0] fault
> addr 33a128000 [fault reason 06] PTE Read access is not set
> [ 1734.433649] DMAR: DRHD: handling fault status reg 302
> [ 1734.433652] DMAR: [DMA Read] Request device [04:10.0] fault
> addr 33a128000 [fault reason 06] PTE Read access is not set
> [ 2324.428938] DMAR: DRHD: handling fault status reg 402
> [ 2324.428942] DMAR: [DMA Read] Request device [04:10.0] fault
> addr 7770c000 [fault reason 06] PTE Read access is not set
> [ 2388.553640] DMAR: DRHD: handling fault status reg 502
> [ 2388.553643] DMAR: [DMA Read] *Request device [04:10.0]* fault
> addr 33a128000 [fault reason 06] PTE Read access is not set
>
>
>
> Going back to this, I would like to suggest running a few tests to
> ensure that we gather all the information we can.
>
> First of all, I'm assuming that you're using the native ixgbe Linux
> driver on the host, and that you're only passing the VF device
> through to the VM using VFIO. Is my understanding correct here?
>
> Now, let's forget about iommu=pt and igb_uio for a moment. Boot
> both your host and your VM with iommu=on and intel_iommu=on (or
> whatever command line enables full IOMMU support on both host and
> guest) and run the same tests you've done before. Do you still see
> the same issues?
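>
> On a Debian-style system that would mean something like the
> following (a sketch -- the grub file location and the config
> regeneration command vary by distro):
>
> # /etc/default/grub
> GRUB_CMDLINE_LINUX="intel_iommu=on"
> # then regenerate the grub config and reboot, e.g.:
> update-grub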
>
> It would also be very useful to try the native Linux kernel driver
> on the guest *with traffic forwarding* and see how it works in your
> VM. I would therefore suggest you compile DPDK with PCAP support,
> bind your (VM) interface to the native Linux driver, and use the
> interface via our pcap driver (creating a vdev should do the trick -
> please refer to the PCAP PMD documentation [1], and see the sketch
> below). A simple forwarding test should be enough - just make sure
> to pass traffic to and from DPDK in both cases, and verify that it
> doesn't give you any DMAR errors.
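>
> Something along these lines should do it (a sketch, assuming the VM
> interface bound to the kernel driver shows up as eth1 and testpmd
> is built with PCAP support -- names are illustrative):
>
> # run testpmd on top of the kernel interface via the pcap PMD
> ./testpmd -l 0-1 -n 4 --vdev 'net_pcap0,iface=eth1' -- -i
> testpmd> start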
>
> We can go from there.
>
>
> Let me just give you the tested working/non-working scenarios; some
> of your questions might get answered as well. The test bed is very
> simple: 2 VFs created under an IXGBE PF on the host, with one VF
> interface added to an OVS bridge on the host and the other VF
> interface given to the guest (a setup sketch follows the scenario
> list below). Connectivity between the VFs is tested via ping.
>
> Host and guest -- Kernel 4.9
> Host -- Qemu 2.11.50 (tried both the 2.11 release and the tip of git
> (2.11.50))
> DPDK -- 17.05.1 on host and guest
> Host and guest -- booted with intel_iommu=on in GRUB (which enables
> the IOMMU). I have tried "iommu=on and intel_iommu=on" as well, but
> iommu=on is not needed when intel_iommu=on is set.
>
> Test-scenario-1: Host -- ixgbevf driver, Guest -- ixgbevf driver:
> ping works
> Test-scenario-2: Host -- DPDK vfio-pci driver, Guest -- ixgbevf
> driver: ping works
> Test-scenario-3: Host -- DPDK vfio-pci driver, Guest -- DPDK
> vfio-pci driver: DMAR errors seen on host, ping doesn't work
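>
> For reference, the test bed was set up along these lines (a sketch;
> the PF/VF interface names and the VF PCI address are illustrative):
>
> # on the host: create 2 VFs under the ixgbe PF
> echo 2 > /sys/class/net/enp4s0f0/device/sriov_numvfs
> # add one VF interface to the OVS bridge
> ovs-vsctl add-port ovsbr0 enp4s0f0v0
> # pass the other VF through to the guest (only the relevant qemu
> # option is shown)
> qemu-system-x86_64 ... -device vfio-pci,host=04:10.1 ...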
OK, that makes it clearer, thanks. Does the third scenario work in other
DPDK versions?
>
> DPDK works fine on the host with vfio-pci; however, it has issues
> when used inside the guest. Please let me know if more information
> is needed.
>
> Thanks,
> Ravi
>
> [1] http://dpdk.org/doc/guides/nics/pcap_ring.html
>
> --
> Thanks,
> Anatoly
>
>
--
Thanks,
Anatoly