From: "Burakov, Anatoly"
To: Ravi Kerur
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] IXGBE, IOMMU DMAR DRHD handling fault issue
Date: Mon, 12 Feb 2018 10:13:58 +0000

On 10-Feb-18 5:53 PM, Ravi Kerur wrote:
> On Sat, Feb 10, 2018 at 2:58 AM, Burakov, Anatoly wrote:
> > On 29-Jan-18 10:35 PM, Ravi Kerur wrote:
> > > Hi Burakov,
> > >
> > > When using vfio-pci on the host, both VF and PF interfaces work
> > > fine with DPDK, i.e. I don't see DMAR fault messages anymore.
> > > However, when I attach a VF interface to a VM and start DPDK with
> > > vfio-pci inside the VM, I still see DMAR fault messages on the
> > > host. Both host and VM are booted with 'intel_iommu=on' on GRUB.
> > > Ping from the VM with DPDK/vfio-pci doesn't work (I think that is
> > > expected because of the DMAR faults); however, when the VF
> > > interface uses the ixgbevf driver, ping works.
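A quick sanity check that is worth doing at this stage is confirming
that the guest-side vfio-pci is really sitting behind an IOMMU and was
not loaded in unsafe no-IOMMU mode. A minimal sketch, assuming the VF
shows up as 0000:00:07.0 inside the guest and a stock sysfs layout:

   # a device behind a (v)IOMMU resolves to /sys/kernel/iommu_groups/<n>
   readlink /sys/bus/pci/devices/0000:00:07.0/iommu_group
   # 'Y' here means vfio-pci was allowed to run without an IOMMU
   cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode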
> > > Following are some details:
> > >
> > > /*****************On VM***************/
> > > dpdk-devbind -s
> > >
> > > Network devices using DPDK-compatible driver
> > > ============================================
> > > 0000:00:07.0 '82599 Ethernet Controller Virtual Function' drv=vfio-pci unused=ixgbevf
> > >
> > > Network devices using kernel driver
> > > ===================================
> > > 0000:03:00.0 'Device 1041' if=eth0 drv=virtio-pci unused=vfio-pci *Active*
> > > 0000:04:00.0 'Device 1041' if=eth1 drv=virtio-pci unused=vfio-pci
> > > 0000:05:00.0 'Device 1041' if=eth2 drv=virtio-pci unused=vfio-pci
> > >
> > > Other network devices
> > > =====================
> > > <none>
> > >
> > > Crypto devices using DPDK-compatible driver
> > > ===========================================
> > > <none>
> > >
> > > Crypto devices using kernel driver
> > > ==================================
> > > <none>
> > >
> > > Other crypto devices
> > > ====================
> > > <none>
> > >
> > > 00:07.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
> > >         Subsystem: Intel Corporation 82599 Ethernet Controller Virtual Function
> > >         Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
> > >         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
> > >         Region 0: Memory at fda00000 (64-bit, prefetchable) [size=16K]
> > >         Region 3: Memory at fda04000 (64-bit, prefetchable) [size=16K]
> > >         Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
> > >                 Vector table: BAR=3 offset=00000000
> > >                 PBA: BAR=3 offset=00002000
> > >         Capabilities: [a0] Express (v1) Root Complex Integrated Endpoint, MSI 00
> > >                 DevCap: MaxPayload 128 bytes, PhantFunc 0
> > >                         ExtTag- RBE-
> > >                 DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
> > >                         RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
> > >                         MaxPayload 128 bytes, MaxReadReq 128 bytes
> > >                 DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
> > >         Capabilities: [100 v1] Advanced Error Reporting
> > >                 UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> > >                 UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> > >                 UESvrt: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
> > >                 CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
> > >                 CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
> > >                 AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
> > >         Kernel driver in use: vfio-pci
> > >         Kernel modules: ixgbevf
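One thing I'd double-check on the QEMU side: for guest-side vfio-pci to
give real protection, the guest needs an emulated IOMMU, and the VF has
to be attached behind it. A minimal sketch of the relevant flags
(assumptions: Q35 machine type, and 04:10.0 being the VF's address on
the host, as per your dmesg output below):

   qemu-system-x86_64 -machine q35,accel=kvm,kernel-irqchip=split \
       -device intel-iommu,intremap=on,caching-mode=on \
       -device vfio-pci,host=04:10.0 \
       ...

caching-mode=on is what lets QEMU shadow the guest's IOMMU mappings
into the host VFIO container; without a vIOMMU in the picture, the
device only ever sees guest-physical addresses.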
> > > /***************on Host*************/
> > > dmesg | grep DMAR
> > > ...
> > > [  978.268143] DMAR: DRHD: handling fault status reg 2
> > > [  978.268147] DMAR: [DMA Read] *Request device [04:10.0]* fault addr 33a128000 [fault reason 06] PTE Read access is not set
> > > [ 1286.677726] DMAR: DRHD: handling fault status reg 102
> > > [ 1286.677730] DMAR: [DMA Read] Request device [04:10.0] fault addr fb663000 [fault reason 06] PTE Read access is not set
> > > [ 1676.436145] DMAR: DRHD: handling fault status reg 202
> > > [ 1676.436149] DMAR: [DMA Read] Request device [04:10.0] fault addr 33a128000 [fault reason 06] PTE Read access is not set
> > > [ 1734.433649] DMAR: DRHD: handling fault status reg 302
> > > [ 1734.433652] DMAR: [DMA Read] Request device [04:10.0] fault addr 33a128000 [fault reason 06] PTE Read access is not set
> > > [ 2324.428938] DMAR: DRHD: handling fault status reg 402
> > > [ 2324.428942] DMAR: [DMA Read] Request device [04:10.0] fault addr 7770c000 [fault reason 06] PTE Read access is not set
> > > [ 2388.553640] DMAR: DRHD: handling fault status reg 502
> > > [ 2388.553643] DMAR: [DMA Read] *Request device [04:10.0]* fault addr 33a128000 [fault reason 06] PTE Read access is not set
> >
> > Going back to this, I would like to suggest running a few tests to
> > ensure that we have all the information we can gather.
> >
> > First of all, I'm assuming that you're using the native ixgbe Linux
> > driver on the host, and that you're only passing through the VF
> > device to the VM using VFIO. Is my understanding correct here?
> >
> > Now, let's forget about iommu=pt and igb_uio for a moment. Boot
> > both your host and your VM with iommu=on and intel_iommu=on (or
> > whatever command line enables full IOMMU support on both host and
> > guest) and do the same tests you've done before. Do you still see
> > your issues?
> >
> > It would also be very useful to try the native Linux kernel driver
> > on the guest *with traffic forwarding* and see how it works in your
> > VM. Therefore I would suggest you compile DPDK with PCAP support,
> > bind your (VM) interface to the native Linux driver, and use the
> > interface via our pcap driver (creating a vdev should do the trick -
> > please refer to the PCAP PMD documentation [1]). A simple forwarding
> > test should be enough - just make sure to pass traffic to and from
> > DPDK in both cases, and check that it doesn't give you any DMAR
> > errors.
> >
> > We can go from there.
> 
> Let me just give you the tested working/non-working scenarios; some
> of your questions might get answered as well. The test bed is very
> simple, with two VFs created under an IXGBE PF on the host: one VF
> interface is added to an OVS bridge on the host and the other VF
> interface is given to the guest. Connectivity between the VFs is
> tested via ping.
> 
> Host and guest -- kernel 4.9
> Host -- QEMU 2.11.50 (tried both released 2.11 and tip of git (2.11.50))
> DPDK -- 17.05.1 on host and guest
> Host and guest -- booted with GRUB intel_iommu=on (which enables the
> IOMMU). I have tried with "iommu=on and intel_iommu=on" as well, but
> iommu=on is not needed when intel_iommu=on is set.
> 
> Test-scenario-1: Host -- ixgbevf driver, guest -- ixgbevf driver: ping works
> Test-scenario-2: Host -- DPDK vfio-pci driver, guest -- ixgbevf driver: ping works
> Test-scenario-3: Host -- DPDK vfio-pci driver, guest -- DPDK vfio-pci driver: DMAR errors seen on host, ping doesn't work

OK, that makes it clearer, thanks. Does the third scenario work in
other DPDK versions?

> DPDK works fine on the host with vfio-pci; however, it has issues
> when used inside the guest. Please let me know if more information is
> needed.
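Also, to make the PCAP test I suggested earlier concrete - a minimal
sketch, not a definitive command line (the core list, channel count and
the 'eth1' interface name are placeholders for your VM's setup):

   # build DPDK 17.05 with CONFIG_RTE_LIBRTE_PMD_PCAP=y, then e.g.:
   ./testpmd -l 0-1 -n 4 --vdev 'net_pcap0,iface=eth1' -- --forward-mode=io

This keeps the NIC on the native kernel driver while still passing
traffic through DPDK, so it should tell us whether the faults are tied
to VFIO specifically.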
> Thanks,
> Ravi
> 
> > [1] http://dpdk.org/doc/guides/nics/pcap_ring.html
> >
> > --
> > Thanks,
> > Anatoly

-- 
Thanks,
Anatoly