From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ravi Kerur
Date: Mon, 12 Feb 2018 14:00:41 -0800
To: "Burakov, Anatoly"
Cc: dev@dpdk.org
References: <8ddb30a3-1253-ff60-20bb-b735fef5a91c@intel.com>
Subject: Re: [dpdk-dev] IXGBE, IOMMU DMAR DRHD handling fault issue
List-Id: DPDK patches and discussions

On Mon, Feb 12, 2018 at 2:13 AM, Burakov, Anatoly wrote:

> On 10-Feb-18 5:53 PM, Ravi Kerur wrote:
>
>>
>> On Sat, Feb 10, 2018 at 2:58 AM, Burakov, Anatoly
>> <anatoly.burakov@intel.com> wrote:
>>
>> On 29-Jan-18 10:35 PM, Ravi Kerur wrote:
>>
>> Hi Burakov,
>>
>> When using vfio-pci on the host, both VF and PF interfaces work fine
>> with DPDK, i.e. I don't see DMAR fault messages anymore. However,
>> when I attach a VF interface to a VM and start DPDK with vfio-pci
>> inside the VM, I still see DMAR fault messages on the host. Both
>> host and VM are booted with 'intel_iommu=on' on GRUB. Ping from
>> the VM with DPDK/vfio-pci doesn't work (I think that's expected
>> because of the DMAR faults); however, when the VF interface uses the
>> ixgbevf driver, ping works.
>>
>> Following are some details
>>
>> /*****************On VM***************/
>> dpdk-devbind -s
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> 0000:00:07.0 '82599 Ethernet Controller Virtual Function' drv=vfio-pci unused=ixgbevf
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:03:00.0 'Device 1041' if=eth0 drv=virtio-pci unused=vfio-pci *Active*
>> 0000:04:00.0 'Device 1041' if=eth1 drv=virtio-pci unused=vfio-pci
>> 0000:05:00.0 'Device 1041' if=eth2 drv=virtio-pci unused=vfio-pci
>>
>> Other network devices
>> =====================
>>
>> Crypto devices using DPDK-compatible driver
>> ===========================================
>>
>> Crypto devices using kernel driver
>> ==================================
>>
>> Other crypto devices
>> ====================
>>
>>
>> 00:07.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
>>         Subsystem: Intel Corporation 82599 Ethernet Controller Virtual Function
>>         Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
>>         Region 0: Memory at fda00000 (64-bit, prefetchable) [size=16K]
>>         Region 3: Memory at fda04000 (64-bit, prefetchable) [size=16K]
>>         Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
>>                 Vector table: BAR=3 offset=00000000
>>                 PBA: BAR=3 offset=00002000
>>         Capabilities: [a0] Express (v1) Root Complex Integrated Endpoint, MSI 00
>>                 DevCap: MaxPayload 128 bytes, PhantFunc 0
>>                         ExtTag- RBE-
>>                 DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
>>                         RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
>>                         MaxPayload 128 bytes, MaxReadReq 128 bytes
>>                 DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
>>         Capabilities: [100 v1] Advanced Error Reporting
>>                 UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
>>                 UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
>>                 UESvrt: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
>>                 CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
>>                 CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
>>                 AERCap: First Error Pointer: 00, GenCap- CGenEn- ChkCap- ChkEn-
>>         Kernel driver in use: vfio-pci
>>         Kernel modules: ixgbevf
>>
>> /***************on Host*************/
>> dmesg | grep DMAR
>> ...
>> [ 978.268143] DMAR: DRHD: handling fault status reg 2
>> [ 978.268147] DMAR: [DMA Read] *Request device [04:10.0]* fault addr 33a128000 [fault reason 06] PTE Read access is not set
>> [ 1286.677726] DMAR: DRHD: handling fault status reg 102
>> [ 1286.677730] DMAR: [DMA Read] Request device [04:10.0] fault addr fb663000 [fault reason 06] PTE Read access is not set
>> [ 1676.436145] DMAR: DRHD: handling fault status reg 202
>> [ 1676.436149] DMAR: [DMA Read] Request device [04:10.0] fault addr 33a128000 [fault reason 06] PTE Read access is not set
>> [ 1734.433649] DMAR: DRHD: handling fault status reg 302
>> [ 1734.433652] DMAR: [DMA Read] Request device [04:10.0] fault addr 33a128000 [fault reason 06] PTE Read access is not set
>> [ 2324.428938] DMAR: DRHD: handling fault status reg 402
>> [ 2324.428942] DMAR: [DMA Read] Request device [04:10.0] fault addr 7770c000 [fault reason 06] PTE Read access is not set
>> [ 2388.553640] DMAR: DRHD: handling fault status reg 502
>> [ 2388.553643] DMAR: [DMA Read] *Request device [04:10.0]* fault addr 33a128000 [fault reason 06] PTE Read access is not set
>>
>>
>> Going back to this, I would like to suggest running a few tests to
>> ensure that we have all the information we can gather.
>>
>> First of all, I'm assuming that you're using the native ixgbe Linux
>> driver on the host, and that you're only passing through the VF
>> device to the VM using VFIO. Is my understanding correct here?
>>
>> Now, let's forget about iommu=pt and igb_uio for a moment. Boot
>> both your host and your VM with iommu=on and intel_iommu=on (or
>> whatever command line enables full IOMMU support on both host and
>> guest) and do the same tests you've done before. Do you still see
>> your issues?
>>
>> It would also be very useful to try the native Linux kernel driver
>> on the guest *with traffic forwarding* and see how it works in your
>> VM. I would therefore suggest you compile DPDK with PCAP support,
>> bind your (VM) interface to the native Linux driver, and use the
>> interface via our pcap driver (creating a vdev should do the trick -
>> please refer to the PCAP PMD documentation [1]). A simple forwarding
>> test should be enough - just make sure to pass traffic to and from
>> DPDK in both cases, and that it doesn't give you any DMAR errors.
>>
>> We can go from there.
>>
>>
>> Let me just give you what has been tested and the working/non-working
>> scenarios. Some of your questions might get answered as well. The test
>> bed is very simple: two VFs are created under the IXGBE PF on the host,
>> one VF interface is added to an OVS bridge on the host, and the other VF
>> interface is given to the guest. Connectivity between the VFs is tested
>> via ping.
>>
>> Host and guest -- Kernel 4.9
>> Host -- Qemu 2.11.50 (tried both released 2.11 and tip of the git (2.11.50))
>> DPDK -- 17.05.1 on host and guest
>> Host and guest -- booted with GRUB intel_iommu=on (which enables the IOMMU).
>> Have tried with "iommu=on and intel_iommu=on" as well, but iommu=on is not
>> needed when intel_iommu=on is set.
>>
>> Test-scenario-1: Host -- ixgbe_vf driver, Guest -- ixgbe_vf driver: ping works
>> Test-scenario-2: Host -- DPDK vfio-pci driver, Guest -- ixgbe_vf driver: ping works
>> Test-scenario-3: Host -- DPDK vfio-pci driver, Guest -- DPDK vfio-pci driver: DMAR errors seen on host, ping doesn't work
>>
>
> OK, that makes it clearer, thanks. Does the third scenario work in other
> DPDK versions?

No. I tried 16.11: the same issue on the guest, while it works fine on the host.
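
For reference, a minimal sketch of the PCAP-based forwarding test suggested
above, assuming DPDK 17.05 built with the make-based build system and that the
VM interface, once returned to the kernel driver, shows up as eth1 (the
interface name is an assumption; the PCI address is taken from the
dpdk-devbind output earlier in the thread):

    # enable the PCAP PMD and rebuild (it is off by default in 17.05;
    # requires the libpcap development headers)
    sed -i 's/CONFIG_RTE_LIBRTE_PMD_PCAP=n/CONFIG_RTE_LIBRTE_PMD_PCAP=y/' config/common_base
    make install T=x86_64-native-linuxapp-gcc

    # inside the VM: give the VF back to the native kernel driver
    dpdk-devbind --bind=ixgbevf 0000:00:07.0

    # run testpmd on top of the kernel interface through a pcap vdev;
    # with a single port, chained topology loops traffic back out the same port
    ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-1 -n 4 \
        --vdev 'net_pcap0,iface=eth1' -- -i --port-topology=chained

If this forwards traffic without producing DMAR messages on the host, that
would point at the guest vfio-pci/DPDK mapping path rather than at the VF
assignment itself.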
>
>
>> DPDK works fine on the host with vfio-pci; however, it has issues when
>> used inside the guest. Please let me know if more information is needed.
>>
>> Thanks,
>> Ravi
>>
>> [1] http://dpdk.org/doc/guides/nics/pcap_ring.html
>>
>>
>> --
>> Thanks,
>> Anatoly
>>
>
> --
> Thanks,
> Anatoly
>
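
For completeness, a sketch of the host-side test-bed setup described in this
thread, assuming the ixgbe PF netdev is named enp4s0f0 (the name and the VF
addresses are assumptions; 04:10.0 matches the device reported in the DMAR log
above):

    # host GRUB: enable the IOMMU, then regenerate grub.cfg and reboot
    #   GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on"

    # create two VFs under the ixgbe PF
    echo 2 > /sys/class/net/enp4s0f0/device/sriov_numvfs

    # bind one VF to vfio-pci for host-side DPDK use
    modprobe vfio-pci
    dpdk-devbind --bind=vfio-pci 0000:04:10.0

    # pass the other VF through to the guest with QEMU's vfio-pci device,
    # e.g.  -device vfio-pci,host=04:10.2   (VF address is an assumption)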