From: Paul Barrette <paul.barrette@windriver.com>
Date: Mon, 12 Aug 2013 18:22:06 -0400
To: jinho hwang
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] DMAR fault

On 08/12/2013 06:07 PM, jinho hwang wrote:

On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette <paul.barrette@windriver.com> wrote:

On 08/12/2013 04:19 PM, jinho hwang wrote:
Hi All,

I am using the IOMMU to receive packets both from the hypervisor and from a VM; KVM is used for the virtualization. However, after I pass the kernel options (iommu and pci realloc), I cannot receive packets in the hypervisor, although the VF works fine in the VM. When I try to receive packets in the hypervisor, dmesg shows the following:

ixgbe 0000:03:00.1: complete                                    
ixgbe 0000:03:00.1: PCI INT A disabled                          
igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
igb_uio 0000:03:00.1: setting latency timer to 64              
igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X                      
uio device registered with irq 57                              
DRHD: handling fault status reg 2                              
DMAR:[DMA Read] Request device [03:00.1] fault addr b9d0f000                                                                                                                                                      
DMAR:[fault reason 02] Present bit in context entry is clear

03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)
        Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR Mezz
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes                  
        Interrupt: pin A routed to IRQ 38                      
        Region 0: Memory at d9400000 (64-bit, prefetchable) [size=4M]
        Region 2: I/O ports at ece0 [size=32]                  
        Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at <ignored> [disabled]                  
        Capabilities: <access denied>                          
        Kernel driver in use: igb_uio                          
        Kernel modules: ixgbe                                  

We can see that those addresses do not match, so the kernel reports a fault. I am wondering why this happens?
I have seen this happen when VT-d is enabled in the BIOS.  If you are using DPDK 1.4, add "iommu=pt" to your boot line.  Without it, no packets are received.
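
For example, on a GRUB-based system the option can be appended to the kernel command line roughly like this (file location and update command vary by distro; a sketch, not your exact config):

    # /etc/default/grub -- append to the existing kernel command line
    GRUB_CMDLINE_LINUX="... iommu=pt"
    # then regenerate the config and reboot, e.g.
    #   update-grub                                (Debian/Ubuntu)
    #   grub2-mkconfig -o /boot/grub2/grub.cfg     (Fedora/RHEL)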

Pb


One suspicion is the BIOS. I am currently running BIOS version 3.0, but the latest is 6.3.0. Could this affect the matter?

Any help appreciated!

Jinho                                             
                                                               


Paul, 

Thanks. I tried your suggestion, but it behaves as if there were no iommu option on the boot line: I passed intel_iommu=pt and can receive packets from the hypervisor. However, when I start the VM with "-device pci-assign,host=01:00.0", it shows the following message:

qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.  Unable to assign device "(null)"
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device initialization failed.
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device 'kvm-pci-assign' could not be initialized

The device is detached from the kernel and moved to pci-stub. dmesg no longer shows any DMAR fault messages.
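
For reference, the usual pci-stub binding sequence looks roughly like this (the vendor/device ID is illustrative; check "lspci -n -s 03:10.0" for the real one):

    modprobe pci-stub
    echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:03:10.0 > /sys/bus/pci/devices/0000:03:10.0/driver/unbind
    echo 0000:03:10.0 > /sys/bus/pci/drivers/pci-stub/bind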

Any idea? 

Jinho

Jinho,
 you need to specify both "intel_iommu=on iommu=pt"
Pb