On Mon, Aug 12, 2013 at 6:22 PM, Paul Barrette <paul.barrette@windriver.com> wrote:

On 08/12/2013 06:07 PM, jinho hwang wrote:

On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette <paul.barrette@windriver.com> wrote:

On 08/12/2013 04:19 PM, jinho hwang wrote:
Hi All,

I am using the IOMMU to receive packets both in the hypervisor and in a VM; KVM is used for virtualization. However, after I pass the kernel options (iommu and pci realloc) on the boot line, I can no longer receive packets in the hypervisor, though the VF works fine in the VM. When I try to receive packets in the hypervisor, dmesg shows the following:

ixgbe 0000:03:00.1: complete                                    
ixgbe 0000:03:00.1: PCI INT A disabled                          
igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
igb_uio 0000:03:00.1: setting latency timer to 64              
igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X                      
uio device registered with irq 57                              
DRHD: handling fault status reg 2                              
DMAR:[DMA Read] Request device [03:00.1] fault addr b9d0f000                                                                                                                                                      
DMAR:[fault reason 02] Present bit in context entry is clear
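
For reference, the kernel options in question would sit on the boot line roughly as follows (illustrative only; the exact line is not quoted in this thread):

    intel_iommu=on pci=realloc

lspci -v for the device shows: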

03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)
        Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR Mezz
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes                  
        Interrupt: pin A routed to IRQ 38                      
        Region 0: Memory at d9400000 (64-bit, prefetchable) [size=4M]
        Region 2: I/O ports at ece0 [size=32]                  
        Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at <ignored> [disabled]                  
        Capabilities: <access denied>                          
        Kernel driver in use: igb_uio                          
        Kernel modules: ixgbe                                  

We can see that the fault address does not fall within any of the device's memory regions, so the IOMMU raises a fault. I am wondering why this happens?
I have seen this happen when VT-d is enabled in the BIOS.  If you are using DPDK 1.4, add "iommu=pt" to your boot line.  Without it, no packets are received.
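On a GRUB2 system, for example, that means appending the option to the kernel command line and regenerating the config (a sketch; file paths vary by distro):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX="... iommu=pt"

    # then regenerate the config and reboot
    grub2-mkconfig -o /boot/grub2/grub.cfg   # or: update-grub (Debian/Ubuntu)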

Pb


One suspect is the BIOS. I am currently on BIOS version 3.0, but the latest is 6.3.0. Could this be a factor?

Any help appreciated!

Jinho


Paul, 

Thanks. I tried your suggestion, but it behaves as if there were no iommu option on the boot line at all. I passed intel_iommu=pt and can now receive packets in the hypervisor. However, when I start the VM with "-device pci-assign,host=03:10.0", QEMU shows the following:

qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.  Unable to assign device "(null)"
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device initialization failed.
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device 'kvm-pci-assign' could not be initialized
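
A quick way to confirm whether the kernel actually initialized the IOMMU (assuming Intel VT-d; if it did not, QEMU's "No IOMMU found" is expected):

    cat /proc/cmdline                  # check which iommu options are active
    dmesg | grep -e DMAR -e IOMMU      # look for "Intel-IOMMU: enabled"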

The device is detached from the kernel driver and moved to pci-stub. dmesg does not show any DMAR fault messages anymore.
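The detach/bind steps were along these lines (illustrative; 8086:10ed is the 82599 VF device ID, verify yours with lspci -n):

    echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:03:10.0 > /sys/bus/pci/devices/0000:03:10.0/driver/unbind
    echo 0000:03:10.0 > /sys/bus/pci/drivers/pci-stub/bind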

Any idea? 

Jinho

Jinho,
you need to specify both:
"intel_iommu=on iommu=pt"
Pb

I tried that as well, but it behaves as if I had only added intel_iommu=on, which means I do not receive any packets in the hypervisor. I also have pci=realloc on the boot line. Could that be interfering?

Jinho