DPDK patches and discussions
* [dpdk-dev]  DMAR fault
@ 2013-08-12 20:19 jinho hwang
  2013-08-12 20:28 ` Paul Barrette
  0 siblings, 1 reply; 5+ messages in thread
From: jinho hwang @ 2013-08-12 20:19 UTC (permalink / raw)
  To: dev

Hi All,

I am using the IOMMU to receive packets both in the hypervisor and in a VM;
KVM is used for the virtualization. However, after I pass the kernel options
(iommu and pci realloc), I cannot receive packets in the hypervisor, although
the VF works fine in the VM. When I try to receive packets in the hypervisor,
dmesg shows the following:

ixgbe 0000:03:00.1: complete
ixgbe 0000:03:00.1: PCI INT A disabled
igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
igb_uio 0000:03:00.1: setting latency timer to 64
igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
uio device registered with irq 57
DRHD: handling fault status reg 2
DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*
DMAR:[fault reason 02] Present bit in context entry is clear
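
The lspci output for that port was captured with something like the command
below (exact flags reconstructed from memory, so treat them as approximate):

    lspci -vv -s 03:00.1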

03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port
Backplane Connection (rev 01)
        Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR
Mezz
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
<TAbort- <MAbort+ >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 38
        Region 0: Memory at *d9400000* (64-bit, prefetchable) [size=4M]
        Region 2: I/O ports at ece0 [size=32]
        Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
        Expansion ROM at <ignored> [disabled]
        Capabilities: <access denied>
        Kernel driver in use: igb_uio
        Kernel modules: ixgbe

We can see that those addresses do not match, so the kernel reports a fault.
I am wondering why this happens.

One suspect is the BIOS. I am currently running BIOS version 3.0, while the
latest is 6.3.0. Could this be part of the problem?
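
For reference, the boot line I am currently using looks roughly like this
(reconstructed from memory, so treat the exact spelling of the options as
approximate):

    ... ro root=<root-device> intel_iommu=on pci=realloc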

Any help appreciated!

Jinho


* Re: [dpdk-dev] DMAR fault
  2013-08-12 20:19 [dpdk-dev] DMAR fault jinho hwang
@ 2013-08-12 20:28 ` Paul Barrette
  2013-08-12 22:07   ` jinho hwang
  0 siblings, 1 reply; 5+ messages in thread
From: Paul Barrette @ 2013-08-12 20:28 UTC (permalink / raw)
  To: jinho hwang; +Cc: dev


On 08/12/2013 04:19 PM, jinho hwang wrote:
> Hi All,
>
> I am using iommu to receive packets both from hypervisor and from VM. 
> KVM is used for the virtualization. However, after I deliver the 
> kernel options (iommu and pci realloc), I can not receive packets in 
> hypervisor, but VF works fine in VM. When I tried to receive packets 
> in hypervisor, dmesg shows the following:
>
> ixgbe 0000:03:00.1: complete
> ixgbe 0000:03:00.1: PCI INT A disabled
> igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
> igb_uio 0000:03:00.1: setting latency timer to 64
> igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
> uio device registered with irq 57
> DRHD: handling fault status reg 2
> DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*
> DMAR:[fault reason 02] Present bit in context entry is clear
>
> 03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual 
> Port Backplane Connection (rev 01)
>         Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port 
> KX4-KR Mezz
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- 
> ParErr- Stepping- SERR- FastB2B- DisINTx+
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- 
> <TAbort- <MAbort+ >SERR- <PERR- INTx-
>         Latency: 0, Cache Line Size: 64 bytes
>         Interrupt: pin A routed to IRQ 38
>         Region 0: Memory at *d9400000* (64-bit, prefetchable) [size=4M]
>         Region 2: I/O ports at ece0 [size=32]
>         Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
>         Expansion ROM at <ignored> [disabled]
>         Capabilities: <access denied>
>         Kernel driver in use: igb_uio
>         Kernel modules: ixgbe
>
> We can see those addresses are not matched. So the kernel got fault. I 
> am wondering why this happens?
I have seen this happen when VT-d is enabled in the BIOS. If you are
using DPDK 1.4, add "iommu=pt" to your boot line. Without it, no
packets are received.
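
Once the box is rebooted, a quick sanity check along these lines (nothing
DPDK-specific, just looking at what the kernel sees) should confirm the
option took effect:

    cat /proc/cmdline                  # boot line of the running kernel should now contain iommu=pt
    dmesg | grep -i -e dmar -e iommu   # see what the kernel reports about DMAR/IOMMU pass-through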

Pb
>
> One suspicion for this is BIOS. I am currently using BIOS version 3.0, 
> but the latest is 6.3.0. Does this affect the matter?
>
> Any help appreciated!
>
> Jinho
>


* Re: [dpdk-dev] DMAR fault
  2013-08-12 20:28 ` Paul Barrette
@ 2013-08-12 22:07   ` jinho hwang
  2013-08-12 22:22     ` Paul Barrette
  0 siblings, 1 reply; 5+ messages in thread
From: jinho hwang @ 2013-08-12 22:07 UTC (permalink / raw)
  To: Paul Barrette; +Cc: dev

On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette
<paul.barrette@windriver.com> wrote:

>
> On 08/12/2013 04:19 PM, jinho hwang wrote:
>
> Hi All,
>
> I am using iommu to receive packets both from hypervisor and from VM. KVM
> is used for the virtualization. However, after I deliver the kernel options
> (iommu and pci realloc), I can not receive packets in hypervisor, but VF
> works fine in VM. When I tried to receive packets in hypervisor, dmesg
> shows the following:
>
> ixgbe 0000:03:00.1: complete
> ixgbe 0000:03:00.1: PCI INT A disabled
> igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
> igb_uio 0000:03:00.1: setting latency timer to 64
> igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
> uio device registered with irq 57
> DRHD: handling fault status reg 2
> DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*
> DMAR:[fault reason 02] Present bit in context entry is clear
>
> 03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual
> Port Backplane Connection (rev 01)
>         Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR
> Mezz
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
> ParErr- Stepping- SERR- FastB2B- DisINTx+
>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> <TAbort- <MAbort+ >SERR- <PERR- INTx-
>         Latency: 0, Cache Line Size: 64 bytes
>         Interrupt: pin A routed to IRQ 38
>         Region 0: Memory at *d9400000* (64-bit, prefetchable) [size=4M]
>         Region 2: I/O ports at ece0 [size=32]
>         Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
>         Expansion ROM at <ignored> [disabled]
>         Capabilities: <access denied>
>         Kernel driver in use: igb_uio
>         Kernel modules: ixgbe
>
> We can see those addresses are not matched. So the kernel got fault. I am
> wondering why this happens?
>
> I have seen this happen when VT-d is enabled in the bios.  If you are
> using dpdk 1.4, add "iommu=pt" to your boot line.  Without it, no packets
> are received.
>
> Pb
>
>
>  One suspicion for this is BIOS. I am currently using BIOS version 3.0,
> but the latest is 6.3.0. Does this affect the matter?
>
>  Any help appreciated!
>
>  Jinho
>
>
>
>
Paul,

Thanks. I tried your suggestion, but it behaves as if there were no IOMMU
option on the boot line at all. I passed intel_iommu=pt, and I can receive
packets in the hypervisor. However, when I started the VM with "-device
pci-assign,host=01:00.0", I got the following message:

qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.
 Unable to assign device "(null)"
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device initialization
failed.
qemu-system-x86_64: -device pci-assign,host=03:10.0: Device
'kvm-pci-assign' could not be initialized

The device is detached from the kernel and moved to pci-stub, and dmesg does
not show any DMAR fault messages anymore.
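
For reference, the detach followed the usual pci-stub steps, roughly as below
(the vendor:device ID and BDF are just what I believe they are on my box,
8086:10ed being the 82599 VF):

    echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
    echo 0000:03:10.0 > /sys/bus/pci/devices/0000:03:10.0/driver/unbind
    echo 0000:03:10.0 > /sys/bus/pci/drivers/pci-stub/bind
    qemu-system-x86_64 -enable-kvm ... -device pci-assign,host=03:10.0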

Any idea?

Jinho


* Re: [dpdk-dev] DMAR fault
  2013-08-12 22:07   ` jinho hwang
@ 2013-08-12 22:22     ` Paul Barrette
  2013-08-12 22:25       ` jinho hwang
  0 siblings, 1 reply; 5+ messages in thread
From: Paul Barrette @ 2013-08-12 22:22 UTC (permalink / raw)
  To: jinho hwang; +Cc: dev


On 08/12/2013 06:07 PM, jinho hwang wrote:
>
> On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette
> <paul.barrette@windriver.com> wrote:
>
>
>     On 08/12/2013 04:19 PM, jinho hwang wrote:
>>     Hi All,
>>
>>     I am using iommu to receive packets both from hypervisor and from
>>     VM. KVM is used for the virtualization. However, after I deliver
>>     the kernel options (iommu and pci realloc), I can not receive
>>     packets in hypervisor, but VF works fine in VM. When I tried to
>>     receive packets in hypervisor, dmesg shows the following:
>>
>>     ixgbe 0000:03:00.1: complete
>>     ixgbe 0000:03:00.1: PCI INT A disabled
>>     igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
>>     igb_uio 0000:03:00.1: setting latency timer to 64
>>     igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
>>     uio device registered with irq 57
>>     DRHD: handling fault status reg 2
>>     DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*
>>     DMAR:[fault reason 02] Present bit in context entry is clear
>>
>>     03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit
>>     Dual Port Backplane Connection (rev 01)
>>             Subsystem: Intel Corporation Ethernet X520 10GbE Dual
>>     Port KX4-KR Mezz
>>             Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV-
>>     VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>>             Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast
>>     >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
>>             Latency: 0, Cache Line Size: 64 bytes
>>             Interrupt: pin A routed to IRQ 38
>>             Region 0: Memory at *d9400000* (64-bit, prefetchable)
>>     [size=4M]
>>             Region 2: I/O ports at ece0 [size=32]
>>             Region 4: Memory at d9bfc000 (64-bit, prefetchable)
>>     [size=16K]
>>             Expansion ROM at <ignored> [disabled]
>>             Capabilities: <access denied>
>>             Kernel driver in use: igb_uio
>>             Kernel modules: ixgbe
>>
>>     We can see those addresses are not matched. So the kernel got
>>     fault. I am wondering why this happens?
>     I have seen this happen when VT-d is enabled in the bios.  If you
>     are using dpdk 1.4, add "iommu=pt" to your boot line.  Without it,
>     no packets are received.
>
>     Pb
>
>>
>>     One suspicion for this is BIOS. I am currently using BIOS version
>>     3.0, but the latest is 6.3.0. Does this affect the matter?
>>
>>     Any help appreciated!
>>
>>     Jinho
>>
>
>
> Paul,
>
> thanks. I tried your suggestion, but it works like no iommu command in 
> boot line. I passed intel_iommu=pt, and receive packets from 
> hypervisor. However, when I started VM with "-device 
> pci-assign,host=01:00.0", it shows the following message:
>
> qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found. 
>  Unable to assign device "(null)"
> qemu-system-x86_64: -device pci-assign,host=03:10.0: Device 
> initialization failed.
> qemu-system-x86_64: -device pci-assign,host=03:10.0: Device 
> 'kvm-pci-assign' could not be initialized
>
> The device is detached from kernel, and move to pci-stub. dmesg does 
> not show any DMAR fault message anymore.
>
> Any idea?
>
> Jinho

Jinho,
  you need to specify both:

    "intel_iommu=on iommu=pt"

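In grub that ends up looking something like this (everything apart from the
two IOMMU options is just a placeholder for whatever is already on your
kernel line), followed by a reboot:

    kernel /vmlinuz-<version> ro root=<root-device> ... intel_iommu=on iommu=pt
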
Pb


* Re: [dpdk-dev] DMAR fault
  2013-08-12 22:22     ` Paul Barrette
@ 2013-08-12 22:25       ` jinho hwang
  0 siblings, 0 replies; 5+ messages in thread
From: jinho hwang @ 2013-08-12 22:25 UTC (permalink / raw)
  To: Paul Barrette; +Cc: dev

On Mon, Aug 12, 2013 at 6:22 PM, Paul Barrette
<paul.barrette@windriver.com> wrote:

>
> On 08/12/2013 06:07 PM, jinho hwang wrote:
>
>
> On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette <
> paul.barrette@windriver.com> wrote:
>
>>
>> On 08/12/2013 04:19 PM, jinho hwang wrote:
>>
>> Hi All,
>>
>> I am using iommu to receive packets both from hypervisor and from VM. KVM
>> is used for the virtualization. However, after I deliver the kernel options
>> (iommu and pci realloc), I can not receive packets in hypervisor, but VF
>> works fine in VM. When I tried to receive packets in hypervisor, dmesg
>> shows the following:
>>
>> ixgbe 0000:03:00.1: complete
>> ixgbe 0000:03:00.1: PCI INT A disabled
>> igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
>> igb_uio 0000:03:00.1: setting latency timer to 64
>> igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
>> uio device registered with irq 57
>> DRHD: handling fault status reg 2
>> DMAR:[DMA Read] Request device [03:00.1] fault addr *b9d0f000*
>> DMAR:[fault reason 02] Present bit in context entry is clear
>>
>> 03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual
>> Port Backplane Connection (rev 01)
>>         Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR
>> Mezz
>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
>> ParErr- Stepping- SERR- FastB2B- DisINTx+
>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
>> <TAbort- <MAbort+ >SERR- <PERR- INTx-
>>         Latency: 0, Cache Line Size: 64 bytes
>>         Interrupt: pin A routed to IRQ 38
>>         Region 0: Memory at *d9400000* (64-bit, prefetchable) [size=4M]
>>         Region 2: I/O ports at ece0 [size=32]
>>         Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
>>         Expansion ROM at <ignored> [disabled]
>>         Capabilities: <access denied>
>>         Kernel driver in use: igb_uio
>>         Kernel modules: ixgbe
>>
>> We can see those addresses are not matched. So the kernel got fault. I am
>> wondering why this happens?
>>
>>  I have seen this happen when VT-d is enabled in the bios.  If you are
>> using dpdk 1.4, add "iommu=pt" to your boot line.  Without it, no packets
>> are received.
>>
>> Pb
>>
>>
>>  One suspicion for this is BIOS. I am currently using BIOS version 3.0,
>> but the latest is 6.3.0. Does this affect the matter?
>>
>>  Any help appreciated!
>>
>>  Jinho
>>
>>
>>
>>
> Paul,
>
>  thanks. I tried your suggestion, but it works like no iommu command in
> boot line. I passed intel_iommu=pt, and receive packets from hypervisor.
> However, when I started VM with "-device pci-assign,host=01:00.0", it shows
> the following message:
>
>  qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.
>  Unable to assign device "(null)"
>  qemu-system-x86_64: -device pci-assign,host=03:10.0: Device
> initialization failed.
> qemu-system-x86_64: -device pci-assign,host=03:10.0: Device
> 'kvm-pci-assign' could not be initialized
>
>  The device is detached from kernel, and move to pci-stub. dmesg does not
> show any DMAR fault message anymore.
>
>  Any idea?
>
>  Jinho
>
>
> Jinho,
>  you need to specify both
>
> " intel_iommu=on iommu=pt"
>
> Pb
>

I tried that as well, but it behaves as if I had only added intel_iommu=on,
which means I do not receive any packets in the hypervisor. I also have
pci=realloc added. Could that be a factor?

Jinho
