From: jinho hwang
Date: Mon, 12 Aug 2013 18:25:50 -0400
To: Paul Barrette
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] DMAR fault
List-Id: patches and discussions about DPDK

On Mon, Aug 12, 2013 at 6:22 PM, Paul Barrette <paul.barrette@windriver.com> wrote:
>
> On 08/12/2013 06:07 PM, jinho hwang wrote:
>
> On Mon, Aug 12, 2013 at 4:28 PM, Paul Barrette <paul.barrette@windriver.com> wrote:
>>
>> On 08/12/2013 04:19 PM,
>> jinho hwang wrote:
>>
>> Hi All,
>>
>> I am using the IOMMU to receive packets both from the hypervisor and from a VM; KVM is used for virtualization. However, after I pass the kernel options (iommu and pci realloc), I cannot receive packets in the hypervisor, although the VF works fine in the VM. When I try to receive packets in the hypervisor, dmesg shows the following:
>>
>> ixgbe 0000:03:00.1: complete
>> ixgbe 0000:03:00.1: PCI INT A disabled
>> igb_uio 0000:03:00.1: PCI INT A -> GSI 38 (level, low) -> IRQ 38
>> igb_uio 0000:03:00.1: setting latency timer to 64
>> igb_uio 0000:03:00.1: irq 87 for MSI/MSI-X
>> uio device registered with irq 57
>> DRHD: handling fault status reg 2
>> DMAR:[DMA Read] Request device [03:00.1] fault addr b9d0f000
>> DMAR:[fault reason 02] Present bit in context entry is clear
>>
>> 03:00.1 Ethernet controller: Intel Corporation 82599EB 10 Gigabit Dual Port Backplane Connection (rev 01)
>>         Subsystem: Intel Corporation Ethernet X520 10GbE Dual Port KX4-KR Mezz
>>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>>         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+ >SERR- <PERR- INTx-
>>         Latency: 0, Cache Line Size: 64 bytes
>>         Interrupt: pin A routed to IRQ 38
>>         Region 0: Memory at d9400000 (64-bit, prefetchable) [size=4M]
>>         Region 2: I/O ports at ece0 [size=32]
>>         Region 4: Memory at d9bfc000 (64-bit, prefetchable) [size=16K]
>>         Expansion ROM at <ignored> [disabled]
>>         Capabilities: <access denied>
>>         Kernel driver in use: igb_uio
>>         Kernel modules: ixgbe
>>
>> We can see that those addresses do not match, so the kernel takes a fault. I am wondering why this happens.
>>
>> I have seen this happen when VT-d is enabled in the BIOS. If you are using dpdk 1.4, add "iommu=pt" to your boot line. Without it, no packets are received.
>>
>> Pb
>>
>> One suspicion is the BIOS. I am currently using BIOS version 3.0, but the latest is 6.3.0. Could this affect the matter?
>>
>> Any help appreciated!
>>
>> Jinho
>
> Paul,
>
> Thanks. I tried your suggestion, but it behaves as if there were no iommu option on the boot line at all. I passed intel_iommu=pt and can receive packets from the hypervisor. However, when I start the VM with "-device pci-assign,host=01:00.0", it shows the following message:
>
> qemu-system-x86_64: -device pci-assign,host=03:10.0: No IOMMU found.  Unable to assign device "(null)"
> qemu-system-x86_64: -device pci-assign,host=03:10.0: Device initialization failed.
> qemu-system-x86_64: -device pci-assign,host=03:10.0: Device 'kvm-pci-assign' could not be initialized
>
> The device is detached from the kernel and moved to pci-stub. dmesg no longer shows any DMAR fault messages.
>
> Any idea?
>
> Jinho
>
> Jinho,
> you need to specify both
>
> "intel_iommu=on iommu=pt"
>
> Pb

I tried that as well, but it behaves as if I had only added intel_iommu=on, which means I do not receive any packets from the hypervisor. I also have pci realloc added. Could this affect it?

Jinho
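Since much of this thread hinges on whether both options actually reached the running kernel, a quick sanity check is to look at the kernel command line. A minimal sketch (it tests a sample string; on a real host you would read /proc/cmdline instead, as the comment notes):

```shell
# Check that both IOMMU options are present on the kernel command line.
# On a real host, use the live value instead:  cmdline=$(cat /proc/cmdline)
cmdline="ro quiet intel_iommu=on iommu=pt pci=realloc"

for opt in "intel_iommu=on" "iommu=pt"; do
  # Pad with spaces so we match whole options, not substrings.
  case " $cmdline " in
    *" $opt "*) echo "$opt: present" ;;
    *)          echo "$opt: MISSING" ;;
  esac
done
```

If either line reports MISSING, the bootloader entry was not updated (or the wrong entry booted), which would explain "works as if I only add intel_iommu=on"-style behavior.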

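On the pci-assign side, the VF has to be unbound from its host driver and attached to pci-stub before QEMU starts. A sketch of the usual sysfs sequence for the VF in this thread (03:10.0); the vendor/device ID "8086 10ed" is assumed here to be the 82599 VF and should be verified with lspci -nn. The script only echoes the commands, since the sysfs paths exist only on a real host:

```shell
# Dry run of binding VF 0000:03:10.0 to pci-stub for KVM device assignment.
# BDF and IDS are assumptions for illustration; verify with: lspci -nn
BDF="0000:03:10.0"
IDS="8086 10ed"   # assumed 82599 VF vendor/device ID

emit() { printf '%s\n' "$1"; }

# Teach pci-stub about the VF's ID, detach it from its current driver,
# then bind it to pci-stub so QEMU's pci-assign can claim it.
emit "echo $IDS > /sys/bus/pci/drivers/pci-stub/new_id"
emit "echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind"
emit "echo $BDF > /sys/bus/pci/drivers/pci-stub/bind"
```

Even with the device correctly on pci-stub, the "No IOMMU found" error persists until the host boots with intel_iommu=on, since KVM device assignment requires an active IOMMU domain.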