* [dpdk-dev] DPDK doesn't work with iommu=pt
@ 2014-09-26 9:13 Hiroshi Shimamoto
2014-09-26 14:47 ` Choi, Sy Jong
2014-09-28 7:48 ` Zhang, Jerry
0 siblings, 2 replies; 8+ messages in thread
From: Hiroshi Shimamoto @ 2014-09-26 9:13 UTC
To: dev; +Cc: Hayato Momma
I have encountered an issue where DPDK doesn't work with "iommu=pt intel_iommu=on"
on an HP ProLiant DL380p Gen8 server. I'm using the following environment:
HW: ProLiant DL380p Gen8
CPU: E5-2697 v2
OS: RHEL7
kernel: kernel-3.10.0-123 and the latest kernel 3.17-rc6+
DPDK: v1.7.1-53-gce5abac
NIC: 82599ES
When booting with "iommu=pt intel_iommu=on", I get the message below and
no packets are handled.
[ 120.809611] dmar: DRHD: handling fault status reg 2
[ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
DMAR:[fault reason 02] Present bit in context entry is clear
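(For reference, these options can be set via GRUB2 on RHEL7; a minimal sketch, assuming a standard BIOS-boot layout, since the grub.cfg path differs on UEFI systems. Append the options to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the config and reboot:)
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot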
How to reproduce:
just run testpmd
# ./testpmd -c 0xf -n 4 -- -i
Configuring Port 0 (socket 0)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7ffff54eafc0 hw_ring=0x7ffff4200000 dma_addr=0xaa000000
PMD: ixgbe_dev_tx_queue_setup(): Using full-featured tx code path
PMD: ixgbe_dev_tx_queue_setup(): - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
PMD: ixgbe_dev_tx_queue_setup(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are not satisfied, Scattered Rx is requested, or RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
testpmd> start
io packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=2
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=8 hthresh=8 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=0 wthresh=0
TX RS bit threshold=0 - TXQ flags=0x0
then ping from another box to this server:
# ping6 -I eth2 ff02::1
I got the error message below and no packets are received.
I couldn't see any increase in the RX/TX counts in the testpmd statistics:
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 6 RX-missed: 0 RX-bytes: 732
RX-badcrc: 0 RX-badlen: 0 RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 6 RX-missed: 0 RX-bytes: 732
RX-badcrc: 0 RX-badlen: 0 RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
The fault addr in the error message must be the RX DMA descriptor ring.
Error message:
[ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
Log in testpmd:
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
I think the NIC received a packet into its FIFO and tried to put it into memory via DMA.
Before starting the DMA, the NIC fetches the target address from the RX descriptors pointed to by the RDBA register.
But the access to the RX descriptors failed in the IOMMU unit, which reported the fault to the kernel:
DMAR:[fault reason 02] Present bit in context entry is clear
The error message suggests there is no valid entry for the device in the IOMMU.
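(As a quick sanity check, the device's IOMMU group can be listed through sysfs; a sketch, assuming the kernel exposes IOMMU groups there, which 3.10-era kernels do:)
# find /sys/kernel/iommu_groups/ -type l | grep 21:00.0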
I think the following issue is very similar, but switching to Ubuntu 14.04 didn't fix it in my case.
http://thread.gmane.org/gmane.comp.networking.dpdk.devel/2281
I tried Ubuntu 14.04.1 and got the error below.
[ 199.710191] dmar: DRHD: handling fault status reg 2
[ 199.710896] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr 7c24df000
[ 199.710896] DMAR:[fault reason 06] PTE Read access is not set
So far I have seen this issue on the HP ProLiant DL380p Gen8 only.
Any ideas?
Has anyone else noticed this issue?
Note: we're planning to run SR-IOV and a DPDK app in the same box.
The box has 2 NICs: one for SR-IOV, passed through to a VM, and one (no SR-IOV) for the DPDK app on the host.
thanks,
Hiroshi
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-26 9:13 [dpdk-dev] DPDK doesn't work with iommu=pt Hiroshi Shimamoto
@ 2014-09-26 14:47 ` Choi, Sy Jong
2014-09-28 7:35 ` Alex Markuze
2014-09-28 7:48 ` Zhang, Jerry
1 sibling, 1 reply; 8+ messages in thread
From: Choi, Sy Jong @ 2014-09-26 14:47 UTC
To: Hiroshi Shimamoto, dev; +Cc: Hayato Momma
Hi Shimamoto-san,
There are a lot of sightings related to "DMAR:[fault reason 06] PTE Read access is not set":
https://www.mail-archive.com/kvm@vger.kernel.org/msg106573.html
This might be related to the IOMMU and kernel code.
Here is what we know:
1) Disabling VT-d in the BIOS removes the symptom.
2) Switching to another OS distribution also removes the symptom.
3) On different HW the symptom does not appear; in my case, switching from an engineering board to an EPSD board.
Regards,
Choi, Sy Jong
Platform Application Engineer
-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hiroshi Shimamoto
Sent: Friday, September 26, 2014 5:14 PM
To: dev@dpdk.org
Cc: Hayato Momma
Subject: [dpdk-dev] DPDK doesn't work with iommu=pt
[quoted original message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-26 14:47 ` Choi, Sy Jong
@ 2014-09-28 7:35 ` Alex Markuze
2014-09-28 23:53 ` Hiroshi Shimamoto
0 siblings, 1 reply; 8+ messages in thread
From: Alex Markuze @ 2014-09-28 7:35 UTC
To: Choi, Sy Jong; +Cc: dev, Hayato Momma
iommu=pt effectively disables the IOMMU for the kernel; the IOMMU is
enabled only for KVM.
http://lwn.net/Articles/329174/
Basically, unless you are running KVM, you can remove both options for
the same effect.
On the other hand, if you do have KVM and you do want the IOMMU on, you can
remove iommu=pt with no loss of performance because, AFAIK, unlike the
kernel drivers, DPDK doesn't dma_map and dma_unmap each and every
ingress/egress packet (please correct me if I'm wrong), and so will not
suffer any performance penalty.
FYI, kernel NIC drivers:
when iommu=on{,strict}, the kernel network drivers suffer a heavy
performance penalty due to frequent IOVA modifications (both HW and SW
are at fault here). The ixgbe and Mellanox drivers reuse dma_map'ed pages on the
receive side to avoid this penalty, but still pay the IOMMU cost on TX.
On Fri, Sep 26, 2014 at 5:47 PM, Choi, Sy Jong <sy.jong.choi@intel.com> wrote:
> [quoted message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-26 9:13 [dpdk-dev] DPDK doesn't work with iommu=pt Hiroshi Shimamoto
2014-09-26 14:47 ` Choi, Sy Jong
@ 2014-09-28 7:48 ` Zhang, Jerry
2014-09-28 23:20 ` Hiroshi Shimamoto
1 sibling, 1 reply; 8+ messages in thread
From: Zhang, Jerry @ 2014-09-28 7:48 UTC
To: Hiroshi Shimamoto, dev; +Cc: Hayato Momma
I met a similar issue before.
Is VT-d enabled? If so, you may need to contact HP to upgrade the BIOS, or you can disable VT-d and remove iommu=pt intel_iommu=on if you don't need the VF function.
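(A quick way to check from the OS is to look for the ACPI DMAR table; a sketch, relying on the firmware exposing the table only when VT-d is enabled:)
# ls /sys/firmware/acpi/tables/DMAR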
>-----Original Message-----
>From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hiroshi Shimamoto
>Sent: Friday, September 26, 2014 5:14 PM
>To: dev@dpdk.org
>Cc: Hayato Momma
>Subject: [dpdk-dev] DPDK doesn't work with iommu=pt
>[quoted original message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-28 7:48 ` Zhang, Jerry
@ 2014-09-28 23:20 ` Hiroshi Shimamoto
0 siblings, 0 replies; 8+ messages in thread
From: Hiroshi Shimamoto @ 2014-09-28 23:20 UTC
To: Zhang, Jerry, dev; +Cc: Hayato Momma
Hi,
> Subject: RE: DPDK doesn't work with iommu=pt
>
> Met the similar issue before.
> VT-d enabled? If so you may need to contact HP to upgrade the BIOS or you may disable VT-d and remove iommu=pt intel_iommu=on
> if you don't need VF function.
We need VT-d, and it is enabled.
What we want to do is use the SR-IOV functionality and a DPDK application concurrently on the same box.
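(For the SR-IOV side we create the VFs roughly like this; a sketch, where the PCI address is a placeholder and, on 3.10-era kernels, the ixgbe max_vfs module parameter is the alternative to this sysfs knob:)
# echo 4 > /sys/bus/pci/devices/0000:XX:00.0/sriov_numvfs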
thanks,
Hiroshi
>
> >[quoted original message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-28 7:35 ` Alex Markuze
@ 2014-09-28 23:53 ` Hiroshi Shimamoto
2014-09-29 6:31 ` Alex Markuze
0 siblings, 1 reply; 8+ messages in thread
From: Hiroshi Shimamoto @ 2014-09-28 23:53 UTC
To: Alex Markuze, Choi, Sy Jong; +Cc: dev, Hayato Momma
Hi,
> Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt
>
> iommu=pt effectively disables iommu for the kernel and iommu is
> enabled only for KVM.
> http://lwn.net/Articles/329174/
Thanks for pointing that out.
Okay, I think DPDK cannot handle the IOMMU because there is no kernel code in
a DPDK application.
And now I think "iommu=pt" doesn't work correctly: DMA from the host PMD
causes a DMAR fault, which means the IOMMU catches the operation as invalid.
I will dig around "iommu=pt".
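(To confirm what the kernel actually booted with, I will check the command line and the DMAR/IOMMU messages; the exact strings vary by kernel version:)
# cat /proc/cmdline
# dmesg | grep -i -e dmar -e iommu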
>
> Basically unless you have KVM running you can remove both lines for
> the same effect.
> On the other hand if you do have KVM and you do want iommu=on You can
> remove the iommu=pt for the same performance because AFAIK unlike the
> kernel drivers DPDK doesn't dma_map and dma_unmap each and every
> ingress/egress packet (Please correct me if I'm wrong), and will not
> suffer any performance penalties.
I also tried "iommu=on", but it didn't fix the issue.
I saw the same error messages in the kernel log:
[ 46.978097] dmar: DRHD: handling fault status reg 2
[ 46.978120] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
DMAR:[fault reason 02] Present bit in context entry is clear
thanks,
Hiroshi
>
> [remainder of quoted message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-28 23:53 ` Hiroshi Shimamoto
@ 2014-09-29 6:31 ` Alex Markuze
2014-09-30 1:02 ` Hiroshi Shimamoto
0 siblings, 1 reply; 8+ messages in thread
From: Alex Markuze @ 2014-09-29 6:31 UTC
To: Hiroshi Shimamoto; +Cc: dev, Hayato Momma
On Mon, Sep 29, 2014 at 2:53 AM, Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com> wrote:
> Hi,
>
>> Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt
>>
>> iommu=pt effectively disables iommu for the kernel and iommu is
>> enabled only for KVM.
>> http://lwn.net/Articles/329174/
>
> thanks for pointing that.
>
> Okay, I think DPDK cannot handle IOMMU because of no kernel code in
> DPDK application.
>
> And now, I think "iommu=pt" doesn't work correctly DMA on host PMD
> causes DMAR fault which means IOMMU catches a wrong operation.
> Will dig around "iommu=pt".
>
I agree with your analysis. It seems that a fairly recent patch (3~4
months ago) introduced a bug that confuses an unprotected DMA access by the
device with an IOMMU access, and produces the equivalent of a page fault.
>>
>> Basically unless you have KVM running you can remove both lines for
>> the same effect.
>> On the other hand if you do have KVM and you do want iommu=on You can
>> remove the iommu=pt for the same performance because AFAIK unlike the
>> kernel drivers DPDK doesn't dma_map and dma_unman each and every
>> ingress/egress packet (Please correct me if I'm wrong), and will not
>> suffer any performance penalties.
>
> I also tried "iommu=on", but it didn't fix the issue.
> I saw the same error messages in kernel.
>
Just to clarify, what I suggested you try is leaving only "intel_iommu=on"
on the command line, without iommu=pt.
But this would work iff DPDK can handle IOVAs (I/O virtual addresses).
>
> [remainder of quoted message trimmed]
* Re: [dpdk-dev] DPDK doesn't work with iommu=pt
2014-09-29 6:31 ` Alex Markuze
@ 2014-09-30 1:02 ` Hiroshi Shimamoto
0 siblings, 0 replies; 8+ messages in thread
From: Hiroshi Shimamoto @ 2014-09-30 1:02 UTC
To: Alex Markuze; +Cc: dev, Hayato Momma
> Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt
>
> [earlier quoted context trimmed]
>
> Just to clarify, what I suggested you try is leaving only "intel_iommu=on"
> on the command line, without iommu=pt.
> But this would work iff DPDK can handle IOVAs (I/O virtual addresses).
Okay, I tried with "intel_iommu=on" only, but nothing changed.
By the way, after several tests and some investigation, I think the issue comes from
there being no DMAR entry for the HW pass-through mode.
So using VFIO, which keeps the IOMMU always on, seems to solve my issue.
After unbinding the devices from igb_uio and binding them to vfio-pci, running testpmd looks to be working.
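(Roughly the steps; a sketch, where 0000:21:00.0 is our device from the logs above, 8086 10fb should be the vendor/device ID of the 82599ES, and the dpdk_nic_bind.py script shipped with DPDK can do the same binding:)
# modprobe vfio-pci
# echo 0000:21:00.0 > /sys/bus/pci/drivers/igb_uio/unbind
# echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id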
thanks,
Hiroshi
>
> [remainder of quoted message trimmed]
End of thread.
Thread overview: 8+ messages:
2014-09-26 9:13 [dpdk-dev] DPDK doesn't work with iommu=pt Hiroshi Shimamoto
2014-09-26 14:47 ` Choi, Sy Jong
2014-09-28 7:35 ` Alex Markuze
2014-09-28 23:53 ` Hiroshi Shimamoto
2014-09-29 6:31 ` Alex Markuze
2014-09-30 1:02 ` Hiroshi Shimamoto
2014-09-28 7:48 ` Zhang, Jerry
2014-09-28 23:20 ` Hiroshi Shimamoto