DPDK patches and discussions
From: Hiroshi Shimamoto <h-shimamoto@ct.jp.nec.com>
To: "dev@dpdk.org" <dev@dpdk.org>
Cc: Hayato Momma <h-momma@ce.jp.nec.com>
Subject: [dpdk-dev] DPDK doesn't work with iommu=pt
Date: Fri, 26 Sep 2014 09:13:51 +0000
Message-ID: <7F861DC0615E0C47A872E6F3C5FCDDBD02ADDCDD@BPXM14GP.gisp.nec.co.jp>

I encountered an issue where DPDK doesn't work with "iommu=pt intel_iommu=on"
on an HP ProLiant DL380p Gen8 server. I'm using the following environment:

  HW: ProLiant DL380p Gen8
  CPU: E5-2697 v2
  OS: RHEL7 
  kernel: kernel-3.10.0-123 and the latest kernel 3.17-rc6+
  DPDK: v1.7.1-53-gce5abac
  NIC: 82599ES

When booting with "iommu=pt intel_iommu=on", I get the below message and
no packets are handled.

  [  120.809611] dmar: DRHD: handling fault status reg 2
  [  120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
  DMAR:[fault reason 02] Present bit in context entry is clear
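
(For reference, whether the passthrough options actually took effect can be
double-checked on the host with something like the commands below; 0000:21:00.0
is the 82599ES from the fault message, and the exact dmesg wording depends on
the kernel version.)

  # cat /proc/cmdline
  # dmesg | grep -i -e DMAR -e IOMMU
  # readlink /sys/bus/pci/devices/0000:21:00.0/iommu_group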

How to reproduce: just run testpmd
# ./testpmd -c 0xf -n 4 -- -i

Configuring Port 0 (socket 0)
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7ffff54eafc0 hw_ring=0x7ffff4200000 dma_addr=0xaa000000
PMD: ixgbe_dev_tx_queue_setup(): Using full-featured tx code path
PMD: ixgbe_dev_tx_queue_setup():  - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
PMD: ixgbe_dev_tx_queue_setup():  - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are not satisfied, Scattered Rx is requested, or RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32

testpmd> start
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0


Then ping from another box to this server:
# ping6 -I eth2 ff02::1

I got the DMAR error message shown above, and no packets were received.
I couldn't see any increase in the RX/TX counters in the testpmd statistics.

testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 6          RX-missed: 0          RX-bytes:  732
  RX-badcrc:  0          RX-badlen: 0          RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0
  ############################################################################
testpmd> show port stats 0

  ######################## NIC statistics for port 0  ########################
  RX-packets: 6          RX-missed: 0          RX-bytes:  732
  RX-badcrc:  0          RX-badlen: 0          RX-errors: 0
  RX-nombuf:  0
  TX-packets: 0          TX-errors: 0          TX-bytes:  0
  ############################################################################


The fault addr in the error message must be the RX DMA descriptor ring address.

Error message:
  [  120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000

Log from testpmd:
  PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000

I think the NIC received a packet into its FIFO and tried to write it to memory via DMA.
Before starting the DMA, the NIC fetches the target address from the RX descriptors
located at the address programmed into the RDBA register.
But the access to the RX descriptors failed in the IOMMU, which reported the fault to the kernel.

  DMAR:[fault reason 02] Present bit in context entry is clear

The error message suggests there is no valid entry for the device in the IOMMU.
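
A quick way to see whether the kernel set up any mapping for this device at all
is to grep the per-device IOMMU messages, roughly like this (the exact wording
differs between kernel versions, so treat the greps as an approximation):

  # dmesg | grep -i iommu | grep 21:00.0
  # dmesg | grep -i "identity mapping"

With iommu=pt, intel-iommu is supposed to put devices into a hardware identity
(pass-through) domain; if 21:00.0 never shows up there, an empty context entry
would match the fault reason above.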

I think the following issue is very similar, but using Ubuntu 14.04 didn't fix it in my case.
http://thread.gmane.org/gmane.comp.networking.dpdk.devel/2281

I tried Ubuntu 14.04.1 and got the below error.

  [  199.710191] dmar: DRHD: handling fault status reg 2
  [  199.710896] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr 7c24df000
  [  199.710896] DMAR:[fault reason 06] PTE Read access is not set

So far I have seen this issue only on the HP ProLiant DL380p Gen8.
Any ideas?
Has anyone else noticed this issue?

Note: we're planning to use SR-IOV and a DPDK app in the same box.
The box has 2 NICs: one for SR-IOV, passed through to a VM, and one (no SR-IOV)
for the DPDK app in the host; a rough sketch of the intended split is below.
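
(Just to illustrate the intended split, a rough sketch; "eth0" is a hypothetical
name for the SR-IOV NIC, 0000:21:00.0 is the 82599ES used by testpmd above, and
depending on the kernel the VF count is set via the sriov_numvfs sysfs file or
the ixgbe max_vfs module parameter.)

  # echo 2 > /sys/class/net/eth0/device/sriov_numvfs
  # ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:21:00.0
  # ./tools/dpdk_nic_bind.py --status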

thanks,
Hiroshi

Thread overview: 8+ messages
2014-09-26  9:13 Hiroshi Shimamoto [this message]
2014-09-26 14:47 ` Choi, Sy Jong
2014-09-28  7:35   ` Alex Markuze
2014-09-28 23:53     ` Hiroshi Shimamoto
2014-09-29  6:31       ` Alex Markuze
2014-09-30  1:02         ` Hiroshi Shimamoto
2014-09-28  7:48 ` Zhang, Jerry
2014-09-28 23:20   ` Hiroshi Shimamoto
