From: Alex Markuze
To: "Choi, Sy Jong"
Cc: "dev@dpdk.org", Hayato Momma
Date: Sun, 28 Sep 2014 10:35:37 +0300
Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt

iommu=pt effectively disables the IOMMU for the kernel; the IOMMU is enabled only for KVM.
http://lwn.net/Articles/329174/
Basically, unless you have KVM running, you can remove both options for the same effect.

On the other hand, if you do have KVM and you do want iommu=on, you can remove iommu=pt with no loss of performance, because AFAIK, unlike the kernel drivers, DPDK doesn't dma_map and dma_unmap each and every ingress/egress packet (please correct me if I'm wrong), so it will not suffer any performance penalty.

FYI, kernel NIC drivers: when iommu=on{,strict}, the kernel network drivers suffer a heavy performance penalty due to constant IOVA modifications (both HW and SW are at fault here). ixgbe and Mellanox reuse dma-mapped pages on the receive side to avoid this penalty, but still pay the IOMMU cost on TX.
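To illustrate the per-packet mapping cost described above, here is a minimal, hypothetical sketch of a conventional kernel driver RX path that maps and unmaps a page for every packet; the names my_rx_buf, my_rx_refill() and my_rx_complete() are invented for the example, and only the Linux DMA API calls are real. DPDK, by contrast, maps its hugepage memory once at startup, so no such per-packet IOVA updates occur.

/* Hypothetical sketch (not taken from any real driver): per-packet
 * streaming DMA mapping on RX. Each refill/completion pair costs a
 * dma_map_page()/dma_unmap_page(), i.e. an IOVA/IOTLB update whenever
 * the IOMMU runs in non-passthrough mode. */
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

struct my_rx_buf {
        struct page *page;
        dma_addr_t dma;
};

/* Refill one RX descriptor: allocate and map a fresh page every time. */
static int my_rx_refill(struct device *dev, struct my_rx_buf *buf)
{
        buf->page = alloc_page(GFP_ATOMIC);
        if (!buf->page)
                return -ENOMEM;

        buf->dma = dma_map_page(dev, buf->page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, buf->dma)) {
                __free_page(buf->page);
                return -ENOMEM;
        }
        return 0;
}

/* Completion: unmap before handing the data to the stack. Repeating this
 * unmap for every packet is the IOVA churn mentioned above; drivers such
 * as ixgbe avoid it on RX by recycling the same mapped page and calling
 * dma_sync_single_range_for_cpu() per packet instead. */
static void my_rx_complete(struct device *dev, struct my_rx_buf *buf)
{
        dma_unmap_page(dev, buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
        /* ... build an skb around buf->page and pass it up the stack ... */
}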
On Fri, Sep 26, 2014 at 5:47 PM, Choi, Sy Jong wrote:
> Hi Shimamoto-san,
>
> There are a lot of sightings related to "DMAR:[fault reason 06] PTE Read access is not set":
> https://www.mail-archive.com/kvm@vger.kernel.org/msg106573.html
>
> This might be related to the IOMMU and kernel code.
>
> Here is what we know:
> 1) Disabling VT-d in the BIOS also removed the symptom.
> 2) Switching to another OS distribution also removed the symptom.
> 3) On different HW we also do not see the symptom; in my case, switching from an engineering board to an EPSD board.
>
> Regards,
> Choi, Sy Jong
> Platform Application Engineer
>
>
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hiroshi Shimamoto
> Sent: Friday, September 26, 2014 5:14 PM
> To: dev@dpdk.org
> Cc: Hayato Momma
> Subject: [dpdk-dev] DPDK doesn't work with iommu=pt
>
> I encountered an issue where DPDK doesn't work with "iommu=pt intel_iommu=on"
> on an HP ProLiant DL380p Gen8 server. I'm using the following environment:
>
> HW: ProLiant DL380p Gen8
> CPU: E5-2697 v2
> OS: RHEL7
> kernel: kernel-3.10.0-123 and the latest kernel 3.17-rc6+
> DPDK: v1.7.1-53-gce5abac
> NIC: 82599ES
>
> When booting with "iommu=pt intel_iommu=on", I get the message below and no packets are handled.
>
> [ 120.809611] dmar: DRHD: handling fault status reg 2
> [ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
> DMAR:[fault reason 02] Present bit in context entry is clear
>
> How to reproduce: just run testpmd
> # ./testpmd -c 0xf -n 4 -- -i
>
> Configuring Port 0 (socket 0)
> PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7ffff54eafc0 hw_ring=0x7ffff4200000 dma_addr=0xaa000000
> PMD: ixgbe_dev_tx_queue_setup(): Using full-featured tx code path
> PMD: ixgbe_dev_tx_queue_setup():  - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
> PMD: ixgbe_dev_tx_queue_setup():  - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
> PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are not satisfied, Scattered Rx is requested, or RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
> PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
>
> testpmd> start
>   io packet forwarding - CRC stripping disabled - packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=2
>   RX queues=1 - RX desc=128 - RX free threshold=0
>   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>   TX queues=1 - TX desc=512 - TX free threshold=0
>   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>   TX RS bit threshold=0 - TXQ flags=0x0
>
> and ping from another box to this server:
> # ping6 -I eth2 ff02::1
>
> I get the error message below and no packet is received.
> I couldn't see any increase in the RX/TX counts in the testpmd statistics.
>
> testpmd> show port stats 0
>
>   ######################## NIC statistics for port 0 ########################
>   RX-packets: 6    RX-missed: 0    RX-bytes: 732
>   RX-badcrc: 0     RX-badlen: 0    RX-errors: 0
>   RX-nombuf: 0
>   TX-packets: 0    TX-errors: 0    TX-bytes: 0
>   ############################################################################
> testpmd> show port stats 0
>
>   ######################## NIC statistics for port 0 ########################
>   RX-packets: 6    RX-missed: 0    RX-bytes: 732
>   RX-badcrc: 0     RX-badlen: 0    RX-errors: 0
>   RX-nombuf: 0
>   TX-packets: 0    TX-errors: 0    TX-bytes: 0
>   ############################################################################
>
> The fault addr in the error message must be the RX DMA descriptor ring:
>
> error message
> [ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
>
> log in testpmd
> PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
>
> I think the NIC received a packet into its FIFO and tried to put it into memory with DMA.
> Before starting the DMA, the NIC gets the target address from the RX descriptor ring whose base address is in the RDBA register.
> But the access to the RX descriptors failed in the IOMMU unit and was reported to the kernel:
>
> DMAR:[fault reason 02] Present bit in context entry is clear
>
> The error message suggests there is no valid entry for the device in the IOMMU.
>
> I think the following issue is very similar, but using Ubuntu 14.04 didn't fix it in my case:
> http://thread.gmane.org/gmane.comp.networking.dpdk.devel/2281
>
> I tried Ubuntu 14.04.1 and got the error below.
>
> [ 199.710191] dmar: DRHD: handling fault status reg 2
> [ 199.710896] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr 7c24df000
> [ 199.710896] DMAR:[fault reason 06] PTE Read access is not set
>
> Currently I see this issue only on the HP ProLiant DL380p Gen8.
> Is there any idea what could cause this?
> Has anyone else noticed this issue?
>
> Note: we're planning to use SR-IOV and a DPDK app in the same box.
> The box has 2 NICs: one with SR-IOV passed through to a VM, and one (no SR-IOV) for the DPDK app in the host.
>
> thanks,
> Hiroshi
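For context on why the fault address matches the queue setup log above: DPDK resolves the physical address of its hugepage-backed descriptor rings via /proc/self/pagemap (rte_mem_virt2phy()) and programs that address straight into the NIC, so if the IOMMU is enforcing translations for the device and holds no mapping, the NIC's very first descriptor fetch faults at exactly that address (here 0xaa010000). Below is a rough, standalone sketch of that kind of lookup, not DPDK's actual code; the helper name virt2phys is invented, and reading PFNs from pagemap generally requires root.

/* Rough sketch of a pagemap-based virtual-to-physical lookup, similar in
 * spirit to DPDK's rte_mem_virt2phy(). The physical address obtained this
 * way is what ends up in the NIC's descriptor/ring registers; an enforcing
 * IOMMU with no mapping for the device then faults on the first access. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

static uint64_t virt2phys(const void *virt)
{
        long pagesize = sysconf(_SC_PAGESIZE);
        uint64_t entry;
        int fd = open("/proc/self/pagemap", O_RDONLY);

        if (fd < 0)
                return 0;

        /* One 64-bit entry per virtual page, indexed by page number. */
        off_t offset = ((uintptr_t)virt / pagesize) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != (ssize_t)sizeof(entry)) {
                close(fd);
                return 0;
        }
        close(fd);

        if (!(entry & (1ULL << 63)))            /* bit 63: page present */
                return 0;

        /* Bits 0-54 hold the physical frame number. */
        return ((entry & ((1ULL << 55) - 1)) * pagesize)
               + ((uintptr_t)virt % pagesize);
}

int main(void)
{
        int x = 0;

        printf("virt %p -> phys 0x%llx\n", (void *)&x,
               (unsigned long long)virt2phys(&x));
        return 0;
}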