From: Alex Markuze
To: Hiroshi Shimamoto
Cc: "dev@dpdk.org", Hayato Momma
Date: Mon, 29 Sep 2014 09:31:03 +0300
Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt

On Mon, Sep 29, 2014 at 2:53 AM, Hiroshi Shimamoto wrote:
> Hi,
>
>> Subject: Re: [dpdk-dev] DPDK doesn't work with iommu=pt
>>
>> iommu=pt effectively disables the IOMMU for the kernel; the IOMMU is
>> enabled only for KVM.
>> http://lwn.net/Articles/329174/
>
> thanks for pointing that out.
>
> Okay, I think DPDK cannot handle the IOMMU because there is no kernel
> code in a DPDK application.
>
> And now, I think "iommu=pt" doesn't work correctly. DMA on the host PMD
> causes a DMAR fault, which means the IOMMU catches a wrong operation.
> Will dig around "iommu=pt".
>

I agree with your analysis. It seems that a fairly recent patch (3-4
months old) has introduced a bug that confuses unprotected DMA access by
the device with an IOMMU access and produces the equivalent of a page
fault.

>>
>> Basically, unless you have KVM running you can remove both lines for
>> the same effect.
>> On the other hand, if you do have KVM and you do want iommu=on, you can
>> remove iommu=pt for the same performance, because AFAIK, unlike the
>> kernel drivers, DPDK doesn't dma_map and dma_unmap each and every
>> ingress/egress packet (please correct me if I'm wrong), and so will not
>> suffer any performance penalties.
>
> I also tried "iommu=on", but it didn't fix the issue.
> I saw the same error messages in the kernel.
>

Just to clarify, what I suggested you try is leaving only
"intel_iommu=on" in the kernel command line, without iommu=pt. But this
would work only if DPDK can handle IOVAs (I/O virtual addresses).
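
By per-packet mapping I mean roughly the pattern below. This is only an
illustrative sketch (the function names are made up, it is not actual
ixgbe code): every received buffer gets its own dma_map_single()/
dma_unmap_single() round trip, and with the IOMMU enabled each of those
calls ends up updating the IOMMU page tables (and, in strict mode,
flushing the IOTLB on unmap). DPDK maps its hugepage memory once at
startup and reuses the same addresses for every packet, which is why it
shouldn't pay that cost.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

/* Illustrative only: one IOVA allocation and IOMMU page-table update
 * per posted RX buffer. */
static int rx_refill_one(struct device *dev, struct sk_buff *skb,
                         size_t len, dma_addr_t *dma)
{
        *dma = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, *dma))
                return -ENOMEM;
        return 0;
}

/* ...and one unmap (plus IOTLB invalidation in strict mode) when the
 * packet is handed up the stack. */
static void rx_complete_one(struct device *dev, dma_addr_t dma, size_t len)
{
        dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
}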
> [ 46.978097] dmar: DRHD: handling fault status reg 2
> [ 46.978120] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
> DMAR:[fault reason 02] Present bit in context entry is clear
>
> thanks,
> Hiroshi
>
>>
>> FYI. Kernel NIC drivers:
>> When iommu=on{,strict} the kernel network drivers will suffer a heavy
>> performance penalty due to regular IOVA modifications (both HW and SW
>> are at fault here). ixgbe and Mellanox reuse dma_mapped pages on the
>> receive side to avoid this penalty, but still suffer from the IOMMU on TX.
>>
>> On Fri, Sep 26, 2014 at 5:47 PM, Choi, Sy Jong wrote:
>> > Hi Shimamoto-san,
>> >
>> > There are a lot of sightings related to "DMAR:[fault reason 06] PTE Read access is not set"
>> > https://www.mail-archive.com/kvm@vger.kernel.org/msg106573.html
>> >
>> > This might be related to the IOMMU and kernel code.
>> >
>> > Here is what we know:
>> > 1) Disabling VT-d in the BIOS also removed the symptom
>> > 2) Switching to another OS distribution also removed the symptom
>> > 3) Even on different HW we do not see the symptom. In my case, switching from an Engineering board to an EPSD board.
>> >
>> > Regards,
>> > Choi, Sy Jong
>> > Platform Application Engineer
>> >
>> >
>> > -----Original Message-----
>> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hiroshi Shimamoto
>> > Sent: Friday, September 26, 2014 5:14 PM
>> > To: dev@dpdk.org
>> > Cc: Hayato Momma
>> > Subject: [dpdk-dev] DPDK doesn't work with iommu=pt
>> >
>> > I encountered an issue that DPDK doesn't work with "iommu=pt intel_iommu=on"
>> > on an HP ProLiant DL380p Gen8 server. I'm using the following environment:
>> >
>> > HW: ProLiant DL380p Gen8
>> > CPU: E5-2697 v2
>> > OS: RHEL7
>> > kernel: kernel-3.10.0-123 and the latest kernel 3.17-rc6+
>> > DPDK: v1.7.1-53-gce5abac
>> > NIC: 82599ES
>> >
>> > When booting with "iommu=pt intel_iommu=on", I got the below message and no packets are handled.
>> >
>> > [ 120.809611] dmar: DRHD: handling fault status reg 2
>> > [ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
>> > DMAR:[fault reason 02] Present bit in context entry is clear
>> >
>> > How to reproduce: just run testpmd
>> > # ./testpmd -c 0xf -n 4 -- -i
>> >
>> > Configuring Port 0 (socket 0)
>> > PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7ffff54eafc0 hw_ring=0x7ffff4200000 dma_addr=0xaa000000
>> > PMD: ixgbe_dev_tx_queue_setup(): Using full-featured tx code path
>> > PMD: ixgbe_dev_tx_queue_setup(): - txq_flags = 0 [IXGBE_SIMPLE_FLAGS=f01]
>> > PMD: ixgbe_dev_tx_queue_setup(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
>> > PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
>> > PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
>> > PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are not satisfied, Scattered Rx is requested, or RTE_LIBRTE_IXGBE_RX_ALLOW_BULK_ALLOC is not enabled (port=0, queue=0).
>> > PMD: check_rx_burst_bulk_alloc_preconditions(): Rx Burst Bulk Alloc Preconditions: rxq->rx_free_thresh=0, RTE_PMD_IXGBE_RX_MAX_BURST=32
>> >
>> > testpmd> start
>> >   io packet forwarding - CRC stripping disabled - packets/burst=32
>> >   nb forwarding cores=1 - nb forwarding ports=2
>> >   RX queues=1 - RX desc=128 - RX free threshold=0
>> >   RX threshold registers: pthresh=8 hthresh=8 wthresh=0
>> >   TX queues=1 - TX desc=512 - TX free threshold=0
>> >   TX threshold registers: pthresh=32 hthresh=0 wthresh=0
>> >   TX RS bit threshold=0 - TXQ flags=0x0
>> >
>> > and ping from another box to this server.
>> > # ping6 -I eth2 ff02::1
>> >
>> > I got the below error message and no packet is received.
>> > I couldn't see any increase in the RX/TX counts in the testpmd statistics.
>> >
>> > testpmd> show port stats 0
>> >
>> >   ######################## NIC statistics for port 0  ########################
>> >   RX-packets: 6    RX-missed: 0    RX-bytes: 732
>> >   RX-badcrc:  0    RX-badlen: 0    RX-errors: 0
>> >   RX-nombuf:  0
>> >   TX-packets: 0    TX-errors: 0    TX-bytes: 0
>> >   ############################################################################
>> > testpmd> show port stats 0
>> >
>> >   ######################## NIC statistics for port 0  ########################
>> >   RX-packets: 6    RX-missed: 0    RX-bytes: 732
>> >   RX-badcrc:  0    RX-badlen: 0    RX-errors: 0
>> >   RX-nombuf:  0
>> >   TX-packets: 0    TX-errors: 0    TX-bytes: 0
>> >   ############################################################################
>> >
>> >
>> > The fault addr in the error message must be the RX DMA descriptor ring.
>> >
>> > error message
>> > [ 120.809635] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr aa010000
>> >
>> > log in testpmd
>> > PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7ffff54ea740 hw_ring=0x7ffff4210000 dma_addr=0xaa010000
>> >
>> > I think the NIC received a packet into its FIFO and tried to put it into memory with DMA.
>> > Before starting DMA, the NIC gets the target address from the RX descriptors pointed to by the RDBA register.
>> > But accessing the RX descriptors failed in the IOMMU unit, which reported it to the kernel.
>> >
>> > DMAR:[fault reason 02] Present bit in context entry is clear
>> >
>> > The error message suggests there is no valid entry in the IOMMU for this device.
>> >
>> > I think the following issue is very similar, but using Ubuntu 14.04 didn't fix it in my case.
>> > http://thread.gmane.org/gmane.comp.networking.dpdk.devel/2281
>> >
>> > I tried Ubuntu 14.04.1 and got the below error.
>> >
>> > [ 199.710191] dmar: DRHD: handling fault status reg 2
>> > [ 199.710896] dmar: DMAR:[DMA Read] Request device [21:00.0] fault addr 7c24df000
>> > [ 199.710896] DMAR:[fault reason 06] PTE Read access is not set
>> >
>> > Currently I see this issue only on the HP ProLiant DL380p Gen8.
>> > Does anyone have any idea?
>> > Has anyone noticed this issue?
>> >
>> > Note: we're thinking of using SR-IOV and a DPDK app in the same box.
>> > The box has 2 NICs: one for SR-IOV, passed through to a VM, and one (no SR-IOV) for the DPDK app in the host.
>> >
>> > thanks,
>> > Hiroshi
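
For reference, a minimal sketch (assumed names, not the actual ixgbe PMD
code) of where that dma_addr comes from: the PMD reserves the descriptor
ring from hugepage memory and programs the memzone's physical address
into the NIC, so the descriptor fetch that faults is a DMA read against
exactly that address, and the IOMMU must either pass it through or have
a valid context entry for the device.

#include <stdio.h>
#include <inttypes.h>
#include <rte_memzone.h>

/* Sketch only: assumes rte_eal_init() has already set up hugepage memory. */
static const struct rte_memzone *
alloc_rx_ring(unsigned port_id, unsigned queue_id, size_t ring_size,
              int socket_id)
{
        char name[RTE_MEMZONE_NAMESIZE];
        const struct rte_memzone *mz;

        snprintf(name, sizeof(name), "rx_ring_p%u_q%u", port_id, queue_id);

        /* The ring lives in hugepage memory; mz->phys_addr is the
         * host-physical address handed to the hardware (the dma_addr
         * printed by ixgbe_dev_rx_queue_setup()). Every descriptor read
         * by the NIC is a DMA transaction against this address. */
        mz = rte_memzone_reserve(name, ring_size, socket_id, 0);
        if (mz == NULL)
                return NULL;

        printf("hw_ring=%p dma_addr=0x%" PRIx64 "\n", mz->addr, mz->phys_addr);
        return mz;
}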