From: Ferruh Yigit
To: Alejandro Lucero
Cc: dev
Date: Mon, 13 Feb 2017 15:54:55 +0000
Subject: Re: [dpdk-dev] [PATCH] igb_uio: map dummy dma forcing iommu domain attachment

On 2/13/2017 1:38 PM, Alejandro Lucero wrote:
> On Fri, Feb 10, 2017 at 7:06 PM, Ferruh Yigit wrote:
>
>     On 2/10/2017 7:03 PM, Ferruh Yigit wrote:
>     > On 2/8/2017 11:54 AM, Alejandro Lucero wrote:
>     >> Hi Ferruh,
>     >>
>     >> On Tue, Feb 7, 2017 at 3:59 PM, Ferruh Yigit wrote:
>     >>
>     >>     Hi Alejandro,
>     >>
>     >>     On 1/18/2017 12:27 PM, Alejandro Lucero wrote:
>     >>     > Using a DPDK app when the IOMMU is enabled requires adding
>     >>     > iommu=pt to the kernel command line. But using the igb_uio
>     >>     > driver causes DMAR errors because the device does not have
>     >>     > an IOMMU domain.
>     >>
>     >>     Please help to understand the scope of the problem.
>     >>
>     >> After reading your reply, I realize I could have explained it better.
>     >> First of all, this is related to SR-IOV, specifically when the VFs
>     >> are created.
>     >>
>     >>     1- How can you reproduce the problem?
>     >>
>     >> Use a VF from an Intel card with a DPDK app in the host and a kernel
>     >> 3.15 or newer. Although VFs are usually assigned to VMs, using VFs
>     >> in the host is also an option.
>     >>
>     >> BTW, I did not try to reproduce the problem with an Intel card. I
>     >> triggered this problem with an NFP, but given the underlying problem,
>     >> I bet it is going to happen with an Intel one as well.
>     >
>     > I was able to reproduce the problem with ixgbe, by using a VF on the
>     > host.
>     >
>     > And I verified your patch fixes it; it causes the device to be
>     > attached to a vfio group.
>
>     I want to send this in a separate mail, since it is not directly
>     related to your patch, but while testing with vfio-pci I get lower
>     numbers compared to igb_uio, which is unexpected AFAIK.
>
>     Most probably I am doing something wrong, but I would like to ask if
>     you are observing the same behavior?
>
> Can you tell me which test you are running?
>
> Although both igb_uio and vfio allow working with the IOMMU, the first one
> requires iommu=pt. That implies a single IOMMU domain, already created by
> the system, with the 1:1 mapping being used. With VFIO, a specific
> per-device IOMMU domain is created.
> Depending on how you are measuring performance, that specific IOMMU
> domain creation by the DPDK app could have an impact, but I do not think
> it should be really significant. But with the IOMMU you have the same
> problem as with the MMU: there is an IOTLB for the IOMMU just as there is
> a TLB for the MMU. Depending on the app, some IOMMU/IOTLB contention is
> likely. I have done some experiments and am still investigating this in
> my spare time. It would be worth a talk about this at the next DPDK
> meeting.

After spending a few hours on it, I am still not sure about the source of
the problem, and assume it is specific to my platform, otherwise we would
have heard about it before.

The performance drop is too big to be explained by the IOTLB.

With igb_uio, I am getting 10G line rate for 64-byte packets, ~14Mpps.
When I switch to vfio-pci, it is ~1.5Mpps.

I am testing with the "iommu=pt intel_iommu=on" kernel params.

Tried with kernel versions:
* 4.9.8
* 4.4.14
* 4.0.4

Tested with DPDK versions:
* 17.02-rc3
* 16.11

Also tested vfio-pci with the kernel options "iommu=on intel_iommu=on" on
the 4.0.4 kernel.

All of the above give low performance with vfio-pci, which does not make
sense to me; most probably I am making a stupid mistake ...

Thanks,
ferruh
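
For context, the workaround named in the patch subject, mapping a dummy DMA
buffer from igb_uio so that the IOMMU driver attaches the device to a domain
before any real DMA happens, can be sketched roughly as below. This is a
minimal sketch based only on the description in this thread, not the actual
patch code; the function name force_iommu_domain_attach, the 1024-byte size
and the probe-time placement are assumptions.

/*
 * Minimal sketch (not the actual patch): do a throwaway coherent DMA
 * allocation at probe time. The buffer itself is unused; the point is
 * the side effect that the IOMMU driver attaches the device to a domain,
 * so later DMA with iommu=pt does not hit DMAR faults.
 */
#include <linux/pci.h>
#include <linux/dma-mapping.h>

static int force_iommu_domain_attach(struct pci_dev *pdev)
{
	dma_addr_t dma_handle;
	void *vaddr;

	/* Size is arbitrary; one small coherent buffer is enough. */
	vaddr = dma_alloc_coherent(&pdev->dev, 1024, &dma_handle, GFP_KERNEL);
	if (!vaddr)
		return -ENOMEM;

	/* Only the mapping side effect was needed; release it right away. */
	dma_free_coherent(&pdev->dev, 1024, vaddr, dma_handle);

	return 0;
}

The allocation is freed immediately because only the side effect matters:
once the device has been attached to a domain, userspace DMA through the
passthrough mapping no longer faults.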
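On the igb_uio versus vfio-pci point above: with vfio-pci the application
itself sets up the container's type1 IOMMU domain and programs every IOVA
mapping explicitly, which is the per-device domain Alejandro mentions. A
user-space sketch of that sequence, under stated assumptions, follows; the
group number "/dev/vfio/42", the IOVA and the 2 MB size are placeholders,
and error handling is mostly omitted.

/*
 * Illustrative only: set up a VFIO type1 container and map one buffer.
 * "/dev/vfio/42", the IOVA and the size are placeholders.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main(void)
{
	struct vfio_group_status status = { .argsz = sizeof(status) };
	struct vfio_iommu_type1_dma_map map = { .argsz = sizeof(map) };
	int container, group;
	void *buf;

	container = open("/dev/vfio/vfio", O_RDWR);
	group = open("/dev/vfio/42", O_RDWR);   /* placeholder IOMMU group */
	if (container < 0 || group < 0)
		return 1;

	/* The group is usable only if all its devices are bound to vfio-pci. */
	ioctl(group, VFIO_GROUP_GET_STATUS, &status);
	if (!(status.flags & VFIO_GROUP_FLAGS_VIABLE))
		return 1;

	/*
	 * Attach the group to the container and select the type1 IOMMU
	 * backend: this is where the dedicated IOMMU domain is created.
	 */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* Map 2 MB of anonymous memory at IOVA 0 (placeholder values). */
	buf = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return 1;

	map.vaddr = (unsigned long)buf;
	map.iova  = 0;
	map.size  = 2 * 1024 * 1024;
	map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
	ioctl(container, VFIO_IOMMU_MAP_DMA, &map);

	return 0;
}

DPDK's EAL performs an equivalent sequence internally when a device is bound
to vfio-pci, so any per-mapping or IOTLB behavior would show up in that
per-container domain rather than in the passthrough (iommu=pt) domain used
with igb_uio.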