DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: Alejandro Lucero <alejandro.lucero@netronome.com>
Cc: dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] igb_uio: map dummy dma forcing iommu domain attachment
Date: Mon, 13 Feb 2017 15:54:55 +0000	[thread overview]
Message-ID: <eeaacd7f-47a4-de7b-3df9-47ed61ee466a@intel.com> (raw)
In-Reply-To: <CAD+H993F-RhUi1oCuOwEYmej75MD_ZSBWpq3aV_eDmY_Y+6gEw@mail.gmail.com>

On 2/13/2017 1:38 PM, Alejandro Lucero wrote:
> 
> 
> On Fri, Feb 10, 2017 at 7:06 PM, Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
>     On 2/10/2017 7:03 PM, Ferruh Yigit wrote:
>     > On 2/8/2017 11:54 AM, Alejandro Lucero wrote:
>     >> Hi Ferruh,
>     >>
>     >> On Tue, Feb 7, 2017 at 3:59 PM, Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>     >>
>     >>     Hi Alejandro,
>     >>
>     >>     On 1/18/2017 12:27 PM, Alejandro Lucero wrote:
>     >>     > Using a DPDK app when the IOMMU is enabled requires adding
>     >>     > iommu=pt to the kernel command line. But using the igb_uio driver
>     >>     > triggers DMAR errors because the device does not have an IOMMU domain.
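
For context, the idea is to make a harmless coherent DMA allocation in the
igb_uio probe path so the kernel attaches the device to an IOMMU domain (the
1:1 pass-through domain when booting with iommu=pt), then release it right
away. A minimal sketch of that idea follows; the helper name is hypothetical
and this is not the exact patch:

  #include <linux/pci.h>
  #include <linux/dma-mapping.h>

  /* Hypothetical helper, called from the igb_uio probe path. */
  static void igbuio_force_iommu_domain(struct pci_dev *pdev)
  {
      dma_addr_t dma_addr;
      void *vaddr;

      /* The allocation itself is what forces the domain attachment. */
      vaddr = dma_alloc_coherent(&pdev->dev, 1024, &dma_addr, GFP_KERNEL);
      if (!vaddr) {
          dev_info(&pdev->dev, "dummy dma mapping failed\n");
          return;
      }

      /* Nothing uses the buffer; unmap and free it immediately. */
      dma_free_coherent(&pdev->dev, 1024, vaddr, dma_addr);
  }
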
>     >>
>     >>     Please help me understand the scope of the problem:
>     >>
>     >>
>     >> After reading your reply, I realize I could have explained it better.
>     >> First of all, this is related to SRIOV, specifically to when the VFs are created.
>     >>
>     >>
>     >>     1- How can you reproduce the problem?
>     >>
>     >>
>     >> Using a VF from an Intel card with a DPDK app in the host, and a kernel >=
>     >> 3.15. Although VFs are usually assigned to VMs, using them from the host
>     >> is also an option.
>     >>
>     >> BTW, I did not try to reproduce the problem with an Intel card. I
>     >> triggered it with an NFP, but given the underlying problem, I expect
>     >> it happens with an Intel card as well.
>     >
>     > I was able to reproduce the problem with ixgbe, by using a VF on the host.
>     >
>     > And I verified your patch fixes it; it causes the device to be attached to
>     > a vfio group.
> 
>     I wanted to send this in a separate mail, since it is not directly related
>     to your patch, but while testing with vfio-pci I am getting lower numbers
>     compared to igb_uio, which is unexpected AFAIK.
>
>     Most probably I am doing something wrong, but I would like to ask if you
>     are observing the same behavior?
> 
> 
> Can you tell me which test you are running?
> 
> Although both igb_uio and VFIO allow working with the IOMMU, the first one
> requires iommu=pt. That implies a single IOMMU domain already created by
> the system, with the 1:1 mapping being used. With VFIO, a specific per-device
> IOMMU domain is created. Depending on how you are measuring performance,
> that per-device IOMMU domain creation by the DPDK app could have an impact,
> but I do not think it should be really significant. But with the IOMMU you
> have the same problem as with the MMU: there is an IOTLB for the IOMMU just
> as there is a TLB for the MMU. Depending on the app, some IOMMU/IOTLB
> contention is likely. I have done some experiments and am still
> investigating this in my spare time. It would be worth a talk about this at
> the next DPDK meeting.
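
The per-device domain VFIO uses (per IOMMU group, strictly speaking) is
populated explicitly from userspace through the type1 IOMMU ioctls: every
buffer the device will DMA to or from has to be mapped into it. A rough,
hypothetical sketch of that flow, just to show where the extra IOMMU/IOTLB
work comes from (group number, BDF and buffer size are placeholders, error
handling omitted):

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/vfio.h>

  int vfio_map_example(void)
  {
      int container = open("/dev/vfio/vfio", O_RDWR);
      int group = open("/dev/vfio/26", O_RDWR);  /* group number is an example */

      /* Attach the group to the container and select the type1 IOMMU backend. */
      ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
      ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

      /* Every DMA buffer must be mapped explicitly; this is what fills the
       * per-group IOMMU domain (and later exercises the IOTLB). */
      void *buf = mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      struct vfio_iommu_type1_dma_map dma_map = {
          .argsz = sizeof(dma_map),
          .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
          .vaddr = (uintptr_t)buf,
          .iova  = 0,
          .size  = 1 << 20,
      };
      ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);

      /* Device fd for programming the device itself (BDF is an example). */
      return ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:10.0");
  }
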

After spending a few hours on it, I am still not sure about the source of the
problem, and assume it is specific to my platform, otherwise we would have
heard about it before.

The performance drop is too big to be caused just by the IOTLB.

With igb_uio, I am getting 10G line rate for 64-byte packets, ~14Mpps. When I
switch to vfio-pci, it is ~1.5Mpps.
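
For reference, 64-byte line rate on 10G, counting the 20 bytes of preamble
and inter-frame gap per frame, is 10^10 / ((64 + 20) * 8) ~= 14.88 Mpps, so
~14Mpps is essentially line rate and ~1.5Mpps is roughly a 10x drop.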

I am testing with "iommu=pt intel_iommu=on" kernel params.

Tried with kernel versions:
* 4.9.8
* 4.4.14
* 4.0.4

Tested with DPDK versions:
* 17.02-rc3
* 16.11

Also tested vfio-pci with the kernel options "iommu=on intel_iommu=on" on the
4.0.4 kernel.

All of the above give low performance with vfio-pci, which does not make sense
to me; most probably I am making a stupid mistake ...

Thanks,
ferruh

>  
> 
> 
>     Thanks,
>     ferruh
> 
> 


Thread overview: 10+ messages
2017-01-18 12:27 Alejandro Lucero
2017-02-07 15:59 ` Ferruh Yigit
2017-02-08 11:54   ` Alejandro Lucero
2017-02-10 19:03     ` Ferruh Yigit
2017-02-10 19:06       ` Ferruh Yigit
2017-02-13 13:38         ` Alejandro Lucero
2017-02-13 15:54           ` Ferruh Yigit [this message]
2017-02-13 13:31       ` Alejandro Lucero
2017-02-17 12:29 ` Ferruh Yigit
2017-03-30 20:20   ` Thomas Monjalon
