DPDK patches and discussions
From: "Burakov, Anatoly" <anatoly.burakov@intel.com>
To: Alejandro Lucero <alejandro.lucero@netronome.com>
Cc: dev <dev@dpdk.org>, stable@dpdk.org
Subject: Re: [dpdk-dev] [RFC] Add support for device dma mask
Date: Thu, 28 Jun 2018 09:54:11 +0100
Message-ID: <35c86511-7bf7-4840-d7ba-8362ddefc8ec@intel.com>
In-Reply-To: <CAD+H993ipV0XjRjR3NzhnaX8gE8mTs4i9xu0A4ri6q_Y5-Ta5A@mail.gmail.com>

On 27-Jun-18 5:52 PM, Alejandro Lucero wrote:
> 
> 
> On Wed, Jun 27, 2018 at 2:24 PM, Burakov, Anatoly
> <anatoly.burakov@intel.com> wrote:
> 
>     On 27-Jun-18 11:13 AM, Alejandro Lucero wrote:
> 
> 
> 
>         On Wed, Jun 27, 2018 at 9:17 AM, Burakov, Anatoly
>         <anatoly.burakov@intel.com> wrote:
> 
>              On 26-Jun-18 6:37 PM, Alejandro Lucero wrote:
> 
>                  This RFC tries to handle devices with addressing
>                  limitations. NFP 4000/6000 devices can only handle
>                  addresses with 40 bits, which is a problem for
>                  handling physical addresses when machines have more
>                  than 1TB of memory. But because of how iovas are
>                  configured, which can be equivalent to physical
>                  addresses or based on virtual addresses, this
>                  problem can become more likely.
> 
>                  I tried to solve this some time ago:
> 
>         https://www.mail-archive.com/dev@dpdk.org/msg45214.html
> 
>                  It was delayed because there were some changes in
>                  progress with EAL device handling, and, being honest,
>                  I completely forgot about this until now, when I have
>                  had to work on supporting NFP devices with DPDK and
>                  non-root users.
> 
>                  I was working on a patch to be applied to the main
>                  DPDK branch upstream, but because of changes to
>                  memory initialization during the last months, it
>                  cannot be backported to stable versions, at least
>                  not the part where the hugepage iovas are checked.
> 
>                  I realize stable versions only allow bug fixes, and
>                  this patchset could arguably not be considered one.
>                  But without it, DPDK could, although unlikely, be
>                  used on a machine with more than 1TB of memory, with
>                  the NFP then using the wrong DMA host addresses.
> 
>                  Although virtual addresses used as iovas are more
>                  dangerous, for DPDK versions before 18.05 this is no
>                  worse than with physical addresses, because when
>                  physical addresses are not available, iovas are based
>                  on a starting address set to 0x0.
> 
> 
>              You might want to look at the following patch:
> 
>         http://patches.dpdk.org/patch/37149/
> 
>              Since this patch, IOVA as VA mode uses VA addresses, and
>              that has been backported to earlier releases. I don't
>              think there's any case where we use zero-based addresses
>              any more.
> 
> 
>         But memsegs get the iova based on the hugepage physaddr, and
>         for VA mode that is based on 0x0 as the starting point.
>
>         And as far as I know, memseg iovas are what end up being used
>         for IOMMU mappings and what devices will use.
> 
> 
>     When physaddrs are available, IOVA as PA mode assigns IOVA
>     addresses to PA, while IOVA as VA mode assigns IOVA addresses to
>     VA (both 18.05+ and pre-18.05, as per the above patch, which was
>     applied to pre-18.05 stable releases).
>
>     When physaddrs aren't available, IOVA as VA mode assigns IOVA
>     addresses to VA, both 18.05+ and pre-18.05, as per the above
>     patch.
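As a rough illustration of this assignment (a sketch, not actual EAL
code; rte_memzone_reserve() and rte_eal_iova_mode() are real DPDK API,
but the probe function itself is made up for this example), an
application can observe which iova its memory was given:

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_eal.h>
    #include <rte_memzone.h>

    /* Reserve a memzone and print the address a device would use for
     * DMA: in IOVA-as-VA mode it matches the virtual address, in
     * IOVA-as-PA mode it is the physical address of the backing page. */
    static void
    show_memzone_iova(void)
    {
        const struct rte_memzone *mz =
            rte_memzone_reserve("iova_probe", 4096, SOCKET_ID_ANY, 0);

        if (mz == NULL)
            return;

        printf("va=%p iova=0x%" PRIx64 " mode=%s\n",
               mz->addr, (uint64_t)mz->iova,
               rte_eal_iova_mode() == RTE_IOVA_VA ? "VA" : "PA");
    }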
> 
> 
> This is right.
> 
>     If physaddrs aren't available and IOVA as PA mode is used, then as
>     far as I can remember, even though technically memsegs get their
>     addresses set from 0x0 onwards, the actual addresses we get in
>     memzones etc. are RTE_BAD_IOVA.
> 
> 
> This is not right. Not sure if this was the intention, but if PA mode
> is used and physaddrs are not available, this code inside
> vfio_type1_dma_map:
>
>     if (rte_eal_iova_mode() == RTE_IOVA_VA)
>         dma_map.iova = dma_map.vaddr;
>     else
>         dma_map.iova = ms[i].iova;
>
> does the IOMMU mapping using the iovas and not the vaddr, with the
> iovas starting at 0x0.
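For context, a paraphrased and self-contained sketch of the mapping
path being discussed (not the verbatim DPDK code; the VFIO structure
and ioctl are the standard Linux ones, while the function name and the
'container_fd' parameter are illustrative):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>
    #include <rte_eal.h>
    #include <rte_memory.h>

    /* Map one memseg into a VFIO type1 IOMMU container. The iova handed
     * to the kernel is the virtual address in IOVA-as-VA mode and the
     * memseg's recorded iova otherwise, which is how the 0x0-based
     * values end up as the device-visible addresses. */
    static int
    map_one_memseg(int container_fd, const struct rte_memseg *ms)
    {
        struct vfio_iommu_type1_dma_map dma_map;

        memset(&dma_map, 0, sizeof(dma_map));
        dma_map.argsz = sizeof(dma_map);
        dma_map.vaddr = (uint64_t)(uintptr_t)ms->addr;
        dma_map.size = ms->len;
        dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;

        if (rte_eal_iova_mode() == RTE_IOVA_VA)
            dma_map.iova = dma_map.vaddr;
        else
            dma_map.iova = ms->iova;

        return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
    }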

Yep, you're right, apologies. I confused this with the --no-huge option.

-- 
Thanks,
Anatoly

Thread overview: 16+ messages
2018-06-26 17:37 Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 1/6] eal: add internal " Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 2/6] mem: add hugepages check Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 3/6] eal: check hugepages within dma mask range Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 4/6] mem: add function for setting internal dma mask Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 5/6] ethdev: add function for " Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 6/6] net/nfp: set " Alejandro Lucero
2018-06-27  8:17 ` [dpdk-dev] [RFC] Add support for device " Burakov, Anatoly
2018-06-27 10:13   ` Alejandro Lucero
2018-06-27 13:24     ` Burakov, Anatoly
2018-06-27 16:52       ` Alejandro Lucero
2018-06-28  8:54         ` Burakov, Anatoly [this message]
2018-06-28  9:56           ` Alejandro Lucero
2018-06-28 10:03             ` Burakov, Anatoly
2018-06-28 10:27               ` Alejandro Lucero
2018-06-28 10:30                 ` Burakov, Anatoly
