From: Alejandro Lucero <alejandro.lucero@netronome.com>
To: dev@dpdk.org
Cc: stable@dpdk.org, anatoly.burakov@intel.com
Subject: [dpdk-dev] [RFC] Add support for device dma mask
Date: Tue, 26 Jun 2018 18:37:27 +0100 [thread overview]
Message-ID: <1530034653-28299-1-git-send-email-alejandro.lucero@netronome.com> (raw)
This RFC tries to handle devices with addressing limitations. NFP 4000/6000
devices can only handle 40-bit addresses, which causes problems with physical
addresses on machines with more than 1TB of memory. Moreover, because of how
IOVAs are configured, being either equivalent to physical addresses or based on
virtual addresses, the problem is even more likely to occur.
I tried to solve this some time ago:
https://www.mail-archive.com/dev@dpdk.org/msg45214.html
It was delayed because there were some changes in progress with EAL device
handling and, being honest, I completely forgot about it until now, when I
have had to work on supporting NFP devices with DPDK and non-root users.
I was working on a patch to be applied to the main DPDK branch upstream, but
because of the changes to memory initialization over the last months, it cannot
be backported to stable versions, at least not the part where the hugepage
IOVAs are checked.
I realize stable versions only allow bug fixes, and this patchset could
arguably not be considered one. But without it, DPDK could, however unlikely,
be run on a machine with more than 1TB of memory, with the NFP then using the
wrong DMA host addresses.
Although virtual addresses used as IOVAs are more dangerous, for DPDK versions
before 18.05 this is no worse than with physical addresses, because when
physical addresses are not available, IOVAs are assigned from a starting
address of 0x0. Since 18.05, those IOVAs can be, and usually are, higher than
1TB, as they are based on 64-bit virtual addresses, and by default the kernel
uses a starting point far higher than 1TB.
This patchset applies to stable 17.11.3, but I will be happy to submit patches
for other DPDK stable versions if required.
Thread overview: 16+ messages
2018-06-26 17:37 Alejandro Lucero [this message]
2018-06-26 17:37 ` [dpdk-dev] [PATCH 1/6] eal: add internal " Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 2/6] mem: add hugepages check Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 3/6] eal: check hugepages within dma mask range Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 4/6] mem: add function for setting internal dma mask Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 5/6] ethdev: add function for " Alejandro Lucero
2018-06-26 17:37 ` [dpdk-dev] [PATCH 6/6] net/nfp: set " Alejandro Lucero
2018-06-27 8:17 ` [dpdk-dev] [RFC] Add support for device " Burakov, Anatoly
2018-06-27 10:13 ` Alejandro Lucero
2018-06-27 13:24 ` Burakov, Anatoly
2018-06-27 16:52 ` Alejandro Lucero
2018-06-28 8:54 ` Burakov, Anatoly
2018-06-28 9:56 ` Alejandro Lucero
2018-06-28 10:03 ` Burakov, Anatoly
2018-06-28 10:27 ` Alejandro Lucero
2018-06-28 10:30 ` Burakov, Anatoly