From: Jonas Pfefferle <jpf@zurich.ibm.com>
To: anatoly.burakov@intel.com
Cc: dev@dpdk.org, aik@ozlabs.ru, Jonas Pfefferle <jpf@zurich.ibm.com>
Subject: [dpdk-dev] [PATCH] vfio: fix sPAPR IOMMU DMA window size
Date: Mon, 7 Aug 2017 17:11:05 +0200
Message-ID: <1502118665-27439-1-git-send-email-jpf@zurich.ibm.com>
The DMA window size needs to be big enough to span all memory segments'
physical addresses. We do not need multiple levels of IOMMU tables:
with 16MB hugepages a single level already spans ~70TB of physical memory.
Signed-off-by: Jonas Pfefferle <jpf@zurich.ibm.com>
---
lib/librte_eal/linuxapp/eal/eal_vfio.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index 946df7e..8502216 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -722,6 +722,18 @@ vfio_type1_dma_map(int vfio_container_fd)
return 0;
}
+static uint64_t
+roundup_next_pow2(uint64_t n)
+{
+ uint32_t i;
+
+ n--;
+ for (i = 1; i < sizeof(n) * CHAR_BIT; i += i)
+ n |= n >> i;
+
+ return ++n;
+}
+
static int
vfio_spapr_dma_map(int vfio_container_fd)
{
@@ -759,10 +771,12 @@ vfio_spapr_dma_map(int vfio_container_fd)
return -1;
}
- /* calculate window size based on number of hugepages configured */
- create.window_size = rte_eal_get_physmem_size();
+ /* physical pages are sorted in descending order, i.e. ms[0].phys_addr is the max */
+ /* create DMA window from 0 to max(phys_addr + len) */
+ /* sPAPR requires window size to be a power of 2 */
+ create.window_size = roundup_next_pow2(ms[0].phys_addr + ms[0].len);
create.page_shift = __builtin_ctzll(ms->hugepage_sz);
- create.levels = 2;
+ create.levels = 1;
ret = ioctl(vfio_container_fd, VFIO_IOMMU_SPAPR_TCE_CREATE, &create);
if (ret) {
@@ -771,6 +785,11 @@ vfio_spapr_dma_map(int vfio_container_fd)
return -1;
}
+ if (create.start_addr != 0) {
+ RTE_LOG(ERR, EAL, " DMA window start address != 0\n");
+ return -1;
+ }
+
/* map all DPDK segments for DMA. use 1:1 PA to IOVA mapping */
for (i = 0; i < RTE_MAX_MEMSEG; i++) {
struct vfio_iommu_type1_dma_map dma_map;
--
2.7.4
Thread overview: 5+ messages
2017-08-07 15:11 Jonas Pfefferle [this message]
2017-08-08 7:38 ` Alexey Kardashevskiy
2017-08-08 7:56 ` Jonas Pfefferle1
2017-08-08 8:27 ` Ananyev, Konstantin
2017-08-08 8:47 ` Jonas Pfefferle1