DPDK patches and discussions
From: Drocula <quzeyao@gmail.com>
Cc: dev@dpdk.org, anatoly.burakov@intel.com,
	stephen@networkplumber.org, Drocula <quzeyao@gmail.com>
Subject: [dpdk-dev] [PATCH v2] bus/pci: check if 5-level paging is enabled when testing IOMMU address width
Date: Mon, 13 Aug 2018 12:57:48 +0000
Message-ID: <1534165068-23161-1-git-send-email-quzeyao@gmail.com>
In-Reply-To: <1533494497-16253-1-git-send-email-quzeyao@gmail.com>

Linux kernel 4.14 was released with support for 5-level paging.
When PML5 is enabled, user-space virtual addresses use up to 56 bits;
see the kernel's Documentation/x86/x86_64/mm.txt.

Signed-off-by: ZY Qiu <quzeyao@gmail.com>
---
 drivers/bus/pci/linux/pci.c | 33 ++++++++++++++++++++++++++++++---
 1 file changed, 30 insertions(+), 3 deletions(-)

diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 04648ac..acc19df 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -4,6 +4,7 @@
 
 #include <string.h>
 #include <dirent.h>
+#include <sys/mman.h>
 
 #include <rte_log.h>
 #include <rte_bus.h>
@@ -552,16 +553,39 @@
 }
 
 #if defined(RTE_ARCH_X86)
+/*
+ * Try to detect whether the system uses 5-level page table.
+ */
+static bool
+system_uses_PML5(void)
+{
+#define X86_56_BIT_VA (0xfULL << 52)
+	void *page_4k;
+	page_4k = mmap((void *)X86_56_BIT_VA, 4096, PROT_READ | PROT_WRITE,
+		MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+	if (page_4k == MAP_FAILED)
+		return false;
+	munmap(page_4k, 4096);
+
+	if ((unsigned long long)page_4k & X86_56_BIT_VA)
+		return true;
+	return false;
+}
+
 static bool
 pci_one_device_iommu_support_va(struct rte_pci_device *dev)
 {
 #define VTD_CAP_MGAW_SHIFT	16
 #define VTD_CAP_MGAW_MASK	(0x3fULL << VTD_CAP_MGAW_SHIFT)
-#define X86_VA_WIDTH 47 /* From Documentation/x86/x86_64/mm.txt */
+/*  From Documentation/x86/x86_64/mm.txt */
+#define X86_VA_WIDTH_PML4 47
+#define X86_VA_WIDTH_PML5 56
+
 	struct rte_pci_addr *addr = &dev->addr;
 	char filename[PATH_MAX];
 	FILE *fp;
-	uint64_t mgaw, vtd_cap_reg = 0;
+	uint64_t mgaw, vtd_cap_reg = 0, va_width = X86_VA_WIDTH_PML4;
 
 	snprintf(filename, sizeof(filename),
 		 "%s/" PCI_PRI_FMT "/iommu/intel-iommu/cap",
@@ -587,8 +611,11 @@
 
 	fclose(fp);
 
+	if (system_uses_PML5())
+		va_width = X86_VA_WIDTH_PML5;
+
 	mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
-	if (mgaw < X86_VA_WIDTH)
+	if (mgaw < va_width)
 		return false;
 
 	return true;
-- 
1.8.3.1
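
[Editor's note] For anyone who wants to try the detection approach outside
of DPDK, below is a minimal standalone sketch. It is not part of the patch;
the file name and the probe_pml5() helper are mine for illustration. It uses
the same trick as system_uses_PML5() above: pass mmap() a hint address above
the 47-bit PML4 limit and check whether the kernel places the mapping there.
On a 4-level kernel the out-of-range hint is ignored and the mapping lands
below 1 << 47, so the probe returns false.

#define _DEFAULT_SOURCE
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* Hint address with bits 52-55 set, i.e. above the 47-bit PML4 range. */
#define X86_56_BIT_VA (0xfULL << 52)

static bool
probe_pml5(void)
{
	void *p = mmap((void *)X86_56_BIT_VA, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return false;
	munmap(p, 4096);

	/* Only a 5-level kernel can honour a hint this high. */
	return ((uintptr_t)p & X86_56_BIT_VA) != 0;
}

int
main(void)
{
	printf("5-level paging: %s\n", probe_pml5() ? "yes" : "no");
	return 0;
}

Build it with a plain "cc -o pml5-probe pml5-probe.c" and run it on both a
4-level and a 5-level kernel to compare. The patch then feeds this result
into the MGAW comparison: for example, an IOMMU whose cap register MGAW
field reads 47 advertises mgaw = 48 bits, which satisfies the 47-bit PML4
check but would fail the 56-bit PML5 check.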

Thread overview: 9+ messages
2018-08-05 18:41 [dpdk-dev] [PATCH] " Drocula
2018-08-09 10:49 ` Burakov, Anatoly
2018-08-10  7:51   ` Drocula
2018-08-09 17:03 ` Stephen Hemminger
2018-08-10  8:35   ` Drocula
2018-08-10  9:18     ` Burakov, Anatoly
2018-08-13 12:57 ` Drocula [this message]
2018-10-28 18:22   ` [dpdk-dev] [PATCH v2] " Thomas Monjalon
2023-06-09 16:31 ` [PATCH] " Stephen Hemminger
