DPDK patches and discussions
From: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
To: <dev@dpdk.org>
Cc: <stable@dpdk.org>, Tal Shnaiderman <talshn@oss.nvidia.com>,
	Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
	Narcisa Ana Maria Vasile <navasile@linux.microsoft.com>,
	Dmitry Malloy <dmitrym@microsoft.com>,
	Pallavi Kadam <pallavi.kadam@intel.com>
Subject: [dpdk-dev] [PATCH] eal/windows: fix IOVA mode detection and handling
Date: Mon, 25 Oct 2021 15:20:52 +0300
Message-ID: <20211025122053.326790-1-dkozlyuk@nvidia.com>

Windows EAL did not detect the IOVA mode and worked incorrectly
when physical addresses could not be obtained
(that is, when the virt2phys driver was missing or inaccessible).
In this case, rte_mem_virt2iova() reported RTE_BAD_IOVA for any address.
Inability to obtain an IOVA, be it a PA or a VA, should cause a failure
of the DPDK allocator, but the implementation hid this failure,
so allocations did not fail when they should have.
The mode in which DPDK can work without obtaining PAs is IOVA-as-VA.
However, rte_eal_iova_mode() always returned RTE_IOVA_DC
(while it should only ever return RTE_IOVA_PA or RTE_IOVA_VA),
because IOVA mode detection was not implemented.
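
For illustration, a minimal check of the invariants this patch restores
(check_iova_invariants() is hypothetical application code, not part of
the patch):

  #include <assert.h>
  #include <rte_eal.h>
  #include <rte_malloc.h>
  #include <rte_memory.h>

  /* Call after a successful rte_eal_init(). */
  static void
  check_iova_invariants(void)
  {
      /* A concrete IOVA mode must have been selected. */
      enum rte_iova_mode mode = rte_eal_iova_mode();
      assert(mode == RTE_IOVA_PA || mode == RTE_IOVA_VA);

      /* Memory from the DPDK allocator must have a usable IOVA. */
      void *obj = rte_malloc(NULL, 64, 0);
      assert(obj != NULL);
      assert(rte_mem_virt2iova(obj) != RTE_BAD_IOVA);
      rte_free(obj);
  }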

Implement IOVA mode detection (condensed in the sketch below):
1. Always allow forcing --iova-mode=va.
2. Allow forcing --iova-mode=pa only if virt2phys is available.
3. If no mode is forced and virt2phys is available,
   select the mode according to bus requests, defaulting to PA.
4. If no mode is forced but virt2phys is unavailable, default to VA.
Fix rte_mem_virt2iova() to return the VA when using IOVA-as-VA.
Fix rte_eal_iova_mode() to return the selected mode.
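
In condensed form, the selection reduces to the following sketch
(select_iova_mode() is a hypothetical helper, not part of the patch;
the authoritative code is in the eal.c hunk below):

  #include <stdbool.h>
  #include <rte_bus.h>
  #include <rte_eal.h>

  /* Mirrors rules 1-4 above; returns RTE_IOVA_DC to signal rejection. */
  static enum rte_iova_mode
  select_iova_mode(enum rte_iova_mode forced, bool has_phys_addr)
  {
      if (forced == RTE_IOVA_PA && !has_phys_addr)
          return RTE_IOVA_DC; /* rule 2: PA requires virt2phys */
      if (forced != RTE_IOVA_DC)
          return forced; /* rules 1 and 2: honor --iova-mode */
      if (!has_phys_addr)
          return RTE_IOVA_VA; /* rule 4: no PA, fall back to VA */
      forced = rte_bus_get_iommu_class(); /* rule 3: ask the buses */
      return forced == RTE_IOVA_DC ? RTE_IOVA_PA : forced;
  }

For example, --iova-mode=va now always succeeds, while --iova-mode=pa
makes rte_eal_init() fail unless the virt2phys driver is accessible.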

Fixes: 2a5d547a4a9b ("eal/windows: implement basic memory management")
Cc: stable@dpdk.org

Reported-by: Tal Shnaiderman <talshn@nvidia.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
The Fixes tag points to the commit that introduced the wrong behavior.
Commit fec28ca0e3a9 ("net/mlx5: support mempool registration") exposed it,
because since commit 11541c5c81dd ("mempool: add non-IO flag")
RTE_MEMPOOL_F_NON_IO was mistakenly set for all mempools on Windows
whenever virt2phys was not available.

 lib/eal/windows/eal.c          | 63 ++++++++++++++++++++++++----------
 lib/eal/windows/eal_memalloc.c | 15 +++-----
 lib/eal/windows/eal_memory.c   |  6 ++--
 3 files changed, 51 insertions(+), 33 deletions(-)

diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 3d8c520412..f7ce1b6671 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -276,6 +276,8 @@ rte_eal_init(int argc, char **argv)
 	const struct rte_config *config = rte_eal_get_configuration();
 	struct internal_config *internal_conf =
 		eal_get_internal_configuration();
+	bool has_phys_addr;
+	enum rte_iova_mode iova_mode;
 	int ret;
 
 	eal_log_init(NULL, 0);
@@ -322,18 +324,59 @@ rte_eal_init(int argc, char **argv)
 			internal_conf->memory = MEMSIZE_IF_NO_HUGE_PAGE;
 	}
 
+	if (rte_eal_intr_init() < 0) {
+		rte_eal_init_alert("Cannot init interrupt-handling thread");
+		return -1;
+	}
+
+	if (rte_eal_timer_init() < 0) {
+		rte_eal_init_alert("Cannot init TSC timer");
+		rte_errno = EFAULT;
+		return -1;
+	}
+
+	bscan = rte_bus_scan();
+	if (bscan < 0) {
+		rte_eal_init_alert("Cannot scan the buses");
+		rte_errno = ENODEV;
+		return -1;
+	}
+
 	if (eal_mem_win32api_init() < 0) {
 		rte_eal_init_alert("Cannot access Win32 memory management");
 		rte_errno = ENOTSUP;
 		return -1;
 	}
 
+	has_phys_addr = true;
 	if (eal_mem_virt2iova_init() < 0) {
 		/* Non-fatal error if physical addresses are not required. */
-		RTE_LOG(WARNING, EAL, "Cannot access virt2phys driver, "
+		RTE_LOG(DEBUG, EAL, "Cannot access virt2phys driver, "
 			"PA will not be available\n");
+		has_phys_addr = false;
 	}
 
+	iova_mode = internal_conf->iova_mode;
+	if (iova_mode == RTE_IOVA_PA && !has_phys_addr) {
+		rte_eal_init_alert("Cannot use IOVA as 'PA' since physical addresses are not available");
+		rte_errno = EINVAL;
+		return -1;
+	}
+	if (iova_mode == RTE_IOVA_DC) {
+		RTE_LOG(DEBUG, EAL, "Specific IOVA mode is not requested, autodetecting\n");
+		if (has_phys_addr) {
+			RTE_LOG(DEBUG, EAL, "Selecting IOVA mode according to bus requests\n");
+			iova_mode = rte_bus_get_iommu_class();
+			if (iova_mode == RTE_IOVA_DC)
+				iova_mode = RTE_IOVA_PA;
+		} else {
+			iova_mode = RTE_IOVA_VA;
+		}
+	}
+	RTE_LOG(DEBUG, EAL, "Selected IOVA mode '%s'\n",
+		iova_mode == RTE_IOVA_PA ? "PA" : "VA");
+	rte_eal_get_configuration()->iova_mode = iova_mode;
+
 	if (rte_eal_memzone_init() < 0) {
 		rte_eal_init_alert("Cannot init memzone");
 		rte_errno = ENODEV;
@@ -358,27 +401,9 @@ rte_eal_init(int argc, char **argv)
 		return -1;
 	}
 
-	if (rte_eal_intr_init() < 0) {
-		rte_eal_init_alert("Cannot init interrupt-handling thread");
-		return -1;
-	}
-
-	if (rte_eal_timer_init() < 0) {
-		rte_eal_init_alert("Cannot init TSC timer");
-		rte_errno = EFAULT;
-		return -1;
-	}
-
 	__rte_thread_init(config->main_lcore,
 		&lcore_config[config->main_lcore].cpuset);
 
-	bscan = rte_bus_scan();
-	if (bscan < 0) {
-		rte_eal_init_alert("Cannot init PCI");
-		rte_errno = ENODEV;
-		return -1;
-	}
-
 	RTE_LCORE_FOREACH_WORKER(i) {
 
 		/*
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 4459d59b1a..55d6dcc71c 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -99,16 +99,11 @@ alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
 	 */
 	*(volatile int *)addr = *(volatile int *)addr;
 
-	/* Only try to obtain IOVA if it's available, so that applications
-	 * that do not need IOVA can use this allocator.
-	 */
-	if (rte_eal_using_phys_addrs()) {
-		iova = rte_mem_virt2iova(addr);
-		if (iova == RTE_BAD_IOVA) {
-			RTE_LOG(DEBUG, EAL,
-				"Cannot get IOVA of allocated segment\n");
-			goto error;
-		}
+	iova = rte_mem_virt2iova(addr);
+	if (iova == RTE_BAD_IOVA) {
+		RTE_LOG(DEBUG, EAL,
+			"Cannot get IOVA of allocated segment\n");
+		goto error;
 	}
 
 	/* Only "Ex" function can handle hugepages. */
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 71741fc07e..2fd37d9708 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -225,19 +225,17 @@ rte_mem_virt2phy(const void *virt)
 	return phys.QuadPart;
 }
 
-/* Windows currently only supports IOVA as PA. */
 rte_iova_t
 rte_mem_virt2iova(const void *virt)
 {
 	phys_addr_t phys;
 
-	if (virt2phys_device == INVALID_HANDLE_VALUE)
-		return RTE_BAD_IOVA;
+	if (rte_eal_iova_mode() == RTE_IOVA_VA)
+		return (rte_iova_t)virt;
 
 	phys = rte_mem_virt2phy(virt);
 	if (phys == RTE_BAD_PHYS_ADDR)
 		return RTE_BAD_IOVA;
-
 	return (rte_iova_t)phys;
 }
 
-- 
2.25.1

