DPDK patches and discussions
 help / color / mirror / Atom feed
* [dpdk-dev] [PATCH v2 0/7] fix DMA mask check
@ 2018-11-01 19:53 Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 1/7] mem: fix call to " Alejandro Lucero
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

A patchset introducing DMA mask checks was sent with several critical
issues that prevent applications from executing. The patchset was
reviewed and finally accepted after three versions. Obviously it did
not go through proper testing, which can be explained, at least on my
side, by the big changes to the memory initialization code over the
last months. It turns out the patchset did work with legacy memory,
and I'm afraid that was mainly what my testing covered.

This patchset should solve the main problems reported:

 - deadlock during initialization
 - segmentation fault with secondary processes

For solving the deadlock, a new API is introduced:

 rte_mem_check_dma_mask_safe/unsafe

making the previous rte_mem_check_dma_mask the function those new ones
end up calling. A boolean parameter is used for calling rte_memseg_walk
in a thread-safe or thread-unsafe way. The second option is needed for
avoiding the deadlock.

For the secondary processes problem, the call to check the DMA mask is
avoided in code executed before the memory initialization. Instead, a
new API function, rte_mem_set_dma_mask, is introduced, which will be
used in those cases. The DMA mask check is done once the memory
initialization is completed.

This last change implies the IOVA mode can not be set depending on
IOMMU hardware limitations, and it is assumed IOVA VA is possible. If
the DMA mask check reports a problem after memory initialization, the
error message now advises trying the --iova-mode option set to pa.

The patchset also includes the DMA mask check for legacy memory and
the no-hugepage option.

Finally, all the DMA mask API has been updated to use the same prefix
as other EAL memory code.

An initial version of this patchset has been tested by the Intel DPDK
Validation team and it seems to solve all the problems reported. This
final patchset has the same functionality with minor changes. I have
successfully tested the patchset with my limited testbench.

v2:
 - modify error messages with more descriptive information
 - change the safe/unsafe versions of the DMA check to the previous one
   plus a thread_unsafe version
 - use rte_eal_using_phys_addrs instead of getuid
 - fix comments
 - reorder patches

Alejandro Lucero (7):
  mem: fix call to DMA mask check
  mem: use proper prefix
  mem: add function for setting DMA mask
  bus/pci: avoid call to DMA mask check
  mem: modify error message for DMA mask check
  eal/mem: use DMA mask check for legacy memory
  mem: add thread unsafe version for checking DMA mask

 doc/guides/rel_notes/release_18_11.rst     |  2 +-
 drivers/bus/pci/linux/pci.c                | 11 +++++-
 drivers/net/nfp/nfp_net.c                  |  2 +-
 lib/librte_eal/common/eal_common_memory.c  | 42 +++++++++++++++++++--
 lib/librte_eal/common/include/rte_memory.h | 41 ++++++++++++++++++++-
 lib/librte_eal/common/malloc_heap.c        | 43 ++++++++++++++++++----
 lib/librte_eal/linuxapp/eal/eal_memory.c   | 20 ++++++++++
 lib/librte_eal/rte_eal_version.map         |  4 +-
 8 files changed, 148 insertions(+), 17 deletions(-)

-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 1/7] mem: fix call to DMA mask check
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 2/7] mem: use proper prefix Alejandro Lucero
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

The parameter needs to be the maskbits and not the mask.

Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/malloc_heap.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 1973b6e6e..d1019e3cd 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -294,7 +294,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	size_t alloc_sz;
 	int allocd_pages;
 	void *ret, *map_addr;
-	uint64_t mask;
 
 	alloc_sz = (size_t)pg_sz * n_segs;
 
@@ -323,8 +322,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	}
 
 	if (mcfg->dma_maskbits) {
-		mask = ~((1ULL << mcfg->dma_maskbits) - 1);
-		if (rte_eal_check_dma_mask(mask)) {
+		if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) {
 			RTE_LOG(ERR, EAL,
 				"%s(): couldn't allocate memory due to DMA mask\n",
 				__func__);
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 2/7] mem: use proper prefix
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 1/7] mem: fix call to " Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 3/7] mem: add function for setting DMA mask Alejandro Lucero
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

Current name rte_eal_check_dma_mask does not follow the naming
used in the rest of the file.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 doc/guides/rel_notes/release_18_11.rst     | 2 +-
 drivers/bus/pci/linux/pci.c                | 2 +-
 drivers/net/nfp/nfp_net.c                  | 2 +-
 lib/librte_eal/common/eal_common_memory.c  | 4 ++--
 lib/librte_eal/common/include/rte_memory.h | 2 +-
 lib/librte_eal/common/malloc_heap.c        | 2 +-
 lib/librte_eal/rte_eal_version.map         | 2 +-
 7 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 376128f68..11a27405c 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -63,7 +63,7 @@ New Features
 * **Added check for ensuring allocated memory addressable by devices.**
 
   Some devices can have addressing limitations so a new function,
-  ``rte_eal_check_dma_mask``, has been added for checking allocated memory is
+  ``rte_mem_check_dma_mask``, has been added for checking allocated memory is
   not out of the device range. Because now memory can be dynamically allocated
   after initialization, a dma mask is kept and any new allocated memory will be
   checked out against that dma mask and rejected if out of range. If more than
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 45c24ef7e..0a81e063b 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -590,7 +590,7 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
 
 	mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
 
-	return rte_eal_check_dma_mask(mgaw) == 0 ? true : false;
+	return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
 }
 #elif defined(RTE_ARCH_PPC_64)
 static bool
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index bab1f68eb..54c6da924 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -2703,7 +2703,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 
 	/* NFP can not handle DMA addresses requiring more than 40 bits */
-	if (rte_eal_check_dma_mask(40)) {
+	if (rte_mem_check_dma_mask(40)) {
 		RTE_LOG(ERR, PMD, "device %s can not be used:",
 				   pci_dev->device.name);
 		RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n");
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 12dcedf5c..e0f08f39a 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -49,7 +49,7 @@ static uint64_t system_page_sz;
  * Current known limitations are 39 or 40 bits. Setting the starting address
  * at 4GB implies there are 508GB or 1020GB for mapping the available
  * hugepages. This is likely enough for most systems, although a device with
- * addressing limitations should call rte_eal_check_dma_mask for ensuring all
+ * addressing limitations should call rte_mem_check_dma_mask for ensuring all
  * memory is within supported range.
  */
 static uint64_t baseaddr = 0x100000000;
@@ -447,7 +447,7 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
 
 /* check memseg iovas are within the required range based on dma mask */
 int __rte_experimental
-rte_eal_check_dma_mask(uint8_t maskbits)
+rte_mem_check_dma_mask(uint8_t maskbits)
 {
 	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
 	uint64_t mask;
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index ce9370582..ad3f3cfb0 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -464,7 +464,7 @@ unsigned rte_memory_get_nchannel(void);
 unsigned rte_memory_get_nrank(void);
 
 /* check memsegs iovas are within a range based on dma mask */
-int __rte_experimental rte_eal_check_dma_mask(uint8_t maskbits);
+int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
 
 /**
  * Drivers based on uio will not load unless physical
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index d1019e3cd..4997c5ef5 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -322,7 +322,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	}
 
 	if (mcfg->dma_maskbits) {
-		if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) {
+		if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
 			RTE_LOG(ERR, EAL,
 				"%s(): couldn't allocate memory due to DMA mask\n",
 				__func__);
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 04f624246..4eb16ee3b 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -295,7 +295,6 @@ EXPERIMENTAL {
 	rte_devargs_parsef;
 	rte_devargs_remove;
 	rte_devargs_type_count;
-	rte_eal_check_dma_mask;
 	rte_eal_cleanup;
 	rte_fbarray_attach;
 	rte_fbarray_destroy;
@@ -331,6 +330,7 @@ EXPERIMENTAL {
 	rte_malloc_heap_socket_is_external;
 	rte_mem_alloc_validator_register;
 	rte_mem_alloc_validator_unregister;
+	rte_mem_check_dma_mask;
 	rte_mem_event_callback_register;
 	rte_mem_event_callback_unregister;
 	rte_mem_iova2virt;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 3/7] mem: add function for setting DMA mask
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 1/7] mem: fix call to " Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 2/7] mem: use proper prefix Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

This patch adds the possibility of setting a DMA mask to be used
once the memory initialization is done.

This is currently needed when the IOVA mode is set by PCI-related
code and an x86 IOMMU hardware unit is present. Current code calls
rte_mem_check_dma_mask, but it is wrong to do so at that point
because the memory has not been initialized yet.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/eal_common_memory.c  | 16 ++++++++++++++++
 lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
 lib/librte_eal/rte_eal_version.map         |  1 +
 3 files changed, 27 insertions(+)

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index e0f08f39a..cc4f1d80f 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -480,6 +480,22 @@ rte_mem_check_dma_mask(uint8_t maskbits)
 	return 0;
 }
 
+/*
+ * Set dma mask to use when memory initialization is done.
+ *
+ * This function should ONLY be used by code executed before the memory
+ * initialization. PMDs should use rte_mem_check_dma_mask if addressing
+ * limitations by the device.
+ */
+void __rte_experimental
+rte_mem_set_dma_mask(uint8_t maskbits)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+
+	mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
+			     RTE_MIN(mcfg->dma_maskbits, maskbits);
+}
+
 /* return the number of memory channels */
 unsigned rte_memory_get_nchannel(void)
 {
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index ad3f3cfb0..abbfe2364 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
 /* check memsegs iovas are within a range based on dma mask */
 int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ *  Set dma mask to use once memory initialization is done.
+ *  Previous function rte_mem_check_dma_mask can not be used
+ *  safely until memory has been initialized.
+ */
+void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
+
 /**
  * Drivers based on uio will not load unless physical
  * addresses are obtainable. It is only possible to get
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 4eb16ee3b..51ee948ba 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -334,6 +334,7 @@ EXPERIMENTAL {
 	rte_mem_event_callback_register;
 	rte_mem_event_callback_unregister;
 	rte_mem_iova2virt;
+	rte_mem_set_dma_mask;
 	rte_mem_virt2memseg;
 	rte_mem_virt2memseg_list;
 	rte_memseg_contig_walk;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 4/7] bus/pci: avoid call to DMA mask check
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
                   ` (2 preceding siblings ...)
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 3/7] mem: add function for setting DMA mask Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 5/7] mem: modify error message for " Alejandro Lucero
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

Calling rte_mem_check_dma_mask when memory has not been initialized
yet is wrong. This patch uses rte_mem_set_dma_mask instead.

Once memory initialization is done, the DMA mask set will be used
for checking that mapped memory is within the specified mask.

Fixes: fe822eb8c565 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 drivers/bus/pci/linux/pci.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 0a81e063b..d87384c72 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -590,7 +590,16 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
 
 	mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
 
-	return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
+	/*
+	 * Assuming there is no limitation by now. We can not know at this point
+	 * because the memory has not been initialized yet. Setting the dma mask
+	 * will force a check once memory initialization is done. We can not do
+	 * a fallback to IOVA PA now, but if the dma check fails, the error
+	 * message should advise using '--iova-mode pa' if IOVA VA is the
+	 * current mode.
+	 */
+	rte_mem_set_dma_mask(mgaw);
+	return true;
 }
 #elif defined(RTE_ARCH_PPC_64)
 static bool
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
                   ` (3 preceding siblings ...)
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-05 10:01   ` Li, WenjieX A
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 6/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

If the DMA mask check shows mapped memory out of the supported range
specified by the DMA mask, nothing can be done but report the error
and return. This can imply the app not being executed at all, or
dynamic memory allocation being precluded once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA is
currently VA and the user is root.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/malloc_heap.c | 41 +++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 4997c5ef5..c2c112aa6 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -321,13 +321,44 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 		goto fail;
 	}
 
-	if (mcfg->dma_maskbits) {
-		if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+	/*
+	 * Once we have all the memseg lists configured, if there is a dma mask
+	 * set, check iova addresses are not out of range. Otherwise the device
+	 * setting the dma mask could have problems with the mapped memory.
+	 *
+	 * There are two situations when this can happen:
+	 *	1) memory initialization
+	 *	2) dynamic memory allocation
+	 *
+	 * For 1), an error when checking dma mask implies app can not be
+	 * executed. For 2) implies the new memory can not be added.
+	 */
+	if (mcfg->dma_maskbits &&
+	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		/*
+		 * Currently this can only happen if IOMMU is enabled
+		 * and the address width supported by the IOMMU hw is
+		 * not enough for using the memory mapped IOVAs.
+		 *
+		 * If IOVA is VA, advise trying '--iova-mode pa',
+		 * which could solve some situations when IOVA VA is not
+		 * really needed.
+		 */
+		RTE_LOG(ERR, EAL,
+			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n",
+			__func__);
+
+		/*
+		 * If IOVA is VA and it is possible to run with IOVA PA,
+		 * because the user is root, give advice for solving the
+		 * problem.
+		 */
+		if ((rte_eal_iova_mode() == RTE_IOVA_VA) &&
+		     rte_eal_using_phys_addrs())
 			RTE_LOG(ERR, EAL,
-				"%s(): couldn't allocate memory due to DMA mask\n",
+				"%s(): Please try initializing EAL with --iova-mode=pa parameter\n",
 				__func__);
-			goto fail;
-		}
+		goto fail;
 	}
 
 	/* add newly minted memsegs to malloc heap */
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 6/7] eal/mem: use DMA mask check for legacy memory
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
                   ` (4 preceding siblings ...)
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 5/7] mem: modify error message for " Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 7/7] mem: add thread unsafe version for checking DMA mask Alejandro Lucero
  2018-11-02 18:38 ` [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Ferruh Yigit
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

If a device reports addressing limitations through a DMA mask,
the IOVAs for mapped memory need to be checked to ensure correct
functionality.

Previous patches introduced this DMA check for the main memory code
currently being used, but other options like legacy memory and the
no-hugepages option also need to be considered.

This patch adds the DMA check for those cases.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..c7935879a 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,18 @@ eal_legacy_hugepage_init(void)
 
 			addr = RTE_PTR_ADD(addr, (size_t)page_sz);
 		}
+		if (mcfg->dma_maskbits &&
+		    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+			RTE_LOG(ERR, EAL,
+				"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+				__func__);
+			if (rte_eal_iova_mode() == RTE_IOVA_VA &&
+			    rte_eal_using_phys_addrs())
+				RTE_LOG(ERR, EAL,
+					"%s(): Please try initializing EAL with --iova-mode=pa parameter.\n",
+					__func__);
+			goto fail;
+		}
 		return 0;
 	}
 
@@ -1628,6 +1640,14 @@ eal_legacy_hugepage_init(void)
 		rte_fbarray_destroy(&msl->memseg_arr);
 	}
 
+	if (mcfg->dma_maskbits &&
+	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		RTE_LOG(ERR, EAL,
+			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
+			__func__);
+		goto fail;
+	}
+
 	return 0;
 
 fail:
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [dpdk-dev] [PATCH v2 7/7] mem: add thread unsafe version for checking DMA mask
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
                   ` (5 preceding siblings ...)
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 6/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
@ 2018-11-01 19:53 ` Alejandro Lucero
  2018-11-02 18:38 ` [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Ferruh Yigit
  7 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-01 19:53 UTC (permalink / raw)
  To: dev

During memory initialization, calling rte_mem_check_dma_mask
leads to a deadlock because memory_hotplug_lock is already locked
by a writer (the current code in execution) while rte_memseg_walk
tries to lock it as a reader.

This patch adds a thread_unsafe version, which will call the final
function specifying that memory_hotplug_lock does not need to be
acquired. The patch also modifies rte_mem_check_dma_mask to be an
intermediate step which will call the final function as before,
implying memory_hotplug_lock will be acquired.

PMDs should always use the version acquiring the lock, with the
thread_unsafe one being just for internal EAL memory code.

Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 lib/librte_eal/common/eal_common_memory.c  | 24 +++++++++++++--
 lib/librte_eal/common/include/rte_memory.h | 35 +++++++++++++++++++---
 lib/librte_eal/common/malloc_heap.c        |  2 +-
 lib/librte_eal/linuxapp/eal/eal_memory.c   |  4 +--
 lib/librte_eal/rte_eal_version.map         |  1 +
 5 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index cc4f1d80f..87fd9921f 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -446,11 +446,12 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
 #endif
 
 /* check memseg iovas are within the required range based on dma mask */
-int __rte_experimental
-rte_mem_check_dma_mask(uint8_t maskbits)
+static int __rte_experimental
+check_dma_mask(uint8_t maskbits, bool thread_unsafe)
 {
 	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
 	uint64_t mask;
+	int ret;
 
 	/* sanity check */
 	if (maskbits > MAX_DMA_MASK_BITS) {
@@ -462,7 +463,12 @@ rte_mem_check_dma_mask(uint8_t maskbits)
 	/* create dma mask */
 	mask = ~((1ULL << maskbits) - 1);
 
-	if (rte_memseg_walk(check_iova, &mask))
+	if (thread_unsafe)
+		ret = rte_memseg_walk_thread_unsafe(check_iova, &mask);
+	else
+		ret = rte_memseg_walk(check_iova, &mask);
+
+	if (ret)
 		/*
 		 * Dma mask precludes hugepage usage.
 		 * This device can not be used and we do not need to keep
@@ -480,6 +486,18 @@ rte_mem_check_dma_mask(uint8_t maskbits)
 	return 0;
 }
 
+int __rte_experimental
+rte_mem_check_dma_mask(uint8_t maskbits)
+{
+	return check_dma_mask(maskbits, false);
+}
+
+int __rte_experimental
+rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
+{
+	return check_dma_mask(maskbits, true);
+}
+
 /*
  * Set dma mask to use when memory initialization is done.
  *
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index abbfe2364..d970825df 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -463,16 +463,43 @@ unsigned rte_memory_get_nchannel(void);
  */
 unsigned rte_memory_get_nrank(void);
 
-/* check memsegs iovas are within a range based on dma mask */
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check if all currently allocated memory segments are compliant with
+ * supplied DMA address width.
+ *
+ *  @param maskbits
+ *    Address width to check against.
+ */
 int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
 
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change without prior notice
  *
- *  Set dma mask to use once memory initialization is done.
- *  Previous function rte_mem_check_dma_mask can not be used
- *  safely until memory has been initialized.
+ * Check if all currently allocated memory segments are compliant with
+ * supplied DMA address width. This function will use
+ * rte_memseg_walk_thread_unsafe instead of rte_memseg_walk implying
+ * memory_hotplug_lock will not be acquired avoiding deadlock during
+ * memory initialization.
+ *
+ * This function is just for EAL core memory internal use. Drivers should
+ * use the previous rte_mem_check_dma_mask.
+ *
+ *  @param maskbits
+ *    Address width to check against.
+ */
+int __rte_experimental rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ *  Set dma mask to use once memory initialization is done. Previous functions
+ *  rte_mem_check_dma_mask and rte_mem_check_dma_mask_thread_unsafe can not be
+ *  used safely until memory has been initialized.
  */
 void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
 
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index c2c112aa6..c6a6d4f6b 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -334,7 +334,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 	 * executed. For 2) implies the new memory can not be added.
 	 */
 	if (mcfg->dma_maskbits &&
-	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+	    rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
 		/*
 		 * Currently this can only happen if IOMMU is enabled
 		 * and the address width supported by the IOMMU hw is
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index c7935879a..c1b5e0791 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1394,7 +1394,7 @@ eal_legacy_hugepage_init(void)
 			addr = RTE_PTR_ADD(addr, (size_t)page_sz);
 		}
 		if (mcfg->dma_maskbits &&
-		    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		    rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
 			RTE_LOG(ERR, EAL,
 				"%s(): couldnt allocate memory due to IOVA exceeding limits of current DMA mask.\n",
 				__func__);
@@ -1641,7 +1641,7 @@ eal_legacy_hugepage_init(void)
 	}
 
 	if (mcfg->dma_maskbits &&
-	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+	    rte_mem_check_dma_mask_thread_unsafe(mcfg->dma_maskbits)) {
 		RTE_LOG(ERR, EAL,
 			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask.\n",
 			__func__);
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 51ee948ba..61bc5ca05 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -331,6 +331,7 @@ EXPERIMENTAL {
 	rte_mem_alloc_validator_register;
 	rte_mem_alloc_validator_unregister;
 	rte_mem_check_dma_mask;
+	rte_mem_check_dma_mask_thread_unsafe;
 	rte_mem_event_callback_register;
 	rte_mem_event_callback_unregister;
 	rte_mem_iova2virt;
-- 
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/7] fix DMA mask check
  2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
                   ` (6 preceding siblings ...)
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 7/7] mem: add thread unsafe version for checking DMA mask Alejandro Lucero
@ 2018-11-02 18:38 ` Ferruh Yigit
  2018-11-05  0:03   ` Thomas Monjalon
  7 siblings, 1 reply; 20+ messages in thread
From: Ferruh Yigit @ 2018-11-02 18:38 UTC (permalink / raw)
  To: Alejandro Lucero, dev

On 11/1/2018 7:53 PM, Alejandro Lucero wrote:
> A patchset introducing DMA mask checks was sent with several critical
> issues that prevent applications from executing. The patchset was
> reviewed and finally accepted after three versions. Obviously it did
> not go through proper testing, which can be explained, at least on my
> side, by the big changes to the memory initialization code over the
> last months. It turns out the patchset did work with legacy memory,
> and I'm afraid that was mainly what my testing covered.
> 
> This patchset should solve the main problems reported:
> 
>  - deadlock during initialization
>  - segmentation fault with secondary processes
> 
> For solving the deadlock, a new API is introduced:
> 
>  rte_mem_check_dma_mask_safe/unsafe
> 
> making the previous rte_mem_check_dma_mask the function those new ones
> end up calling. A boolean parameter is used for calling rte_memseg_walk
> in a thread-safe or thread-unsafe way. The second option is needed for
> avoiding the deadlock.
> 
> For the secondary processes problem, the call to check the DMA mask is
> avoided in code executed before the memory initialization. Instead, a
> new API function, rte_mem_set_dma_mask, is introduced, which will be
> used in those cases. The DMA mask check is done once the memory
> initialization is completed.
> 
> This last change implies the IOVA mode can not be set depending on
> IOMMU hardware limitations, and it is assumed IOVA VA is possible. If
> the DMA mask check reports a problem after memory initialization, the
> error message now advises trying the --iova-mode option set to pa.
> 
> The patchset also includes the DMA mask check for legacy memory and
> the no-hugepage option.
> 
> Finally, all the DMA mask API has been updated to use the same prefix
> as other EAL memory code.
> 
> An initial version of this patchset has been tested by the Intel DPDK
> Validation team and it seems to solve all the problems reported. This
> final patchset has the same functionality with minor changes. I have
> successfully tested the patchset with my limited testbench.
> 
> v2:
>  - modify error messages with more descriptive information
>  - replace the safe/unsafe versions of the dma check with the
>    previous function plus a thread_unsafe version
>  - use rte_eal_using_phys_addr instead of getuid
>  - fix comments
>  - reorder patches
> 
> Alejandro Lucero (7):
>   mem: fix call to DMA mask check
>   mem: use proper prefix
>   mem: add function for setting DMA mask
>   bus/pci: avoid call to DMA mask check
>   mem: modify error message for DMA mask check
>   eal/mem: use DMA mask check for legacy memory
>   mem: add thread unsafe version for checking DMA mask

Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>

Fixing the deadlock issue, and build/per-patch build is good.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/7] fix DMA mask check
  2018-11-02 18:38 ` [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Ferruh Yigit
@ 2018-11-05  0:03   ` Thomas Monjalon
  0 siblings, 0 replies; 20+ messages in thread
From: Thomas Monjalon @ 2018-11-05  0:03 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: dev, Ferruh Yigit

02/11/2018 19:38, Ferruh Yigit:
> On 11/1/2018 7:53 PM, Alejandro Lucero wrote:
> > [...]
> 
> Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> Fixing the deadlock issue, and build/per-patch build is good.

Applied, thanks

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 5/7] mem: modify error message for " Alejandro Lucero
@ 2018-11-05 10:01   ` Li, WenjieX A
  2018-11-05 10:13     ` Alejandro Lucero
  0 siblings, 1 reply; 20+ messages in thread
From: Li, WenjieX A @ 2018-11-05 10:01 UTC (permalink / raw)
  To: Alejandro Lucero, dev
  Cc: Yigit, Ferruh, Lin, Xueqin, Burakov, Anatoly, Li, WenjieX A

1. With GCC32, testpmd could not startup without '--iova-mode pa'.
./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
The output is:
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Some devices want iova as va but pa will be used because.. EAL: few device bound to UIO
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: wrong dma mask size 48 (Max: 31)
EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
error allocating rte services array
EAL: FATAL: rte_service_init() failed
EAL: rte_service_init() failed
PANIC in main():
Cannot init EAL
5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda) [0x56606fda]]
4: [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6) [0xf74d1276]]
3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21) [0x565fcee1]]
2: [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d) [0x565edc68]]
1: [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33) [0x5675f333]]
Aborted

2. With '--iova-mode pa', testpmd could startup.
3. With GCC64, there is no such issue.
Thanks!


-----Original Message-----
From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Alejandro Lucero
Sent: Friday, November 2, 2018 3:53 AM
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check

If the DMA mask check shows mapped memory out of the supported range specified by the DMA mask, nothing can be done but report the error and return. This can imply the app not being executed at all, or dynamic memory allocation being precluded once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA is currently VA and the user is root.

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
Tested-by: Li, WenjieX A <wenjiex.a.li@intel.com>
---
 lib/librte_eal/common/malloc_heap.c | 41 +++++++++++++++++++++++++----
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 4997c5ef5..c2c112aa6 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -321,13 +321,44 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
 		goto fail;
 	}
 
-	if (mcfg->dma_maskbits) {
-		if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+	/*
+	 * Once we have all the memseg lists configured, if there is a dma mask
+	 * set, check iova addresses are not out of range. Otherwise the device
+	 * setting the dma mask could have problems with the mapped memory.
+	 *
+	 * There are two situations when this can happen:
+	 *	1) memory initialization
+	 *	2) dynamic memory allocation
+	 *
+	 * For 1), an error when checking dma mask implies app can not be
+	 * executed. For 2) implies the new memory can not be added.
+	 */
+	if (mcfg->dma_maskbits &&
+	    rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+		/*
+		 * Currently this can only happen if IOMMU is enabled
+		 * and the address width supported by the IOMMU hw is
+		 * not enough for using the memory mapped IOVAs.
+		 *
+	 * If IOVA is VA, advise to try with '--iova-mode pa'
+		 * which could solve some situations when IOVA VA is not
+		 * really needed.
+		 */
+		RTE_LOG(ERR, EAL,
+			"%s(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask\n",
+			__func__);
+
+		/*
+		 * If IOVA is VA and it is possible to run with IOVA PA,
+		 * because the user is root, give advice for solving the
+		 * problem.
+		 */
+		if ((rte_eal_iova_mode() == RTE_IOVA_VA) &&
+		     rte_eal_using_phys_addrs())
 			RTE_LOG(ERR, EAL,
-				"%s(): couldn't allocate memory due to DMA mask\n",
+				"%s(): Please try initializing EAL with --iova-mode=pa parameter\n",
 				__func__);
-			goto fail;
-		}
+		goto fail;
 	}
 
 	/* add newly minted memsegs to malloc heap */
--
2.17.1

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-05 10:01   ` Li, WenjieX A
@ 2018-11-05 10:13     ` Alejandro Lucero
  2018-11-05 15:12       ` Burakov, Anatoly
  0 siblings, 1 reply; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-05 10:13 UTC (permalink / raw)
  To: wenjiex.a.li; +Cc: dev, Ferruh Yigit, xueqin.lin, Burakov, Anatoly

On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A <wenjiex.a.li@intel.com>
wrote:

> 1. With GCC32, testpmd could not startup without '--iova-mode pa'.
> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
> The output is:
> EAL: Detected 16 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Some devices want iova as va but pa will be used because.. EAL: few
> device bound to UIO
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: wrong dma mask size 48 (Max: 31)
> EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding
> limits of current DMA mask
> error allocating rte services array
> EAL: FATAL: rte_service_init() failed
> EAL: rte_service_init() failed
> PANIC in main():
> Cannot init EAL
> 5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda) [0x56606fda]]
> 4: [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6) [0xf74d1276]]
> 3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21) [0x565fcee1]]
> 2: [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d) [0x565edc68]]
> 1: [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33)
> [0x5675f333]]
> Aborted
>
> 2. With '--iova-mode pa', testpmd could startup.
> 3. With GCC64, there is no such issue.
> Thanks!
>
>
Does 32-bit support require an IOMMU? That would be a surprise. If there is
no IOMMU hardware, no DMA mask should be there at all.


> [...]

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-05 10:13     ` Alejandro Lucero
@ 2018-11-05 15:12       ` Burakov, Anatoly
  2018-11-05 15:33         ` Alejandro Lucero
  0 siblings, 1 reply; 20+ messages in thread
From: Burakov, Anatoly @ 2018-11-05 15:12 UTC (permalink / raw)
  To: Alejandro Lucero, wenjiex.a.li; +Cc: dev, Ferruh Yigit, xueqin.lin

On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A <wenjiex.a.li@intel.com>
> wrote:
> 
>> [...]
> Does 32-bit support require an IOMMU? That would be a surprise. If there is
> no IOMMU hardware, no DMA mask should be there at all.

IOMMU is supported on 32-bits, however limited the address space might 
be. Maybe limit IOMMU width to RTE_MIN(31, value) bits for everything on 
32-bit?

-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-05 15:12       ` Burakov, Anatoly
@ 2018-11-05 15:33         ` Alejandro Lucero
  2018-11-05 16:34           ` Burakov, Anatoly
  0 siblings, 1 reply; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-05 15:33 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A <wenjiex.a.li@intel.com>
> > wrote:
> >
> >> [...]
> > Does 32-bit support require an IOMMU? That would be a surprise. If
> > there is no IOMMU hardware, no DMA mask should be there at all.
>
> IOMMU is supported on 32-bits, however limited the address space might
> be. Maybe limit IOMMU width to RTE_MIN(31, value) bits for everything on
> 32-bit?
>
>
If IOMMU is supported in 32 bits, then the DMA mask check should not be
happening. AFAIK, IOMMU hardware addressing limitations are a problem
only in 64-bit systems. The worst situation I have heard of is 39 bits
for a virtualized IOMMU with QEMU.

I would prefer not to invoke rte_mem_set_dma_mask for 32-bit systems for
the Intel IOMMU case. The only other DMA mask client is the NFP PMD, and
we do not support 32-bit systems.



> --
> Thanks,
> Anatoly
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-05 15:33         ` Alejandro Lucero
@ 2018-11-05 16:34           ` Burakov, Anatoly
  2018-11-06  9:32             ` Alejandro Lucero
  0 siblings, 1 reply; 20+ messages in thread
From: Burakov, Anatoly @ 2018-11-05 16:34 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
> 
> 
> On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly 
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> 
>      > [...]
> 
> 
> If IOMMU is supported in 32 bits, then the DMA mask check should not be 
> happening. AFAIK, the IOMMU hardware addressing limitations is a problem 
> only in 64-bit systems. The worst situation I have heard of is 39 bits 
> for virtualized IOMMU with QEMU.
> 
> I would prefer not to invoke rte_mem_set_dma_mask for 32 bits system for 
> the Intel IOMMU case. The only other dma mask client is the NFP PMD and 
> we do not support 32 bits systems.
> 

I don't think not invoking DMA mask check is the right choice here. In 
practice it may be, but i'd rather the behavior to be "correct", if at 
all possible :) It is theoretically possible to have an IOMMU with an 
addressing limitation of, say, 30 bits (even though they don't exist in 
reality), so therefore our code should handle it, should it encounter 
one, and it should also handle the "proper" ones correctly (as in, treat 
them as 32-bit-limited instead of 39- or 48-bit-limited).

> 
>     -- 
>     Thanks,
>     Anatoly
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-05 16:34           ` Burakov, Anatoly
@ 2018-11-06  9:32             ` Alejandro Lucero
  2018-11-06 10:31               ` Burakov, Anatoly
  0 siblings, 1 reply; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-06  9:32 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> [...]
>
> I don't think not invoking DMA mask check is the right choice here. In
> practice it may be, but i'd rather the behavior to be "correct", if at
> all possible :) It is theoretically possible to have an IOMMU with an
> addressing limitation of, say, 30 bits (even though they don't exist in
> reality), so therefore our code should handle it, should it encounter
> one, and it should also handle the "proper" ones correctly (as in, treat
> them as 32-bit-limited instead of 39- or 48-bit-limited).
>
>
Fine.

The problem is the current sanity check on the dma mask width, which is
31 for 32-bit systems.
Should we just allow a single max dma width of 63? This covers the
possibility of 32-bit systems integrating an IOMMU designed for 64 bits. I
really doubt this is a real possibility on x86, although I can see it
being more likely in embedded systems where this sort of hardware
component integration happens.

>
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-06  9:32             ` Alejandro Lucero
@ 2018-11-06 10:31               ` Burakov, Anatoly
  2018-11-06 10:37                 ` Alejandro Lucero
  0 siblings, 1 reply; 20+ messages in thread
From: Burakov, Anatoly @ 2018-11-06 10:31 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On 06-Nov-18 9:32 AM, Alejandro Lucero wrote:
> 
> 
> On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly 
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> 
>     [...]
> 
> 
> Fine.
> 
> The problem is the current sanity check on the dma mask width, which
> is 31 for 32-bit systems.
> Should we just leave a single max dma width to 63? This covers the 
> possibility of 32 bit systems integrating an IOMMU designed for  64 
> bits. I really doubt this is a real possibility in x86, although I can 
> see it more likely in embedded systems where this sort of hardware 
> components integration happens.

Actually (and after a quick chat with Ferruh), is this even needed? IOVA 
addresses are independent of VA width; an IOVA can happily be bigger than 
32 bits if I understand things correctly. All of our IOVA addresses are 
always 64-bit throughout DPDK. I don't think this check is even valid.

> 
>      >
>      >     --
>      >     Thanks,
>      >     Anatoly
>      >
> 
> 
>     -- 
>     Thanks,
>     Anatoly
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-06 10:31               ` Burakov, Anatoly
@ 2018-11-06 10:37                 ` Alejandro Lucero
  2018-11-06 10:48                   ` Burakov, Anatoly
  0 siblings, 1 reply; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-06 10:37 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On Tue, Nov 6, 2018 at 10:31 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 06-Nov-18 9:32 AM, Alejandro Lucero wrote:
> >
> >
> > On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> >     On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
> >      >
> >      >
> >      > On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly
> >      > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>
> >     <mailto:anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com>>> wrote:
> >      >
> >      >     On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> >      >      > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A
> >      >     <wenjiex.a.li@intel.com <mailto:wenjiex.a.li@intel.com>
> >     <mailto:wenjiex.a.li@intel.com <mailto:wenjiex.a.li@intel.com>>>
> >      >      > wrote:
> >      >      >
> >      >      >> 1. With GCC32, testpmd could not startup without
> >     '--iova-mode pa'.
> >      >      >> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
> >      >      >> The output is:
> >      >      >> EAL: Detected 16 lcore(s)
> >      >      >> EAL: Detected 1 NUMA nodes
> >      >      >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >      >      >> EAL: Some devices want iova as va but pa will be used
> >     because..
> >      >     EAL: few
> >      >      >> device bound to UIO
> >      >      >> EAL: No free hugepages reported in hugepages-1048576kB
> >      >      >> EAL: Probing VFIO support...
> >      >      >> EAL: VFIO support initialized
> >      >      >> EAL: wrong dma mask size 48 (Max: 31)
> >      >      >> EAL: alloc_pages_on_heap(): couldn't allocate memory due
> >     to IOVA
> >      >     exceeding
> >      >      >> limits of current DMA mask
> >      >      >> error allocating rte services array
> >      >      >> EAL: FATAL: rte_service_init() failed
> >      >      >> EAL: rte_service_init() failed
> >      >      >> PANIC in main():
> >      >      >> Cannot init EAL
> >      >      >> 5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda)
> >     [0x56606fda]]
> >      >      >> 4: [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6)
> >      >     [0xf74d1276]]
> >      >      >> 3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21)
> >     [0x565fcee1]]
> >      >      >> 2:
> [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d)
> >      >     [0x565edc68]]
> >      >      >> 1:
> >     [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33)
> >      >      >> [0x5675f333]]
> >      >      >> Aborted
> >      >      >>
> >      >      >> 2. With '--iova-mode pa', testpmd could startup.
> >      >      >> 3. With GCC64, there is no such issue.
> >      >      >> Thanks!
> >      >      >>
> >      >      >>
> >      >      > Does 32 bits support require IOMMU? It would be a
> surprise. If
> >      >     there is no
> >      >      > IOMMU hardware, no dma mask should be there at all.
> >      >
> >      >     IOMMU is supported on 32-bits, however limited the address
> >     space might
> >      >     be. Maybe limit IOMMU width to RTE_MIN(31, value) bits for
> >      >     everything on
> >      >     32-bit?
> >      >
> >      >
> >      > If IOMMU is supported in 32 bits, then the DMA mask check should
> >     not be
> >      > happening. AFAIK, the IOMMU hardware addressing limitations is a
> >     problem
> >      > only in 64 bits systems. The worst situation I have head of is 39
> >     bits
> >      > for virtualized IOMMU with QEMU.
> >      >
> >      > I would prefer not to invoke rte_mem_set_dma_mask for 32 bits
> >     system for
> >      > the Intel IOMMU case. The only other dma mask client is the NFP
> >     PMD and
> >      > we do not support 32 bits systems.
> >      >
> >
> >     I don't think not invoking DMA mask check is the right choice here.
> In
> >     practice it may be, but i'd rather the behavior to be "correct", if
> at
> >     all possible :) It is theoretically possible to have an IOMMU with an
> >     addressing limitation of, say, 30 bits (even though they don't exist
> in
> >     reality), so therefore our code should handle it, should it encounter
> >     one, and it should also handle the "proper" ones correctly (as in,
> >     treat
> >     them as 32-bit-limited instead of 39- or 48-bit-limited).
> >
> >
> > Fine.
> >
> > The problem is the current sanity check about the dma mask width, what
> > is 31 for 32 bits systems.
> > Should we just leave a single max dma width to 63? This covers the
> > possibility of 32 bit systems integrating an IOMMU designed for  64
> > bits. I really doubt this is a real possibility in x86, although I can
> > see it more likely in embedded systems where this sort of hardware
> > components integration happens.
>
> Actually (and after a quick chat with Ferruh), is this even needed? IOVA
> addresses are independent from VA width, IOVA can happily be bigger than
> 32-bits if i understand things correctly. All of our IOVA addresses are
> always 64-bit throughout DPDK. I don't think this check is even valid.
>
>
Although iova_t is 64 bits, there should not be an IOVA higher than 32 bits
on a 32-bit system, although there could be exceptions like PAE (I'm old
enough to remember that option :-( ).

Anyway, the original idea of the dma mask sanity check on 32-bit systems was
that there should not be a dma mask above 32 bits, but I'm happy with
removing that sanity check for 32-bit systems. So, do you agree to just
leave the sanity check for a max width of 63 bits?


> >
> >      >
> >      >     --
> >      >     Thanks,
> >      >     Anatoly
> >      >
> >
> >
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-06 10:37                 ` Alejandro Lucero
@ 2018-11-06 10:48                   ` Burakov, Anatoly
  2018-11-06 12:55                     ` Alejandro Lucero
  0 siblings, 1 reply; 20+ messages in thread
From: Burakov, Anatoly @ 2018-11-06 10:48 UTC (permalink / raw)
  To: Alejandro Lucero; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On 06-Nov-18 10:37 AM, Alejandro Lucero wrote:
> 
> 
> On Tue, Nov 6, 2018 at 10:31 AM Burakov, Anatoly 
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> 
>     On 06-Nov-18 9:32 AM, Alejandro Lucero wrote:
>      >
>      >
>      > On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly
>      > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>
>     <mailto:anatoly.burakov@intel.com
>     <mailto:anatoly.burakov@intel.com>>> wrote:
>      >
>      >     On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
>      >      >
>      >      >
>      >      > On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly
>      >      > <anatoly.burakov@intel.com
>     <mailto:anatoly.burakov@intel.com> <mailto:anatoly.burakov@intel.com
>     <mailto:anatoly.burakov@intel.com>>
>      >     <mailto:anatoly.burakov@intel.com
>     <mailto:anatoly.burakov@intel.com>
>      >     <mailto:anatoly.burakov@intel.com
>     <mailto:anatoly.burakov@intel.com>>>> wrote:
>      >      >
>      >      >     On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
>      >      >      > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A
>      >      >     <wenjiex.a.li@intel.com
>     <mailto:wenjiex.a.li@intel.com> <mailto:wenjiex.a.li@intel.com
>     <mailto:wenjiex.a.li@intel.com>>
>      >     <mailto:wenjiex.a.li@intel.com
>     <mailto:wenjiex.a.li@intel.com> <mailto:wenjiex.a.li@intel.com
>     <mailto:wenjiex.a.li@intel.com>>>>
>      >      >      > wrote:
>      >      >      >
>      >      >      >> 1. With GCC32, testpmd could not startup without
>      >     '--iova-mode pa'.
>      >      >      >> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
>      >      >      >> The output is:
>      >      >      >> EAL: Detected 16 lcore(s)
>      >      >      >> EAL: Detected 1 NUMA nodes
>      >      >      >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>      >      >      >> EAL: Some devices want iova as va but pa will be used
>      >     because..
>      >      >     EAL: few
>      >      >      >> device bound to UIO
>      >      >      >> EAL: No free hugepages reported in hugepages-1048576kB
>      >      >      >> EAL: Probing VFIO support...
>      >      >      >> EAL: VFIO support initialized
>      >      >      >> EAL: wrong dma mask size 48 (Max: 31)
>      >      >      >> EAL: alloc_pages_on_heap(): couldn't allocate
>     memory due
>      >     to IOVA
>      >      >     exceeding
>      >      >      >> limits of current DMA mask
>      >      >      >> error allocating rte services array
>      >      >      >> EAL: FATAL: rte_service_init() failed
>      >      >      >> EAL: rte_service_init() failed
>      >      >      >> PANIC in main():
>      >      >      >> Cannot init EAL
>      >      >      >> 5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda)
>      >     [0x56606fda]]
>      >      >      >> 4:
>     [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6)
>      >      >     [0xf74d1276]]
>      >      >      >> 3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21)
>      >     [0x565fcee1]]
>      >      >      >> 2:
>     [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d)
>      >      >     [0x565edc68]]
>      >      >      >> 1:
>      >     [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33)
>      >      >      >> [0x5675f333]]
>      >      >      >> Aborted
>      >      >      >>
>      >      >      >> 2. With '--iova-mode pa', testpmd could startup.
>      >      >      >> 3. With GCC64, there is no such issue.
>      >      >      >> Thanks!
>      >      >      >>
>      >      >      >>
>      >      >      > Does 32 bits support require IOMMU? It would be a
>     surprise. If
>      >      >     there is no
>      >      >      > IOMMU hardware, no dma mask should be there at all.
>      >      >
>      >      >     IOMMU is supported on 32-bits, however limited the address
>      >     space might
>      >      >     be. Maybe limit IOMMU width to RTE_MIN(31, value) bits for
>      >      >     everything on
>      >      >     32-bit?
>      >      >
>      >      >
>      >      > If IOMMU is supported in 32 bits, then the DMA mask check
>     should
>      >     not be
>      >      > happening. AFAIK, the IOMMU hardware addressing
>     limitations is a
>      >     problem
>      >      > only in 64 bits systems. The worst situation I have head
>     of is 39
>      >     bits
>      >      > for virtualized IOMMU with QEMU.
>      >      >
>      >      > I would prefer not to invoke rte_mem_set_dma_mask for 32 bits
>      >     system for
>      >      > the Intel IOMMU case. The only other dma mask client is
>     the NFP
>      >     PMD and
>      >      > we do not support 32 bits systems.
>      >      >
>      >
>      >     I don't think not invoking DMA mask check is the right choice
>     here. In
>      >     practice it may be, but i'd rather the behavior to be
>     "correct", if at
>      >     all possible :) It is theoretically possible to have an IOMMU
>     with an
>      >     addressing limitation of, say, 30 bits (even though they
>     don't exist in
>      >     reality), so therefore our code should handle it, should it
>     encounter
>      >     one, and it should also handle the "proper" ones correctly
>     (as in,
>      >     treat
>      >     them as 32-bit-limited instead of 39- or 48-bit-limited).
>      >
>      >
>      > Fine.
>      >
>      > The problem is the current sanity check about the dma mask width,
>     what
>      > is 31 for 32 bits systems.
>      > Should we just leave a single max dma width to 63? This covers the
>      > possibility of 32 bit systems integrating an IOMMU designed for  64
>      > bits. I really doubt this is a real possibility in x86, although
>     I can
>      > see it more likely in embedded systems where this sort of hardware
>      > components integration happens.
> 
>     Actually (and after a quick chat with Ferruh), is this even needed?
>     IOVA
>     addresses are independent from VA width, IOVA can happily be bigger
>     than
>     32-bits if i understand things correctly. All of our IOVA addresses are
>     always 64-bit throughout DPDK. I don't think this check is even valid.
> 
> 
> Although iova_t is 64 bits, there should not be a IOVA higher than 32 
> bits, although there could be exceptions like PAE extensions (I'm old 
> enough for remembering that option :-( ).
> 
> Anyway, the original idea of dma mask sanity check is 32 bits systems 
> was assuming there should not be a dma mask above 32 bits, but I'm happy 
> with removing that sanity check for 32 bits systems. So, do you agree to 
> just leave the sanity check for a max width of 63 bits?
> 

So, the issue with 32-bit here is that for this check to make sense, the 
*kernel* must be 32-bit - not just userspace. IOW, this check should 
*not* be present in a 32-bit application running on a 64-bit kernel.

So IMO, unless you know of a way to easily check if 1) kernel is 32-bit, 
and 2) PAE is enabled/disabled (and by easily i mean using something 
other than reading sysfs etc.), i don't think this check should be in 
there :)

>      >
>      >      >
>      >      >     --
>      >      >     Thanks,
>      >      >     Anatoly
>      >      >
>      >
>      >
>      >     --
>      >     Thanks,
>      >     Anatoly
>      >
> 
> 
>     -- 
>     Thanks,
>     Anatoly
> 


-- 
Thanks,
Anatoly

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
  2018-11-06 10:48                   ` Burakov, Anatoly
@ 2018-11-06 12:55                     ` Alejandro Lucero
  0 siblings, 0 replies; 20+ messages in thread
From: Alejandro Lucero @ 2018-11-06 12:55 UTC (permalink / raw)
  To: Burakov, Anatoly; +Cc: wenjiex.a.li, dev, Ferruh Yigit, xueqin.lin

On Tue, Nov 6, 2018 at 10:48 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:

> On 06-Nov-18 10:37 AM, Alejandro Lucero wrote:
> >
> >
> > On Tue, Nov 6, 2018 at 10:31 AM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> >     On 06-Nov-18 9:32 AM, Alejandro Lucero wrote:
> >      >
> >      >
> >      > On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly
> >      > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>
> >     <mailto:anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com>>> wrote:
> >      >
> >      >     On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
> >      >      >
> >      >      >
> >      >      > On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly
> >      >      > <anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com> <mailto:anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com>>
> >      >     <mailto:anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com>
> >      >     <mailto:anatoly.burakov@intel.com
> >     <mailto:anatoly.burakov@intel.com>>>> wrote:
> >      >      >
> >      >      >     On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> >      >      >      > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A
> >      >      >     <wenjiex.a.li@intel.com
> >     <mailto:wenjiex.a.li@intel.com> <mailto:wenjiex.a.li@intel.com
> >     <mailto:wenjiex.a.li@intel.com>>
> >      >     <mailto:wenjiex.a.li@intel.com
> >     <mailto:wenjiex.a.li@intel.com> <mailto:wenjiex.a.li@intel.com
> >     <mailto:wenjiex.a.li@intel.com>>>>
> >      >      >      > wrote:
> >      >      >      >
> >      >      >      >> 1. With GCC32, testpmd could not startup without
> >      >     '--iova-mode pa'.
> >      >      >      >> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4
> -- -i
> >      >      >      >> The output is:
> >      >      >      >> EAL: Detected 16 lcore(s)
> >      >      >      >> EAL: Detected 1 NUMA nodes
> >      >      >      >> EAL: Multi-process socket
> /var/run/dpdk/rte/mp_socket
> >      >      >      >> EAL: Some devices want iova as va but pa will be
> used
> >      >     because..
> >      >      >     EAL: few
> >      >      >      >> device bound to UIO
> >      >      >      >> EAL: No free hugepages reported in
> hugepages-1048576kB
> >      >      >      >> EAL: Probing VFIO support...
> >      >      >      >> EAL: VFIO support initialized
> >      >      >      >> EAL: wrong dma mask size 48 (Max: 31)
> >      >      >      >> EAL: alloc_pages_on_heap(): couldn't allocate
> >     memory due
> >      >     to IOVA
> >      >      >     exceeding
> >      >      >      >> limits of current DMA mask
> >      >      >      >> error allocating rte services array
> >      >      >      >> EAL: FATAL: rte_service_init() failed
> >      >      >      >> EAL: rte_service_init() failed
> >      >      >      >> PANIC in main():
> >      >      >      >> Cannot init EAL
> >      >      >      >> 5:
> [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda)
> >      >     [0x56606fda]]
> >      >      >      >> 4:
> >     [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6)
> >      >      >     [0xf74d1276]]
> >      >      >      >> 3:
> [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21)
> >      >     [0x565fcee1]]
> >      >      >      >> 2:
> >     [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d)
> >      >      >     [0x565edc68]]
> >      >      >      >> 1:
> >      >     [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33)
> >      >      >      >> [0x5675f333]]
> >      >      >      >> Aborted
> >      >      >      >>
> >      >      >      >> 2. With '--iova-mode pa', testpmd could startup.
> >      >      >      >> 3. With GCC64, there is no such issue.
> >      >      >      >> Thanks!
> >      >      >      >>
> >      >      >      >>
> >      >      >      > Does 32 bits support require IOMMU? It would be a
> >     surprise. If
> >      >      >     there is no
> >      >      >      > IOMMU hardware, no dma mask should be there at all.
> >      >      >
> >      >      >     IOMMU is supported on 32-bits, however limited the
> address
> >      >     space might
> >      >      >     be. Maybe limit IOMMU width to RTE_MIN(31, value) bits
> for
> >      >      >     everything on
> >      >      >     32-bit?
> >      >      >
> >      >      >
> >      >      > If IOMMU is supported in 32 bits, then the DMA mask check
> >     should
> >      >     not be
> >      >      > happening. AFAIK, the IOMMU hardware addressing
> >     limitations is a
> >      >     problem
> >      >      > only in 64 bits systems. The worst situation I have head
> >     of is 39
> >      >     bits
> >      >      > for virtualized IOMMU with QEMU.
> >      >      >
> >      >      > I would prefer not to invoke rte_mem_set_dma_mask for 32
> bits
> >      >     system for
> >      >      > the Intel IOMMU case. The only other dma mask client is
> >     the NFP
> >      >     PMD and
> >      >      > we do not support 32 bits systems.
> >      >      >
> >      >
> >      >     I don't think not invoking DMA mask check is the right choice
> >     here. In
> >      >     practice it may be, but i'd rather the behavior to be
> >     "correct", if at
> >      >     all possible :) It is theoretically possible to have an IOMMU
> >     with an
> >      >     addressing limitation of, say, 30 bits (even though they
> >     don't exist in
> >      >     reality), so therefore our code should handle it, should it
> >     encounter
> >      >     one, and it should also handle the "proper" ones correctly
> >     (as in,
> >      >     treat
> >      >     them as 32-bit-limited instead of 39- or 48-bit-limited).
> >      >
> >      >
> >      > Fine.
> >      >
> >      > The problem is the current sanity check about the dma mask width,
> >     what
> >      > is 31 for 32 bits systems.
> >      > Should we just leave a single max dma width to 63? This covers the
> >      > possibility of 32 bit systems integrating an IOMMU designed for
> 64
> >      > bits. I really doubt this is a real possibility in x86, although
> >     I can
> >      > see it more likely in embedded systems where this sort of hardware
> >      > components integration happens.
> >
> >     Actually (and after a quick chat with Ferruh), is this even needed?
> >     IOVA
> >     addresses are independent from VA width, IOVA can happily be bigger
> >     than
> >     32-bits if i understand things correctly. All of our IOVA addresses
> are
> >     always 64-bit throughout DPDK. I don't think this check is even
> valid.
> >
> >
> > Although iova_t is 64 bits, there should not be a IOVA higher than 32
> > bits, although there could be exceptions like PAE extensions (I'm old
> > enough for remembering that option :-( ).
> >
> > Anyway, the original idea of dma mask sanity check is 32 bits systems
> > was assuming there should not be a dma mask above 32 bits, but I'm happy
> > with removing that sanity check for 32 bits systems. So, do you agree to
> > just leave the sanity check for a max width of 63 bits?
> >
>
> So, the issue with 32-bit here is that for this check to make sense, the
> *kernel* must be 32-bit - not just userspace. IOW, this check should
> *not* be present in a 32-bit application running on a 64-bit kernel.
>
> So IMO, unless you know of a way to easily check if 1) kernel is 32-bit,
> and 2) PAE is enabled/disabled (and by easily i mean using something
> other than reading sysfs etc.), i don't think this check should be in
> there :)
>

Ok then. If there are no other opinions, I will remove the sanity check for
32-bit systems.

Thanks


>
> >      >
> >      >      >
> >      >      >     --
> >      >      >     Thanks,
> >      >      >     Anatoly
> >      >      >
> >      >
> >      >
> >      >     --
> >      >     Thanks,
> >      >     Anatoly
> >      >
> >
> >
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2018-11-06 12:55 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-01 19:53 [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 1/7] mem: fix call to " Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 2/7] mem: use proper prefix Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 3/7] mem: add function for setting DMA mask Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 5/7] mem: modify error message for " Alejandro Lucero
2018-11-05 10:01   ` Li, WenjieX A
2018-11-05 10:13     ` Alejandro Lucero
2018-11-05 15:12       ` Burakov, Anatoly
2018-11-05 15:33         ` Alejandro Lucero
2018-11-05 16:34           ` Burakov, Anatoly
2018-11-06  9:32             ` Alejandro Lucero
2018-11-06 10:31               ` Burakov, Anatoly
2018-11-06 10:37                 ` Alejandro Lucero
2018-11-06 10:48                   ` Burakov, Anatoly
2018-11-06 12:55                     ` Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 6/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
2018-11-01 19:53 ` [dpdk-dev] [PATCH v2 7/7] mem: add thread unsafe version for checking DMA mask Alejandro Lucero
2018-11-02 18:38 ` [dpdk-dev] [PATCH v2 0/7] fix DMA mask check Ferruh Yigit
2018-11-05  0:03   ` Thomas Monjalon

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).