* [dpdk-dev] [PATCH 0/7] fix DMA mask check
@ 2018-10-31 17:29 Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 1/7] mem: fix call to " Alejandro Lucero
` (9 more replies)
0 siblings, 10 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
A patchset sent earlier introducing DMA mask checks has several critical
issues which preclude apps from executing. The patchset was reviewed and
finally accepted after three versions. Obviously it did not go through
proper testing, which can be explained, at least on my side, by the big
changes to the memory initialization code over the last months. It turns
out the patchset did work with legacy memory, and I'm afraid that was
mainly what my testing covered.
This patchset should solve the main problems reported:
- deadlock during initialization
- segmentation fault with secondary processes
To solve the deadlock, a new API is introduced:
rte_mem_check_dma_mask_safe/unsafe
making the previous rte_mem_check_dma_mask the internal function those new
functions end up calling. A boolean parameter selects whether rte_memseg_walk
is invoked thread-safe or thread-unsafe. This second option is needed to
avoid the deadlock.
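As a rough sketch, the intended usage split looks like this (the real code
is in patch 6; the error handling below is illustrative only):

    /* driver/PMD code: the safe variant takes memory_hotplug_lock
     * through rte_memseg_walk */
    if (rte_mem_check_dma_mask_safe(40))
            return -ENODEV;   /* device 40-bit limit not met */

    /* internal EAL memory code already holds memory_hotplug_lock as a
     * writer, so it must use the thread-unsafe variant */
    if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits))
            goto fail;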
For the secondary processes problem, the call to check the DMA mask is
removed from code executed before memory initialization. Instead, a new
API function, rte_mem_set_dma_mask, is introduced and used in those cases.
The DMA mask check is then done once memory initialization has completed.
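Roughly, the new flow is the following (sketch only; patches 3 and 4
contain the actual code):

    /* before memory init, e.g. in the PCI bus IOMMU detection code:
     * just record the addressing limit, no memseg walk yet */
    rte_mem_set_dma_mask(mgaw);

    /* later, once memory is initialized or when new memory is added: */
    if (mcfg->dma_maskbits &&
            rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
            /* report the error, advising '--iova-mode pa' when IOVA VA
             * is the current mode */
            goto fail;
    }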
This last change implies the IOVA mode can no longer be chosen based on
IOMMU hardware limitations, so IOVA VA is assumed to be possible. If the
DMA mask check reports a problem after memory initialization, the error
message now advises trying the --iova-mode option set to pa.
The patchset also adds the DMA mask check for legacy memory and for the
no-hugepage option.
Finally, the whole DMA mask API has been updated to use the same prefix
as other EAL memory code.
An initial version of this patchset has been tested by the Intel DPDK
validation team and it seems to solve all the problems reported. This
final patchset has the same functionality with minor changes. I have
successfully tested it on my limited testbench.
Alejandro Lucero (7):
mem: fix call to DMA mask check
mem: use proper prefix
mem: add function for setting DMA mask
bus/pci: avoid call to DMA mask check
mem: modify error message for DMA mask check
mem: add safe and unsafe versions for checking DMA mask
eal/mem: use DMA mask check for legacy memory
doc/guides/rel_notes/release_18_11.rst | 2 +-
drivers/bus/pci/linux/pci.c | 11 ++++++-
drivers/net/nfp/nfp_net.c | 2 +-
lib/librte_eal/common/eal_common_memory.c | 36 +++++++++++++++++---
lib/librte_eal/common/include/rte_memory.h | 37 +++++++++++++++++++--
lib/librte_eal/common/malloc_heap.c | 38 ++++++++++++++++++----
lib/librte_eal/linuxapp/eal/eal_memory.c | 17 ++++++++++
lib/librte_eal/rte_eal_version.map | 4 ++-
8 files changed, 131 insertions(+), 16 deletions(-)
--
2.17.1
* [dpdk-dev] [PATCH 1/7] mem: fix call to DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:11 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 2/7] mem: use proper prefix Alejandro Lucero
` (8 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
The parameter needs to be the maskbits, not the mask itself.
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
lib/librte_eal/common/malloc_heap.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 1973b6e6e..0adab62ae 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -323,8 +323,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
}
if (mcfg->dma_maskbits) {
- mask = ~((1ULL << mcfg->dma_maskbits) - 1);
- if (rte_eal_check_dma_mask(mask)) {
+ if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) {
RTE_LOG(ERR, EAL,
"%s(): couldn't allocate memory due to DMA mask\n",
__func__);
--
2.17.1
* [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 1/7] mem: fix call to " Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:08 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask Alejandro Lucero
` (7 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
The current name, rte_eal_check_dma_mask, does not follow the naming
used in the rest of the file.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
doc/guides/rel_notes/release_18_11.rst | 2 +-
drivers/bus/pci/linux/pci.c | 2 +-
drivers/net/nfp/nfp_net.c | 2 +-
lib/librte_eal/common/eal_common_memory.c | 4 ++--
lib/librte_eal/common/include/rte_memory.h | 2 +-
lib/librte_eal/common/malloc_heap.c | 2 +-
lib/librte_eal/rte_eal_version.map | 2 +-
7 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 376128f68..11a27405c 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -63,7 +63,7 @@ New Features
* **Added check for ensuring allocated memory addressable by devices.**
Some devices can have addressing limitations so a new function,
- ``rte_eal_check_dma_mask``, has been added for checking allocated memory is
+ ``rte_mem_check_dma_mask``, has been added for checking allocated memory is
not out of the device range. Because now memory can be dynamically allocated
after initialization, a dma mask is kept and any new allocated memory will be
checked out against that dma mask and rejected if out of range. If more than
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 45c24ef7e..0a81e063b 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -590,7 +590,7 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
- return rte_eal_check_dma_mask(mgaw) == 0 ? true : false;
+ return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
}
#elif defined(RTE_ARCH_PPC_64)
static bool
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index bab1f68eb..54c6da924 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -2703,7 +2703,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
/* NFP can not handle DMA addresses requiring more than 40 bits */
- if (rte_eal_check_dma_mask(40)) {
+ if (rte_mem_check_dma_mask(40)) {
RTE_LOG(ERR, PMD, "device %s can not be used:",
pci_dev->device.name);
RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n");
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 12dcedf5c..e0f08f39a 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -49,7 +49,7 @@ static uint64_t system_page_sz;
* Current known limitations are 39 or 40 bits. Setting the starting address
* at 4GB implies there are 508GB or 1020GB for mapping the available
* hugepages. This is likely enough for most systems, although a device with
- * addressing limitations should call rte_eal_check_dma_mask for ensuring all
+ * addressing limitations should call rte_mem_check_dma_mask for ensuring all
* memory is within supported range.
*/
static uint64_t baseaddr = 0x100000000;
@@ -447,7 +447,7 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
/* check memseg iovas are within the required range based on dma mask */
int __rte_experimental
-rte_eal_check_dma_mask(uint8_t maskbits)
+rte_mem_check_dma_mask(uint8_t maskbits)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
uint64_t mask;
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index ce9370582..ad3f3cfb0 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -464,7 +464,7 @@ unsigned rte_memory_get_nchannel(void);
unsigned rte_memory_get_nrank(void);
/* check memsegs iovas are within a range based on dma mask */
-int __rte_experimental rte_eal_check_dma_mask(uint8_t maskbits);
+int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
/**
* Drivers based on uio will not load unless physical
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 0adab62ae..7d423089d 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -323,7 +323,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
}
if (mcfg->dma_maskbits) {
- if (rte_eal_check_dma_mask(mcfg->dma_maskbits)) {
+ if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
RTE_LOG(ERR, EAL,
"%s(): couldn't allocate memory due to DMA mask\n",
__func__);
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 04f624246..ef8126a97 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -295,7 +295,7 @@ EXPERIMENTAL {
rte_devargs_parsef;
rte_devargs_remove;
rte_devargs_type_count;
- rte_eal_check_dma_mask;
+ rte_mem_check_dma_mask;
rte_eal_cleanup;
rte_fbarray_attach;
rte_fbarray_destroy;
--
2.17.1
* [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 1/7] mem: fix call to " Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 2/7] mem: use proper prefix Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:11 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
` (6 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
This patch adds the possibility of setting a DMA mask to be applied
once memory initialization is done.
This is currently needed when the IOVA mode is set by PCI-related
code and an x86 IOMMU hardware unit is present. Current code calls
rte_mem_check_dma_mask, but it is wrong to do so at that point
because memory has not been initialized yet.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
lib/librte_eal/rte_eal_version.map | 1 +
3 files changed, 21 insertions(+)
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index e0f08f39a..24b72fcb0 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
return 0;
}
+/* set dma mask to use when memory initialization is done */
+void __rte_experimental
+rte_mem_set_dma_mask(uint8_t maskbits)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+
+ mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
+ RTE_MIN(mcfg->dma_maskbits, maskbits);
+}
+
/* return the number of memory channels */
unsigned rte_memory_get_nchannel(void)
{
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index ad3f3cfb0..eff028db1 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
/* check memsegs iovas are within a range based on dma mask */
int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
+/**
+ * * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Set dma mask to use once memory initialization is done.
+ * Previous function rte_mem_check_dma_mask can not be used
+ * safely until memory has been initialized.
+ */
+void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
+
/**
* Drivers based on uio will not load unless physical
* addresses are obtainable. It is only possible to get
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index ef8126a97..ae24b5c73 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -296,6 +296,7 @@ EXPERIMENTAL {
rte_devargs_remove;
rte_devargs_type_count;
rte_mem_check_dma_mask;
+ rte_mem_set_dma_mask;
rte_eal_cleanup;
rte_fbarray_attach;
rte_fbarray_destroy;
--
2.17.1
* [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (2 preceding siblings ...)
2018-10-31 17:29 ` [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:12 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 5/7] mem: modify error message for " Alejandro Lucero
` (5 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
Calling rte_mem_check_dma_mask when memory has not been initialized
yet is wrong. This patch uses rte_mem_set_dma_mask instead.
Once memory initialization is done, the DMA mask that was set will be
used for checking that mapped memory is within the specified mask.
Fixes: fe822eb8c565 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
drivers/bus/pci/linux/pci.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
index 0a81e063b..d87384c72 100644
--- a/drivers/bus/pci/linux/pci.c
+++ b/drivers/bus/pci/linux/pci.c
@@ -590,7 +590,16 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
- return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
+ /*
+ * Assuming there is no limitation by now. We can not know at this point
+ * because the memory has not been initialized yet. Setting the dma mask
+ * will force a check once memory initialization is done. We can not do
+ * a fallback to IOVA PA now, but if the dma check fails, the error
+ * message should advice for using '--iova-mode pa' if IOVA VA is the
+ * current mode.
+ */
+ rte_mem_set_dma_mask(mgaw);
+ return true;
}
#elif defined(RTE_ARCH_PPC_64)
static bool
--
2.17.1
* [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (3 preceding siblings ...)
2018-10-31 17:29 ` [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:29 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask Alejandro Lucero
` (4 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
If the DMA mask check shows mapped memory outside the supported range
specified by the DMA mask, nothing can be done but report the error
and return. This can imply the app not being executed at all, or
dynamic memory allocation being precluded once the app is running.
In any case, we can advise the user to force IOVA as PA if IOVA is
currently VA and the user is root.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
lib/librte_eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++----
1 file changed, 31 insertions(+), 4 deletions(-)
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 7d423089d..711622f19 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -5,8 +5,10 @@
#include <stddef.h>
#include <stdlib.h>
#include <stdio.h>
+#include <unistd.h>
#include <stdarg.h>
#include <errno.h>
+#include <sys/types.h>
#include <sys/queue.h>
#include <rte_memory.h>
@@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
size_t alloc_sz;
int allocd_pages;
void *ret, *map_addr;
- uint64_t mask;
alloc_sz = (size_t)pg_sz * n_segs;
@@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
goto fail;
}
+ /* Once we have all the memseg lists configured, if there is a dma mask
+ * set, check iova addresses are not out of range. Otherwise the device
+ * setting the dma mask could have problems with the mapped memory.
+ *
+ * There are two situations when this can happen:
+ * 1) memory initialization
+ * 2) dynamic memory allocation
+ *
+ * For 1), an error when checking dma mask implies app can not be
+ * executed. For 2) implies the new memory can not be added.
+ */
if (mcfg->dma_maskbits) {
if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
- RTE_LOG(ERR, EAL,
- "%s(): couldn't allocate memory due to DMA mask\n",
- __func__);
+ /* Currently this can only happen if IOMMU is enabled
+ * with RTE_ARCH_X86. It is not safe to use this memory
+ * so returning an error here.
+ *
+ * If IOVA is VA, advice to try with '--iova-mode pa'
+ * which could solve some situations when IOVA VA is not
+ * really needed.
+ */
+ uid_t user = getuid();
+ if ((rte_eal_iova_mode() == RTE_IOVA_VA) && user == 0)
+ RTE_LOG(ERR, EAL,
+ "%s(): couldn't allocate memory due to DMA mask.\n"
+ "Try with 'iova-mode=pa'\n",
+ __func__);
+ else
+ RTE_LOG(ERR, EAL,
+ "%s(): couldn't allocate memory due to DMA mask\n",
+ __func__);
goto fail;
}
}
--
2.17.1
* [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (4 preceding siblings ...)
2018-10-31 17:29 ` [dpdk-dev] [PATCH 5/7] mem: modify error message for " Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:38 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
` (3 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
During memory initialization, calling rte_mem_check_dma_mask
leads to a deadlock because memory_hotplug_lock is already held by a
writer, the code currently executing, while rte_memseg_walk
tries to take it as a reader.
This patch adds safe and unsafe versions that invoke the final
function specifying whether memory_hotplug_lock needs to be
acquired (the safe version) or not (the unsafe one).
PMDs should use the safe version; only internal EAL memory
code should use the unsafe one.
Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
drivers/net/nfp/nfp_net.c | 2 +-
lib/librte_eal/common/eal_common_memory.c | 24 +++++++++++++++---
lib/librte_eal/common/include/rte_memory.h | 29 +++++++++++++++++++---
lib/librte_eal/common/malloc_heap.c | 2 +-
lib/librte_eal/rte_eal_version.map | 3 ++-
5 files changed, 51 insertions(+), 9 deletions(-)
diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 54c6da924..72c2d3cbb 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -2703,7 +2703,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
/* NFP can not handle DMA addresses requiring more than 40 bits */
- if (rte_mem_check_dma_mask(40)) {
+ if (rte_mem_check_dma_mask_safe(40)) {
RTE_LOG(ERR, PMD, "device %s can not be used:",
pci_dev->device.name);
RTE_LOG(ERR, PMD, "\trestricted dma mask to 40 bits!\n");
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 24b72fcb0..2eb3eb48a 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -446,11 +446,12 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
#endif
/* check memseg iovas are within the required range based on dma mask */
-int __rte_experimental
-rte_mem_check_dma_mask(uint8_t maskbits)
+static int __rte_experimental
+rte_mem_check_dma_mask(uint8_t maskbits, bool safe)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
uint64_t mask;
+ int ret;
/* sanity check */
if (maskbits > MAX_DMA_MASK_BITS) {
@@ -462,7 +463,12 @@ rte_mem_check_dma_mask(uint8_t maskbits)
/* create dma mask */
mask = ~((1ULL << maskbits) - 1);
- if (rte_memseg_walk(check_iova, &mask))
+ if (safe)
+ ret = rte_memseg_walk(check_iova, &mask);
+ else
+ ret = rte_memseg_walk_thread_unsafe(check_iova, &mask);
+
+ if (ret)
/*
* Dma mask precludes hugepage usage.
* This device can not be used and we do not need to keep
@@ -480,6 +486,18 @@ rte_mem_check_dma_mask(uint8_t maskbits)
return 0;
}
+int __rte_experimental
+rte_mem_check_dma_mask_safe(uint8_t maskbits)
+{
+ return rte_mem_check_dma_mask(maskbits, true);
+}
+
+int __rte_experimental
+rte_mem_check_dma_mask_unsafe(uint8_t maskbits)
+{
+ return rte_mem_check_dma_mask(maskbits, false);
+}
+
/* set dma mask to use when memory initialization is done */
void __rte_experimental
rte_mem_set_dma_mask(uint8_t maskbits)
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index eff028db1..187a3c668 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -463,15 +463,38 @@ unsigned rte_memory_get_nchannel(void);
*/
unsigned rte_memory_get_nrank(void);
-/* check memsegs iovas are within a range based on dma mask */
-int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
+/**
+ * * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check memsegs iovas are within a range based on dma mask.
+ *
+ * @param maskbits
+ * Address width to check against.
+ */
+int __rte_experimental rte_mem_check_dma_mask_safe(uint8_t maskbits);
+
+/**
+ * * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Check memsegs iovas are within a range based on dma mask without acquiring
+ * memory_hotplug_lock first.
+ *
+ * This function is just for EAL core memory internal use. Drivers should
+ * use the previous safe one.
+ *
+ * @param maskbits
+ * Address width to check against.
+ */
+int __rte_experimental rte_mem_check_dma_mask_unsafe(uint8_t maskbits);
/**
* * @warning
* @b EXPERIMENTAL: this API may change without prior notice
*
* Set dma mask to use once memory initialization is done.
- * Previous function rte_mem_check_dma_mask can not be used
+ * Previous functions rte_mem_check_dma_mask_safe/unsafe can not be used
* safely until memory has been initialized.
*/
void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 711622f19..dd8b983e7 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -335,7 +335,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
* executed. For 2) implies the new memory can not be added.
*/
if (mcfg->dma_maskbits) {
- if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
+ if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
/* Currently this can only happen if IOMMU is enabled
* with RTE_ARCH_X86. It is not safe to use this memory
* so returning an error here.
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index ae24b5c73..f863903b6 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -296,7 +296,8 @@ EXPERIMENTAL {
rte_devargs_remove;
rte_devargs_type_count;
rte_mem_check_dma_mask;
- rte_mem_set_dma_mask;
+ rte_mem_set_dma_mask_safe;
+ rte_mem_set_dma_mask_unsafe;
rte_eal_cleanup;
rte_fbarray_attach;
rte_fbarray_destroy;
--
2.17.1
* [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (5 preceding siblings ...)
2018-10-31 17:29 ` [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask Alejandro Lucero
@ 2018-10-31 17:29 ` Alejandro Lucero
2018-11-01 10:40 ` Burakov, Anatoly
2018-10-31 17:47 ` [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (2 subsequent siblings)
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:29 UTC (permalink / raw)
To: dev
If a device reports addressing limitations through a DMA mask,
the IOVAs for mapped memory need to be checked to ensure
correct functionality.
Previous patches introduced this DMA check for the main memory code
currently being used, but other options, such as legacy memory and the
no-hugepage one, need to be considered as well.
This patch adds the DMA check for those cases.
Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index fce86fda6..2a3a8c7a3 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
addr = RTE_PTR_ADD(addr, (size_t)page_sz);
}
+ if (mcfg->dma_maskbits) {
+ if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+ RTE_LOG(ERR, EAL,
+ "%s(): couldn't allocate memory due to DMA mask\n",
+ __func__);
+ goto fail;
+ }
+ }
return 0;
}
@@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
rte_fbarray_destroy(&msl->memseg_arr);
}
+ if (mcfg->dma_maskbits) {
+ if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
+ RTE_LOG(ERR, EAL,
+ "%s(): couldn't allocate memory due to DMA mask\n",
+ __func__);
+ goto fail;
+ }
+ }
+
return 0;
fail:
--
2.17.1
* Re: [dpdk-dev] [PATCH 0/7] fix DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (6 preceding siblings ...)
2018-10-31 17:29 ` [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
@ 2018-10-31 17:47 ` Alejandro Lucero
2018-11-02 5:52 ` Hyong Youb Kim
2018-11-01 10:13 ` Mattias Rönnblom
2018-11-01 17:28 ` Ferruh Yigit
9 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-10-31 17:47 UTC (permalink / raw)
To: dev
On Wed, Oct 31, 2018 at 5:29 PM Alejandro Lucero <
alejandro.lucero@netronome.com> wrote:
> A patchset sent introducing DMA mask checks has several critical
> issues precluding apps to execute. The patchset was reviewed and
> finally accepted after three versions. Obviously it did not go
> through the proper testing what can be explained, at least from my
> side, due to the big changes to the memory initialization code these
> last months. It turns out the patchset did work with legacy memory
> and I'm afraid that was mainly my testing.
>
> This patchset should solve the main problems reported:
>
> - deadlock during initialization
> - segmentation fault with secondary processes
>
> For solving the deadlock, a new API is introduced:
>
> rte_mem_check_dma_mask_safe/unsafe
>
> making the previous rte_mem_check_dma_mask the one those new functions
> end calling. A boolean param is used for calling rte_memseg_walk thread
> safe or thread unsafe. This second option is needed for avoiding the
> deadlock.
>
> For the secondary processes problem, the call to check the dma mask is
> avoided from code being executed before the memory initialization.
> Instead, a new API function, rte_mem_set_dma_mask is introduced, which
> will be used in those cases. The dma mask check is done once the memory
> initialization is completed.
>
> This last change implies the IOVA mode can not be set depending on IOMMU
> hardware limitations, and it is assumed IOVA VA is possible. If the dma
> mask check reports a problem after memory initialization, the error
> message includes now advice for trying with --iova-mode option set to
> pa.
>
> The patchset also includes the dma mask check for legacy memory and the
> no hugepage option.
>
> Finally, all the DMA mask API has been updated for using the same prefix
> than other EAL memory code.
>
> An initial version of this patchset has been tested by Intel DPDK
> Validation team and it seems it solves all the problems reported. This
> final patchset has the same functionality with minor changes. I have
> successfully tested the patchset with my limited testbench.
>
> Alejandro Lucero (7):
> mem: fix call to DMA mask check
> mem: use proper prefix
> mem: add function for setting DMA mask
> bus/pci: avoid call to DMA mask check
> mem: modify error message for DMA mask check
> mem: add safe and unsafe versions for checking DMA mask
> eal/mem: use DMA mask check for legacy memory
>
> doc/guides/rel_notes/release_18_11.rst | 2 +-
> drivers/bus/pci/linux/pci.c | 11 ++++++-
> drivers/net/nfp/nfp_net.c | 2 +-
> lib/librte_eal/common/eal_common_memory.c | 36 +++++++++++++++++---
> lib/librte_eal/common/include/rte_memory.h | 37 +++++++++++++++++++--
> lib/librte_eal/common/malloc_heap.c | 38 ++++++++++++++++++----
> lib/librte_eal/linuxapp/eal/eal_memory.c | 17 ++++++++++
> lib/librte_eal/rte_eal_version.map | 4 ++-
> 8 files changed, 131 insertions(+), 16 deletions(-)
>
> --
> 2.17.1
>
>
I have just realized that there is a minor problem when compiling one patch
at a time, but everything does compile when all the patches are applied.
I will fix this in another version, but I will wait until review and testing
results have been reported.
* Re: [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-10-31 17:29 ` [dpdk-dev] [PATCH 2/7] mem: use proper prefix Alejandro Lucero
@ 2018-11-01 10:08 ` Burakov, Anatoly
2018-11-01 10:40 ` Alejandro Lucero
2018-11-01 14:50 ` Thomas Monjalon
0 siblings, 2 replies; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:08 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> Current name rte_eal_check_dma_mask does not follow the naming
> used in the rest of the file.
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
I don't think this belongs in the _mem_ namespace. It is usually used
for things to do with memory, while the DMA mask IMO sits firmly in the
domain of EAL, specifically the bus subsystem.
However, I don't have strong feelings one way or the other, so if you do
decide to go forward with this naming...
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index 04f624246..ef8126a97 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -295,7 +295,7 @@ EXPERIMENTAL {
> rte_devargs_parsef;
> rte_devargs_remove;
> rte_devargs_type_count;
> - rte_eal_check_dma_mask;
> + rte_mem_check_dma_mask;
...then this should be in alphabetical order.
> rte_eal_cleanup;
> rte_fbarray_attach;
> rte_fbarray_destroy;
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-10-31 17:29 ` [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask Alejandro Lucero
@ 2018-11-01 10:11 ` Burakov, Anatoly
2018-11-01 10:48 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:11 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> This patch adds the possibility of setting a dma mask to be used
> once the memory initialization is done.
>
> This is currently needed when IOVA mode is set by PCI related
> code and an x86 IOMMU hardware unit is present. Current code calls
> rte_mem_check_dma_mask but it is wrong to do so at that point
> because the memory has not been initialized yet.
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
> lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
> lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
> lib/librte_eal/rte_eal_version.map | 1 +
> 3 files changed, 21 insertions(+)
>
> diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
> index e0f08f39a..24b72fcb0 100644
> --- a/lib/librte_eal/common/eal_common_memory.c
> +++ b/lib/librte_eal/common/eal_common_memory.c
> @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
> return 0;
> }
>
> +/* set dma mask to use when memory initialization is done */
> +void __rte_experimental
> +rte_mem_set_dma_mask(uint8_t maskbits)
> +{
> + struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
> +
> + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
> + RTE_MIN(mcfg->dma_maskbits, maskbits);
> +}
> +
> /* return the number of memory channels */
> unsigned rte_memory_get_nchannel(void)
> {
> diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
> index ad3f3cfb0..eff028db1 100644
> --- a/lib/librte_eal/common/include/rte_memory.h
> +++ b/lib/librte_eal/common/include/rte_memory.h
> @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
> /* check memsegs iovas are within a range based on dma mask */
> int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
>
> +/**
> + * * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
I think there's a copy-paste error here (extra star and indent before
warning).
> + *
> + * Set dma mask to use once memory initialization is done.
> + * Previous function rte_mem_check_dma_mask can not be used
> + * safely until memory has been initialized.
IMO this really requires a big bold warning saying that this function
must only be called early during initialization, before memory is
initialized, and that it must not be called from user code.
> + */
> +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> +
> /**
> * Drivers based on uio will not load unless physical
> * addresses are obtainable. It is only possible to get
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index ef8126a97..ae24b5c73 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -296,6 +296,7 @@ EXPERIMENTAL {
> rte_devargs_remove;
> rte_devargs_type_count;
> rte_mem_check_dma_mask;
> + rte_mem_set_dma_mask;
Same, this needs to be alphabetic.
> rte_eal_cleanup;
> rte_fbarray_attach;
> rte_fbarray_destroy;
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 1/7] mem: fix call to DMA mask check
2018-10-31 17:29 ` [dpdk-dev] [PATCH 1/7] mem: fix call to " Alejandro Lucero
@ 2018-11-01 10:11 ` Burakov, Anatoly
0 siblings, 0 replies; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:11 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> The param needs to be the maskbits and not the mask.
>
> Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check
2018-10-31 17:29 ` [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
@ 2018-11-01 10:12 ` Burakov, Anatoly
0 siblings, 0 replies; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:12 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> Calling rte_mem_check_dma_mask when memory has not been initialized
> yet is wrong. This patch use rte_mem_set_dma_mask instead.
>
> Once memory initialization is done, the dma mask set will be used
> for checking memory mapped is within the specified mask.
>
> Fixes: fe822eb8c565 ("bus/pci: use IOVA DMA mask check when setting IOVA mode")
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
> drivers/bus/pci/linux/pci.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/bus/pci/linux/pci.c b/drivers/bus/pci/linux/pci.c
> index 0a81e063b..d87384c72 100644
> --- a/drivers/bus/pci/linux/pci.c
> +++ b/drivers/bus/pci/linux/pci.c
> @@ -590,7 +590,16 @@ pci_one_device_iommu_support_va(struct rte_pci_device *dev)
>
> mgaw = ((vtd_cap_reg & VTD_CAP_MGAW_MASK) >> VTD_CAP_MGAW_SHIFT) + 1;
>
> - return rte_mem_check_dma_mask(mgaw) == 0 ? true : false;
> + /*
> + * Assuming there is no limitation by now. We can not know at this point
> + * because the memory has not been initialized yet. Setting the dma mask
> + * will force a check once memory initialization is done. We can not do
> + * a fallback to IOVA PA now, but if the dma check fails, the error
> + * message should advice for using '--iova-mode pa' if IOVA VA is the
> + * current mode.
> + */
> + rte_mem_set_dma_mask(mgaw);
> + return true;
This is IMO a good solution to the circular dependency between setting
the DMA mask and memory init. At least I can't think of a better one :)
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> }
> #elif defined(RTE_ARCH_PPC_64)
> static bool
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 0/7] fix DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (7 preceding siblings ...)
2018-10-31 17:47 ` [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
@ 2018-11-01 10:13 ` Mattias Rönnblom
2018-11-01 17:28 ` Ferruh Yigit
9 siblings, 0 replies; 34+ messages in thread
From: Mattias Rönnblom @ 2018-11-01 10:13 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 2018-10-31 18:29, Alejandro Lucero wrote:
> A patchset sent introducing DMA mask checks has several critical
> issues precluding apps to execute. The patchset was reviewed and
> finally accepted after three versions. Obviously it did not go
> through the proper testing what can be explained, at least from my
> side, due to the big changes to the memory initialization code these
> last months. It turns out the patchset did work with legacy memory
> and I'm afraid that was mainly my testing.
>
> This patchset should solve the main problems reported:
>
> - deadlock during initialization
> - segmentation fault with secondary processes
>
The deadlock I reported[1] no longer occurs after applying this patch set.
[1] https://bugs.dpdk.org/show_bug.cgi?id=102
* Re: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
2018-10-31 17:29 ` [dpdk-dev] [PATCH 5/7] mem: modify error message for " Alejandro Lucero
@ 2018-11-01 10:29 ` Burakov, Anatoly
2018-11-01 11:03 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:29 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> If DMA mask checks shows mapped memory out of the supported range
> specified by the DMA mask, nothing can be done but return an error
> an report the error. This can imply the app not being executed at
> all or precluding dynamic memory allocation once the app is running.
> In any case, we can advice the user to force IOVA as PA if currently
> IOVA being VA and user being root.
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
General comment - legacy memory will also need this check, correct?
> lib/librte_eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++----
> 1 file changed, 31 insertions(+), 4 deletions(-)
>
> diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
> index 7d423089d..711622f19 100644
> --- a/lib/librte_eal/common/malloc_heap.c
> +++ b/lib/librte_eal/common/malloc_heap.c
> @@ -5,8 +5,10 @@
> #include <stddef.h>
> #include <stdlib.h>
> #include <stdio.h>
> +#include <unistd.h>
> #include <stdarg.h>
> #include <errno.h>
> +#include <sys/types.h>
> #include <sys/queue.h>
>
> #include <rte_memory.h>
> @@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
> size_t alloc_sz;
> int allocd_pages;
> void *ret, *map_addr;
> - uint64_t mask;
>
> alloc_sz = (size_t)pg_sz * n_segs;
>
> @@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
> goto fail;
> }
>
> + /* Once we have all the memseg lists configured, if there is a dma mask
> + * set, check iova addresses are not out of range. Otherwise the device
> + * setting the dma mask could have problems with the mapped memory.
> + *
> + * There are two situations when this can happen:
> + * 1) memory initialization
> + * 2) dynamic memory allocation
> + *
> + * For 1), an error when checking dma mask implies app can not be
> + * executed. For 2) implies the new memory can not be added.
> + */
> if (mcfg->dma_maskbits) {
> if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> - RTE_LOG(ERR, EAL,
> - "%s(): couldn't allocate memory due to DMA mask\n",
> - __func__);
> + /* Currently this can only happen if IOMMU is enabled
> + * with RTE_ARCH_X86. It is not safe to use this memory
> + * so returning an error here.
I don't think it's RTE_ARCH_X86-only. It can be any other arch with an
IOMMU that's reporting addressing limitations.
> + *
> + * If IOVA is VA, advice to try with '--iova-mode pa'
> + * which could solve some situations when IOVA VA is not
> + * really needed.
> + */
> + uid_t user = getuid();
> + if ((rte_eal_iova_mode() == RTE_IOVA_VA) && user == 0)
rte_eal_using_phys_addrs()?
(the above function name is a bit of a misnomer, it really checks if
they are available, but not necessarily used - it will return true in
RTE_IOVA_VA mode if you're running as root)
> + RTE_LOG(ERR, EAL,
> + "%s(): couldn't allocate memory due to DMA mask.\n"
> + "Try with 'iova-mode=pa'\n",
> + __func__);
> + else
> + RTE_LOG(ERR, EAL,
> + "%s(): couldn't allocate memory due to DMA mask\n",
> + __func__);
I don't think the error message is terribly descriptive. Looking at it
through the eyes of someone who sees it for the first time and who has
no idea what "iova-mode=pa" is, I think it would be more useful to word
it the following way:
couldn't allocate memory due to IOVA exceeding limits of current DMA mask.
[for non-using phys addrs case] Please try initializing EAL with
--iova-mode=pa parameter.
Also, generally newlines in RTE_LOG look ugly unless you indent the line :)
> goto fail;
> }
> }
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask
2018-10-31 17:29 ` [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask Alejandro Lucero
@ 2018-11-01 10:38 ` Burakov, Anatoly
2018-11-01 13:34 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:38 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> During memory initialization calling rte_mem_check_dma_mask
> leads to a deadlock because memory_hotplug_lock is locked by a
> writer, the current code in execution, and rte_memseg_walk
> tries to lock as a reader.
>
> This patch adds safe and unsafe versions for invoking the final
> function specifying if the memory_hotplug_lock needs to be
> acquired, this is for the safe version, or not, the unsafe one.
> PMDs should use the safe version and just internal EAL memory
> code should use the unsafe one.
>
> Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
I don't think _safe and _unsafe are good names. _unsafe implies
something might blow up, which isn't the case :) I think following the
naming convention established by other functions (rte_mem_check_dma_mask
and rte_mem_check_dma_mask_thread_unsafe) is better. User/driver code is
only supposed to use the safe rte_mem_check_dma_mask version anyway, so
there's no need to differentiate between the two if the other one is
never supposed to be used.
> drivers/net/nfp/nfp_net.c | 2 +-
> lib/librte_eal/common/eal_common_memory.c | 24 +++++++++++++++---
> lib/librte_eal/common/include/rte_memory.h | 29 +++++++++++++++++++---
> lib/librte_eal/common/malloc_heap.c | 2 +-
> lib/librte_eal/rte_eal_version.map | 3 ++-
> 5 files changed, 51 insertions(+), 9 deletions(-)
>
<...>
> -/* check memsegs iovas are within a range based on dma mask */
> -int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
> +/**
> + * * @warning
Here and in other places - same issue with extra star.
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Check memsegs iovas are within a range based on dma mask.
The comments make it seem like the parameter is an actual DMA mask,
rather than DMA mask *width*. In fact, you seem to have tripped yourself
up on that already :)
Suggested rewording:
Check if all currently allocated memory segments are compliant with
supplied DMA address width.
> + *
> + * @param maskbits
> + * Address width to check against.
> + */
> +int __rte_experimental rte_mem_check_dma_mask_safe(uint8_t maskbits);
> +
> +/**
> + * * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Check memsegs iovas are within a range based on dma mask without acquiring
> + * memory_hotplug_lock first.
> + *
> + * This function is just for EAL core memory internal use. Drivers should
> + * use the previous safe one.
This is IMO too detailed. Suggested rewording:
Check if all currently allocated memory segments are compliant with
supplied DMA address width.
@warning This function is not thread-safe and is for internal use only.
> + *
> + * @param maskbits
> + * Address width to check against.
> + */
> +int __rte_experimental rte_mem_check_dma_mask_unsafe(uint8_t maskbits);
>
> /**
> * * @warning
> * @b EXPERIMENTAL: this API may change without prior notice
> *
> * Set dma mask to use once memory initialization is done.
> - * Previous function rte_mem_check_dma_mask can not be used
> + * Previous functions rte_mem_check_dma_mask_safe/unsafe can not be used
> * safely until memory has been initialized.
> */
> void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
> index 711622f19..dd8b983e7 100644
> --- a/lib/librte_eal/common/malloc_heap.c
> +++ b/lib/librte_eal/common/malloc_heap.c
> @@ -335,7 +335,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
> * executed. For 2) implies the new memory can not be added.
> */
> if (mcfg->dma_maskbits) {
> - if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> /* Currently this can only happen if IOMMU is enabled
> * with RTE_ARCH_X86. It is not safe to use this memory
> * so returning an error here.
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index ae24b5c73..f863903b6 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -296,7 +296,8 @@ EXPERIMENTAL {
> rte_devargs_remove;
> rte_devargs_type_count;
> rte_mem_check_dma_mask;
> - rte_mem_set_dma_mask;
> + rte_mem_set_dma_mask_safe;
> + rte_mem_set_dma_mask_unsafe;
Again, alphabet :)
> rte_eal_cleanup;
> rte_fbarray_attach;
> rte_fbarray_destroy;
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-11-01 10:08 ` Burakov, Anatoly
@ 2018-11-01 10:40 ` Alejandro Lucero
2018-11-01 14:50 ` Thomas Monjalon
1 sibling, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 10:40 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:08 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > Current name rte_eal_check_dma_mask does not follow the naming
> > used in the rest of the file.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> I don't think this belongs in the _mem_ namespace. It is usually used
> for things to do with memory, while the DMA mask IMO sits firmly in the
> domain of EAL, specifically bus subsystem.
>
> However, i don't have strong feelings one way or the other, so if you do
> decide to go forward with this naming...
>
>
This naming change was suggested by Thomas. I'm fine with any naming we
decide to use.
> > diff --git a/lib/librte_eal/rte_eal_version.map
> b/lib/librte_eal/rte_eal_version.map
> > index 04f624246..ef8126a97 100644
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -295,7 +295,7 @@ EXPERIMENTAL {
> > rte_devargs_parsef;
> > rte_devargs_remove;
> > rte_devargs_type_count;
> > - rte_eal_check_dma_mask;
> > + rte_mem_check_dma_mask;
>
> ...then this should be in alphabetical order.
>
> > rte_eal_cleanup;
> > rte_fbarray_attach;
> > rte_fbarray_destroy;
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory
2018-10-31 17:29 ` [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
@ 2018-11-01 10:40 ` Burakov, Anatoly
2018-11-01 13:39 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:40 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> If a device reports addressing limitations through a dma mask,
> the IOVAs for mapped memory needs to be checked out for ensuring
> correct functionality.
>
> Previous patches introduced this DMA check for main memory code
> currently being used but other options like legacy memory and the
> no hugepages one need to be also considered.
>
> This patch adds the DMA check for those cases.
>
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> ---
IMO this needs to be integrated with patch 5.
> lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
> index fce86fda6..2a3a8c7a3 100644
> --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> @@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
>
> addr = RTE_PTR_ADD(addr, (size_t)page_sz);
> }
> + if (mcfg->dma_maskbits) {
> + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> + RTE_LOG(ERR, EAL,
> + "%s(): couldn't allocate memory due to DMA mask\n",
I would use suggested rewording from patch 5 :)
> + __func__);
> + goto fail;
> + }
> + }
> return 0;
> }
>
> @@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
> rte_fbarray_destroy(&msl->memseg_arr);
> }
>
> + if (mcfg->dma_maskbits) {
> + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> + RTE_LOG(ERR, EAL,
> + "%s(): couldn't allocate memory due to DMA mask\n",
Same as above.
> + __func__);
> + goto fail;
> + }
> + }
> +
> return 0;
>
> fail:
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-11-01 10:11 ` Burakov, Anatoly
@ 2018-11-01 10:48 ` Alejandro Lucero
2018-11-01 10:57 ` Burakov, Anatoly
0 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 10:48 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:11 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > This patch adds the possibility of setting a dma mask to be used
> > once the memory initialization is done.
> >
> > This is currently needed when IOVA mode is set by PCI related
> > code and an x86 IOMMU hardware unit is present. Current code calls
> > rte_mem_check_dma_mask but it is wrong to do so at that point
> > because the memory has not been initialized yet.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
> > lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
> > lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
> > lib/librte_eal/rte_eal_version.map | 1 +
> > 3 files changed, 21 insertions(+)
> >
> > diff --git a/lib/librte_eal/common/eal_common_memory.c
> b/lib/librte_eal/common/eal_common_memory.c
> > index e0f08f39a..24b72fcb0 100644
> > --- a/lib/librte_eal/common/eal_common_memory.c
> > +++ b/lib/librte_eal/common/eal_common_memory.c
> > @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
> > return 0;
> > }
> >
> > +/* set dma mask to use when memory initialization is done */
> > +void __rte_experimental
> > +rte_mem_set_dma_mask(uint8_t maskbits)
> > +{
> > + struct rte_mem_config *mcfg =
> rte_eal_get_configuration()->mem_config;
> > +
> > + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
> > + RTE_MIN(mcfg->dma_maskbits, maskbits);
> > +}
> > +
> > /* return the number of memory channels */
> > unsigned rte_memory_get_nchannel(void)
> > {
> > diff --git a/lib/librte_eal/common/include/rte_memory.h
> b/lib/librte_eal/common/include/rte_memory.h
> > index ad3f3cfb0..eff028db1 100644
> > --- a/lib/librte_eal/common/include/rte_memory.h
> > +++ b/lib/librte_eal/common/include/rte_memory.h
> > @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
> > /* check memsegs iovas are within a range based on dma mask */
> > int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
> >
> > +/**
> > + * * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
>
> I think there's a copy-paste error here (extra star and indent before
> warning).
>
I will fix this.
Thanks.
>
> > + *
> > + * Set dma mask to use once memory initialization is done.
> > + * Previous function rte_mem_check_dma_mask can not be used
> > + * safely until memory has been initialized.
>
> IMO this really requires a big bold warning saying that this function
> must only be called early during initialization, before memory is
> initialized, and not to be called in user code.
>
>
What about adding a new EAL variable for keeping the memory initialization
status?
It would be something like "uninit" until it is changed to "done" after
memory initialization is completed.
That variable would help avoid calls that set the dma mask after
initialization.
> > + */
> > +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> > +
> > /**
> > * Drivers based on uio will not load unless physical
> > * addresses are obtainable. It is only possible to get
> > diff --git a/lib/librte_eal/rte_eal_version.map
> b/lib/librte_eal/rte_eal_version.map
> > index ef8126a97..ae24b5c73 100644
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -296,6 +296,7 @@ EXPERIMENTAL {
> > rte_devargs_remove;
> > rte_devargs_type_count;
> > rte_mem_check_dma_mask;
> > + rte_mem_set_dma_mask;
>
> Same, this needs to be alphabetic.
>
> > rte_eal_cleanup;
> > rte_fbarray_attach;
> > rte_fbarray_destroy;
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-11-01 10:48 ` Alejandro Lucero
@ 2018-11-01 10:57 ` Burakov, Anatoly
2018-11-01 11:30 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 10:57 UTC (permalink / raw)
To: Alejandro Lucero; +Cc: dev
On 01-Nov-18 10:48 AM, Alejandro Lucero wrote:
>
>
> On Thu, Nov 1, 2018 at 10:11 AM Burakov, Anatoly
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
>
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > This patch adds the possibility of setting a dma mask to be used
> > once the memory initialization is done.
> >
> > This is currently needed when IOVA mode is set by PCI related
> > code and an x86 IOMMU hardware unit is present. Current code calls
> > rte_mem_check_dma_mask but it is wrong to do so at that point
> > because the memory has not been initialized yet.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> <mailto:alejandro.lucero@netronome.com>>
> > ---
> > lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
> > lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
> > lib/librte_eal/rte_eal_version.map | 1 +
> > 3 files changed, 21 insertions(+)
> >
> > diff --git a/lib/librte_eal/common/eal_common_memory.c
> b/lib/librte_eal/common/eal_common_memory.c
> > index e0f08f39a..24b72fcb0 100644
> > --- a/lib/librte_eal/common/eal_common_memory.c
> > +++ b/lib/librte_eal/common/eal_common_memory.c
> > @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
> > return 0;
> > }
> >
> > +/* set dma mask to use when memory initialization is done */
> > +void __rte_experimental
> > +rte_mem_set_dma_mask(uint8_t maskbits)
> > +{
> > + struct rte_mem_config *mcfg =
> rte_eal_get_configuration()->mem_config;
> > +
> > + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
> > + RTE_MIN(mcfg->dma_maskbits, maskbits);
> > +}
> > +
> > /* return the number of memory channels */
> > unsigned rte_memory_get_nchannel(void)
> > {
> > diff --git a/lib/librte_eal/common/include/rte_memory.h
> b/lib/librte_eal/common/include/rte_memory.h
> > index ad3f3cfb0..eff028db1 100644
> > --- a/lib/librte_eal/common/include/rte_memory.h
> > +++ b/lib/librte_eal/common/include/rte_memory.h
> > @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
> > /* check memsegs iovas are within a range based on dma mask */
> > int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
> >
> > +/**
> > + * * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
>
> I think there's a copy-paste error here (extra star and indent before
> warning).
>
>
> I will fix this.
>
> Thanks.
>
>
> > + *
> > + * Set dma mask to use once memory initialization is done.
> > + * Previous function rte_mem_check_dma_mask can not be used
> > + * safely until memory has been initialized.
>
> IMO this really requires a big bold warning saying that this function
> must only be called early during initialization, before memory is
> initialized, and not to be called in user code.
>
>
> what about adding a new EAL variable for keeping memory initialization
> status?
> It would be something like "uninit" until is changed to "done" after the
> memory initialization is completed.
> That variable would help to avoid effective calls to set the dma mask
> after initialization.
I'm not opposed to it in principle, however, this is a slippery slope
towards having each and every subsystem storing their init status :)
That said, while it's not a complete solution because it won't prevent
spurious calls to this function _after_ memory has initialized, there is
a variable in internal_config that you can use to deny calls to this
function after EAL init is complete (see internal_config->init_complete).
You could also store a static variable in eal_common_memory.c, and
change it at the end of rte_eal_memory_init() call, but that does not
really complete the memory initialization, because we then go to
rte_eal_malloc_heap_init(), which actually completes the memory init.
I would greatly prefer using internal_config->init_complete - it is
enough to forbid user code from calling it, while driver/DPDK developers
presumably know what they're doing enough to read the warning and not
try to call this after memory init :)
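A minimal sketch of what such a guard could look like, assuming
internal_config (and its init_complete field) is visible from
eal_common_memory.c; whether this survives into the final version is a
separate question:

    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_eal.h>
    #include <rte_eal_memconfig.h>
    #include <rte_log.h>
    #include <rte_memory.h>
    #include "eal_internal_cfg.h"   /* assumed to declare internal_config */

    void __rte_experimental
    rte_mem_set_dma_mask(uint8_t maskbits)
    {
            struct rte_mem_config *mcfg =
                    rte_eal_get_configuration()->mem_config;

            /* refuse calls from user code once EAL init has finished */
            if (internal_config.init_complete) {
                    RTE_LOG(ERR, EAL,
                            "%s(): not allowed after EAL initialization\n",
                            __func__);
                    return;
            }

            mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
                                 RTE_MIN(mcfg->dma_maskbits, maskbits);
    }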
>
> > + */
> > +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> > +
> > /**
> > * Drivers based on uio will not load unless physical
> > * addresses are obtainable. It is only possible to get
> > diff --git a/lib/librte_eal/rte_eal_version.map
> b/lib/librte_eal/rte_eal_version.map
> > index ef8126a97..ae24b5c73 100644
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -296,6 +296,7 @@ EXPERIMENTAL {
> > rte_devargs_remove;
> > rte_devargs_type_count;
> > rte_mem_check_dma_mask;
> > + rte_mem_set_dma_mask;
>
> Same, this needs to be alphabetic.
>
> > rte_eal_cleanup;
> > rte_fbarray_attach;
> > rte_fbarray_destroy;
> >
>
>
> --
> Thanks,
> Anatoly
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
2018-11-01 10:29 ` Burakov, Anatoly
@ 2018-11-01 11:03 ` Alejandro Lucero
2018-11-01 11:12 ` Burakov, Anatoly
0 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 11:03 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:29 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > If DMA mask checks shows mapped memory out of the supported range
> > specified by the DMA mask, nothing can be done but return an error
> > an report the error. This can imply the app not being executed at
> > all or precluding dynamic memory allocation once the app is running.
> > In any case, we can advice the user to force IOVA as PA if currently
> > IOVA being VA and user being root.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> General comment - legacy memory will also need this check, correct?
>
>
Yes, there is another patch adding this for both the legacy-mem and no-huge
options.
> > lib/librte_eal/common/malloc_heap.c | 35 +++++++++++++++++++++++++----
> > 1 file changed, 31 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/librte_eal/common/malloc_heap.c
> b/lib/librte_eal/common/malloc_heap.c
> > index 7d423089d..711622f19 100644
> > --- a/lib/librte_eal/common/malloc_heap.c
> > +++ b/lib/librte_eal/common/malloc_heap.c
> > @@ -5,8 +5,10 @@
> > #include <stddef.h>
> > #include <stdlib.h>
> > #include <stdio.h>
> > +#include <unistd.h>
> > #include <stdarg.h>
> > #include <errno.h>
> > +#include <sys/types.h>
> > #include <sys/queue.h>
> >
> > #include <rte_memory.h>
> > @@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap,
> uint64_t pg_sz, size_t elt_size,
> > size_t alloc_sz;
> > int allocd_pages;
> > void *ret, *map_addr;
> > - uint64_t mask;
> >
> > alloc_sz = (size_t)pg_sz * n_segs;
> >
> > @@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap *heap,
> uint64_t pg_sz, size_t elt_size,
> > goto fail;
> > }
> >
> > + /* Once we have all the memseg lists configured, if there is a dma
> mask
> > + * set, check iova addresses are not out of range. Otherwise the
> device
> > + * setting the dma mask could have problems with the mapped memory.
> > + *
> > + * There are two situations when this can happen:
> > + * 1) memory initialization
> > + * 2) dynamic memory allocation
> > + *
> > + * For 1), an error when checking dma mask implies app can not be
> > + * executed. For 2) implies the new memory can not be added.
> > + */
> > if (mcfg->dma_maskbits) {
> > if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> > - RTE_LOG(ERR, EAL,
> > - "%s(): couldn't allocate memory due to DMA
> mask\n",
> > - __func__);
> > + /* Currently this can only happen if IOMMU is
> enabled
> > + * with RTE_ARCH_X86. It is not safe to use this
> memory
> > + * so returning an error here.
>
> I don't think it's RTE_ARCH_X86-only. It can be any other arch with an
> IOMMU that's reporting addressing limitations.
>
Right, but currently it is only IOMMU hardware from this architecture that
has this limitation.
I was just trying to explain why this can happen, but I can remove the
reference to specific architecture problems.
> > + *
> > + * If IOVA is VA, advice to try with '--iova-mode
> pa'
> > + * which could solve some situations when IOVA VA
> is not
> > + * really needed.
> > + */
> > + uid_t user = getuid();
> > + if ((rte_eal_iova_mode() == RTE_IOVA_VA) && user
> == 0)
>
> rte_eal_using_phys_addrs()?
>
> (the above function name is a bit of a misnomer, it really checks if
> they are available, but not necessarily used - it will return true in
> RTE_IOVA_VA mode if you're running as root)
>
rte_eal_iova_mode returns rte_eal_get_configuration()->iova_mode, which
is set during initialization. It can be PA not just because of the IOMMU
(not after this patch) but because some PMD does not report IOVA VA support
or because UIO is in use.
Checking for root is because IOVA PA can not be used by a non-root user.
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate memory
> due to DMA mask.\n"
> > + "Try with 'iova-mode=pa'\n",
> > + __func__);
> > + else
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate memory
> due to DMA mask\n",
> > + __func__);
>
> I don't think the error message is terribly descriptive. Looking at it
> through the eyes of someone who sees it for the first time and who has
> no idea what "iova-mode=pa" is, i think it would be more useful to word
> it the following way:
>
> couldn't allocate memory due to IOVA exceeding limits of current DMA mask.
> [for non-using phys addrs case] Please try initializing EAL with
> --iova-mode=pa parameter.
>
>
I'm happy with using your terrific description instead ;-)
Thanks!
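A rough sketch of how the reworked branch in alloc_pages_on_heap() could
read with that wording, using rte_eal_using_phys_addrs() instead of the uid
check. The helper name is made up, and patch 6 later switches the check call
to a thread-unsafe variant in this spot:

    #include <stdint.h>
    #include <rte_eal.h>
    #include <rte_eal_memconfig.h>
    #include <rte_log.h>
    #include <rte_memory.h>
    #include "eal_private.h" /* assumed to declare rte_eal_using_phys_addrs() */

    /* Illustrative helper -- not the actual patch. */
    static int
    heap_check_dma_mask(const struct rte_mem_config *mcfg)
    {
            if (mcfg->dma_maskbits == 0)
                    return 0;
            if (rte_mem_check_dma_mask(mcfg->dma_maskbits) == 0)
                    return 0;

            RTE_LOG(ERR, EAL,
                    "%s(): couldn't allocate memory due to IOVA exceeding"
                    " limits of current DMA mask\n", __func__);
            /* only suggest PA mode when physical addresses are available */
            if (rte_eal_iova_mode() == RTE_IOVA_VA &&
                            rte_eal_using_phys_addrs())
                    RTE_LOG(ERR, EAL,
                            "\tPlease try initializing EAL with"
                            " --iova-mode=pa parameter\n");
            return -1;
    }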
> Also, generally newlines in RTE_LOG look ugly unless you indent the line :)
>
> > goto fail;
> > }
> > }
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
2018-11-01 11:03 ` Alejandro Lucero
@ 2018-11-01 11:12 ` Burakov, Anatoly
2018-11-01 11:32 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 11:12 UTC (permalink / raw)
To: Alejandro Lucero; +Cc: dev
On 01-Nov-18 11:03 AM, Alejandro Lucero wrote:
>
>
> On Thu, Nov 1, 2018 at 10:29 AM Burakov, Anatoly
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
>
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > If DMA mask checks shows mapped memory out of the supported range
> > specified by the DMA mask, nothing can be done but return an error
> > an report the error. This can imply the app not being executed at
> > all or precluding dynamic memory allocation once the app is running.
> > In any case, we can advice the user to force IOVA as PA if currently
> > IOVA being VA and user being root.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> <mailto:alejandro.lucero@netronome.com>>
> > ---
>
> General comment - legacy memory will also need this check, correct?
>
>
> Yes, there is another patch adding this for both, legacy-mem and no-huge
> options.
>
> > lib/librte_eal/common/malloc_heap.c | 35
> +++++++++++++++++++++++++----
> > 1 file changed, 31 insertions(+), 4 deletions(-)
> >
> > diff --git a/lib/librte_eal/common/malloc_heap.c
> b/lib/librte_eal/common/malloc_heap.c
> > index 7d423089d..711622f19 100644
> > --- a/lib/librte_eal/common/malloc_heap.c
> > +++ b/lib/librte_eal/common/malloc_heap.c
> > @@ -5,8 +5,10 @@
> > #include <stddef.h>
> > #include <stdlib.h>
> > #include <stdio.h>
> > +#include <unistd.h>
> > #include <stdarg.h>
> > #include <errno.h>
> > +#include <sys/types.h>
> > #include <sys/queue.h>
> >
> > #include <rte_memory.h>
> > @@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap,
> uint64_t pg_sz, size_t elt_size,
> > size_t alloc_sz;
> > int allocd_pages;
> > void *ret, *map_addr;
> > - uint64_t mask;
> >
> > alloc_sz = (size_t)pg_sz * n_segs;
> >
> > @@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap
> *heap, uint64_t pg_sz, size_t elt_size,
> > goto fail;
> > }
> >
> > + /* Once we have all the memseg lists configured, if there
> is a dma mask
> > + * set, check iova addresses are not out of range.
> Otherwise the device
> > + * setting the dma mask could have problems with the mapped
> memory.
> > + *
> > + * There are two situations when this can happen:
> > + * 1) memory initialization
> > + * 2) dynamic memory allocation
> > + *
> > + * For 1), an error when checking dma mask implies app can
> not be
> > + * executed. For 2) implies the new memory can not be added.
> > + */
> > if (mcfg->dma_maskbits) {
> > if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> > - RTE_LOG(ERR, EAL,
> > - "%s(): couldn't allocate memory due
> to DMA mask\n",
> > - __func__);
> > + /* Currently this can only happen if IOMMU
> is enabled
> > + * with RTE_ARCH_X86. It is not safe to use
> this memory
> > + * so returning an error here.
>
> I don't think it's RTE_ARCH_X86-only. It can be any other arch with an
> IOMMU that's reporting addressing limitations.
>
>
> Right, but it is just IOMMU hardware from this architecture having the
> current limitation.
> I was trying to just explain why this can happen but I can remove the
> reference to specific
> architecture problems.
>
>
> > + *
> > + * If IOVA is VA, advice to try with
> '--iova-mode pa'
> > + * which could solve some situations when
> IOVA VA is not
> > + * really needed.
> > + */
> > + uid_t user = getuid();
> > + if ((rte_eal_iova_mode() == RTE_IOVA_VA) &&
> user == 0)
>
> rte_eal_using_phys_addrs()?
>
> (the above function name is a bit of a misnomer, it really checks if
> they are available, but not necessarily used - it will return true in
> RTE_IOVA_VA mode if you're running as root)
>
>
> rte_eal_iova_mode returns rte_eal_get_configuration()->iova_mode what
> is set during initialization. It can be PA not just because IOMMU (not
> after the patch)
> but because some PMD does not reports IOVA VA support or because UIO is
> in use.
> Checking for root is because IOVA PA can not be used if non root.
You've misinterpreted my comment.
rte_eal_using_phys_addrs() will effectively return true if you're
running as root. There's no need for an uid check.
The "misnomer" comment was about the rte_eal_using_phys_addrs() - it
reads like it would return false in IOVA_VA mode, but in reality, it
will return true even if IOVA_VA mode - it really should be named
"rte_eal_phys_addrs_available()" rather than
"rte_eal_using_phys_addrs()". This would make it clearer.
>
>
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate
> memory due to DMA mask.\n"
> > + "Try with 'iova-mode=pa'\n",
> > + __func__);
> > + else
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate
> memory due to DMA mask\n",
> > + __func__);
>
> I don't think the error message is terribly descriptive. Looking at it
> through the eyes of someone who sees it for the first time and who has
> no idea what "iova-mode=pa" is, i think it would be more useful to word
> it the following way:
>
> couldn't allocate memory due to IOVA exceeding limits of current DMA
> mask.
> [for non-using phys addrs case] Please try initializing EAL with
> --iova-mode=pa parameter.
>
>
> I'm happy with using your terrific description instead ;-)
> Thanks!
>
> Also, generally newlines in RTE_LOG look ugly unless you indent the
> line :)
>
> > goto fail;
> > }
> > }
> >
>
>
> --
> Thanks,
> Anatoly
>
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-11-01 10:57 ` Burakov, Anatoly
@ 2018-11-01 11:30 ` Alejandro Lucero
2018-11-01 14:32 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 11:30 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:57 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 01-Nov-18 10:48 AM, Alejandro Lucero wrote:
> >
> >
> > On Thu, Nov 1, 2018 at 10:11 AM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> > On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > > This patch adds the possibility of setting a dma mask to be used
> > > once the memory initialization is done.
> > >
> > > This is currently needed when IOVA mode is set by PCI related
> > > code and an x86 IOMMU hardware unit is present. Current code calls
> > > rte_mem_check_dma_mask but it is wrong to do so at that point
> > > because the memory has not been initialized yet.
> > >
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> > <mailto:alejandro.lucero@netronome.com>>
> > > ---
> > > lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
> > > lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
> > > lib/librte_eal/rte_eal_version.map | 1 +
> > > 3 files changed, 21 insertions(+)
> > >
> > > diff --git a/lib/librte_eal/common/eal_common_memory.c
> > b/lib/librte_eal/common/eal_common_memory.c
> > > index e0f08f39a..24b72fcb0 100644
> > > --- a/lib/librte_eal/common/eal_common_memory.c
> > > +++ b/lib/librte_eal/common/eal_common_memory.c
> > > @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
> > > return 0;
> > > }
> > >
> > > +/* set dma mask to use when memory initialization is done */
> > > +void __rte_experimental
> > > +rte_mem_set_dma_mask(uint8_t maskbits)
> > > +{
> > > + struct rte_mem_config *mcfg =
> > rte_eal_get_configuration()->mem_config;
> > > +
> > > + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
> > > + RTE_MIN(mcfg->dma_maskbits, maskbits);
> > > +}
> > > +
> > > /* return the number of memory channels */
> > > unsigned rte_memory_get_nchannel(void)
> > > {
> > > diff --git a/lib/librte_eal/common/include/rte_memory.h
> > b/lib/librte_eal/common/include/rte_memory.h
> > > index ad3f3cfb0..eff028db1 100644
> > > --- a/lib/librte_eal/common/include/rte_memory.h
> > > +++ b/lib/librte_eal/common/include/rte_memory.h
> > > @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
> > > /* check memsegs iovas are within a range based on dma mask */
> > > int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
> > >
> > > +/**
> > > + * * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice
> >
> > I think there's a copy-paste error here (extra star and indent before
> > warning).
> >
> >
> > I will fix this.
> >
> > Thanks.
> >
> >
> > > + *
> > > + * Set dma mask to use once memory initialization is done.
> > > + * Previous function rte_mem_check_dma_mask can not be used
> > > + * safely until memory has been initialized.
> >
> > IMO this really requires a big bold warning saying that this function
> > must only be called early during initialization, before memory is
> > initialized, and not to be called in user code.
> >
> >
> > what about adding a new EAL variable for keeping memory initialization
> > status?
> > It would be something like "uninit" until is changed to "done" after the
> > memory initialization is completed.
> > That variable would help to avoid effective calls to set the dma mask
> > after initialization.
>
> I'm not opposed to it in principle, however, this is a slippery slope
> towards having each and every subsystem storing their init status :)
>
> That said, while it's not a complete solution because it won't prevent
> spurious calls to this function _after_ memory has initialized, there is
> a variable in internal_config that you can use to deny calls to this
> function after EAL init is complete (see internal_config->init_complete).
>
> You could also store a static variable in eal_common_memory.c, and
> change it at the end of rte_eal_memory_init() call, but that does not
> really complete the memory initialization, because we then go to
> rte_eal_malloc_heap_init(), which actually completes the memory init.
>
> I would greatly prefer using internal_config->init_complete - it is
> enough to forbid user code from calling it, while driver/DPDK developers
> presumably know what they're doing enough to read the warning and not
> try to call this after memory init :)
>
I agree that we should avoid new status variables as I proposed, and in
this case
I think using internal_config->init_complete will be enough.
I will add the check in the next version.
Thanks
>
> >
> > > + */
> > > +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> > > +
> > > /**
> > > * Drivers based on uio will not load unless physical
> > > * addresses are obtainable. It is only possible to get
> > > diff --git a/lib/librte_eal/rte_eal_version.map
> > b/lib/librte_eal/rte_eal_version.map
> > > index ef8126a97..ae24b5c73 100644
> > > --- a/lib/librte_eal/rte_eal_version.map
> > > +++ b/lib/librte_eal/rte_eal_version.map
> > > @@ -296,6 +296,7 @@ EXPERIMENTAL {
> > > rte_devargs_remove;
> > > rte_devargs_type_count;
> > > rte_mem_check_dma_mask;
> > > + rte_mem_set_dma_mask;
> >
> > Same, this needs to be alphabetic.
> >
> > > rte_eal_cleanup;
> > > rte_fbarray_attach;
> > > rte_fbarray_destroy;
> > >
> >
> >
> > --
> > Thanks,
> > Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 5/7] mem: modify error message for DMA mask check
2018-11-01 11:12 ` Burakov, Anatoly
@ 2018-11-01 11:32 ` Alejandro Lucero
0 siblings, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 11:32 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 11:12 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 01-Nov-18 11:03 AM, Alejandro Lucero wrote:
> >
> >
> > On Thu, Nov 1, 2018 at 10:29 AM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> > On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > > If DMA mask checks shows mapped memory out of the supported range
> > > specified by the DMA mask, nothing can be done but return an error
> > > an report the error. This can imply the app not being executed at
> > > all or precluding dynamic memory allocation once the app is
> running.
> > > In any case, we can advice the user to force IOVA as PA if
> currently
> > > IOVA being VA and user being root.
> > >
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> > <mailto:alejandro.lucero@netronome.com>>
> > > ---
> >
> > General comment - legacy memory will also need this check, correct?
> >
> >
> > Yes, there is another patch adding this for both, legacy-mem and no-huge
> > options.
> >
> > > lib/librte_eal/common/malloc_heap.c | 35
> > +++++++++++++++++++++++++----
> > > 1 file changed, 31 insertions(+), 4 deletions(-)
> > >
> > > diff --git a/lib/librte_eal/common/malloc_heap.c
> > b/lib/librte_eal/common/malloc_heap.c
> > > index 7d423089d..711622f19 100644
> > > --- a/lib/librte_eal/common/malloc_heap.c
> > > +++ b/lib/librte_eal/common/malloc_heap.c
> > > @@ -5,8 +5,10 @@
> > > #include <stddef.h>
> > > #include <stdlib.h>
> > > #include <stdio.h>
> > > +#include <unistd.h>
> > > #include <stdarg.h>
> > > #include <errno.h>
> > > +#include <sys/types.h>
> > > #include <sys/queue.h>
> > >
> > > #include <rte_memory.h>
> > > @@ -294,7 +296,6 @@ alloc_pages_on_heap(struct malloc_heap *heap,
> > uint64_t pg_sz, size_t elt_size,
> > > size_t alloc_sz;
> > > int allocd_pages;
> > > void *ret, *map_addr;
> > > - uint64_t mask;
> > >
> > > alloc_sz = (size_t)pg_sz * n_segs;
> > >
> > > @@ -322,11 +323,37 @@ alloc_pages_on_heap(struct malloc_heap
> > *heap, uint64_t pg_sz, size_t elt_size,
> > > goto fail;
> > > }
> > >
> > > + /* Once we have all the memseg lists configured, if there
> > is a dma mask
> > > + * set, check iova addresses are not out of range.
> > Otherwise the device
> > > + * setting the dma mask could have problems with the mapped
> > memory.
> > > + *
> > > + * There are two situations when this can happen:
> > > + * 1) memory initialization
> > > + * 2) dynamic memory allocation
> > > + *
> > > + * For 1), an error when checking dma mask implies app can
> > not be
> > > + * executed. For 2) implies the new memory can not be added.
> > > + */
> > > if (mcfg->dma_maskbits) {
> > > if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> > > - RTE_LOG(ERR, EAL,
> > > - "%s(): couldn't allocate memory due
> > to DMA mask\n",
> > > - __func__);
> > > + /* Currently this can only happen if IOMMU
> > is enabled
> > > + * with RTE_ARCH_X86. It is not safe to use
> > this memory
> > > + * so returning an error here.
> >
> > I don't think it's RTE_ARCH_X86-only. It can be any other arch with
> an
> > IOMMU that's reporting addressing limitations.
> >
> >
> > Right, but it is just IOMMU hardware from this architecture having the
> > current limitation.
> > I was trying to just explain why this can happen but I can remove the
> > reference to specific
> > architecture problems.
> >
> >
> > > + *
> > > + * If IOVA is VA, advice to try with
> > '--iova-mode pa'
> > > + * which could solve some situations when
> > IOVA VA is not
> > > + * really needed.
> > > + */
> > > + uid_t user = getuid();
> > > + if ((rte_eal_iova_mode() == RTE_IOVA_VA) &&
> > user == 0)
> >
> > rte_eal_using_phys_addrs()?
> >
> > (the above function name is a bit of a misnomer, it really checks if
> > they are available, but not necessarily used - it will return true in
> > RTE_IOVA_VA mode if you're running as root)
> >
> >
> > rte_eal_iova_mode returns rte_eal_get_configuration()->iova_mode what
> > is set during initialization. It can be PA not just because IOMMU (not
> > after the patch)
> > but because some PMD does not reports IOVA VA support or because UIO is
> > in use.
> > Checking for root is because IOVA PA can not be used if non root.
>
> You've misinterpreted my comment.
>
> rte_eal_using_phys_addrs() will effectively return true if you're
> running as root. There's no need for an uid check.
>
Ok. I got it now, and it is definitely better than adding the uid check.
Thanks
> The "misnomer" comment was about the rte_eal_using_phys_addrs() - it
> reads like it would return false in IOVA_VA mode, but in reality, it
> will return true even if IOVA_VA mode - it really should be named
> "rte_eal_phys_addrs_available()" rather than
> "rte_eal_using_phys_addrs()". This would make it clearer.
>
> >
> >
> > > + RTE_LOG(ERR, EAL,
> > > + "%s(): couldn't allocate
> > memory due to DMA mask.\n"
> > > + "Try with 'iova-mode=pa'\n",
> > > + __func__);
> > > + else
> > > + RTE_LOG(ERR, EAL,
> > > + "%s(): couldn't allocate
> > memory due to DMA mask\n",
> > > + __func__);
> >
> > I don't think the error message is terribly descriptive. Looking at
> it
> > through the eyes of someone who sees it for the first time and who
> has
> > no idea what "iova-mode=pa" is, i think it would be more useful to
> word
> > it the following way:
> >
> > couldn't allocate memory due to IOVA exceeding limits of current DMA
> > mask.
> > [for non-using phys addrs case] Please try initializing EAL with
> > --iova-mode=pa parameter.
> >
> >
> > I'm happy with using your terrific description instead ;-)
> > Thanks!
> >
> > Also, generally newlines in RTE_LOG look ugly unless you indent the
> > line :)
> >
> > > goto fail;
> > > }
> > > }
> > >
> >
> >
> > --
> > Thanks,
> > Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask
2018-11-01 10:38 ` Burakov, Anatoly
@ 2018-11-01 13:34 ` Alejandro Lucero
0 siblings, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 13:34 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:38 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > During memory initialization calling rte_mem_check_dma_mask
> > leads to a deadlock because memory_hotplug_lock is locked by a
> > writer, the current code in execution, and rte_memseg_walk
> > tries to lock as a reader.
> >
> > This patch adds safe and unsafe versions for invoking the final
> > function specifying if the memory_hotplug_lock needs to be
> > acquired, this is for the safe version, or not, the unsafe one.
> > PMDs should use the safe version and just internal EAL memory
> > code should use the unsafe one.
> >
> > Fixes: 223b7f1d5ef6 ("mem: add function for checking memseg IOVA")
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> I don't think _safe and _unsafe are good names. _unsafe implies
> something might blow up, which isn't the case :) I think following the
> naming convention established by other functions (rte_mem_check_dma_mask
> and rte_mem_check_dma_mask_thread_unsafe) is better. User/driver code is
> only supposed to use rte_mem_check_dma_mask safe version anyway, so
> there's no need to differentiate between the two if the other one is
> never supposed to be used.
>
>
It makes sense. I will do it in next version.
Thanks
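With that naming, the split could be shaped roughly as below, reusing the
per-memseg walk callback already in eal_common_memory.c (reproduced here as
a minimal stand-in) and a boolean-taking internal helper. Only the two
public names reflect what was agreed above; the rest is an assumption:

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_common.h>
    #include <rte_memory.h>

    /* Minimal stand-in for the existing check_iova() walk callback:
     * a non-zero return stops the walk and flags a violation.
     */
    static int
    check_iova(const struct rte_memseg_list *msl __rte_unused,
               const struct rte_memseg *ms, void *arg)
    {
            uint64_t *mask = arg;

            if (ms->iova == RTE_BAD_IOVA)
                    return 0;
            if ((ms->iova & *mask) != 0 ||
                            ((ms->iova + ms->len - 1) & *mask) != 0)
                    return 1;
            return 0;
    }

    static int
    check_dma_mask(uint8_t maskbits, bool thread_unsafe)
    {
            uint64_t mask;

            /* guard the shift; presumably the real code bounds-checks too */
            if (maskbits == 0 || maskbits >= 64)
                    return -1;

            /* bits above 'maskbits' must be zero in every segment IOVA */
            mask = ~((UINT64_C(1) << maskbits) - 1);

            if (thread_unsafe)
                    return rte_memseg_walk_thread_unsafe(check_iova, &mask);
            return rte_memseg_walk(check_iova, &mask);
    }

    int __rte_experimental
    rte_mem_check_dma_mask(uint8_t maskbits)
    {
            return check_dma_mask(maskbits, false);
    }

    int __rte_experimental
    rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits)
    {
            return check_dma_mask(maskbits, true);
    }

Callers such as malloc_heap.c would then keep testing the return value for
non-zero, as in the hunks quoted above.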
> > drivers/net/nfp/nfp_net.c | 2 +-
> > lib/librte_eal/common/eal_common_memory.c | 24 +++++++++++++++---
> > lib/librte_eal/common/include/rte_memory.h | 29 +++++++++++++++++++---
> > lib/librte_eal/common/malloc_heap.c | 2 +-
> > lib/librte_eal/rte_eal_version.map | 3 ++-
> > 5 files changed, 51 insertions(+), 9 deletions(-)
> >
>
> <...>
>
> > -/* check memsegs iovas are within a range based on dma mask */
> > -int __rte_experimental rte_mem_check_dma_mask(uint8_t maskbits);
> > +/**
> > + * * @warning
>
> Here and in other places - same issue with extra star.
>
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Check memsegs iovas are within a range based on dma mask.
>
> The comments make it seem like the parameter is an actual DMA mask,
> rather than DMA mask *width*. In fact, you seem to have tripped yourself
> up on that already :)
>
> Suggested rewording:
>
> Check if all currently allocated memory segments are compliant with
> supplied DMA address width.
>
>
Ok.
> > + *
> > + * @param maskbits
> > + * Address width to check against.
> > + */
> > +int __rte_experimental rte_mem_check_dma_mask_safe(uint8_t maskbits);
> > +
> > +/**
> > + * * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Check memsegs iovas are within a range based on dma mask without
> acquiring
> > + * memory_hotplug_lock first.
> > + *
> > + * This function is just for EAL core memory internal use. Drivers
> should
> > + * use the previous safe one.
>
> This is IMO too detailed. Suggested rewording:
>
> Check if all currently allocated memory segments are compliant with
> supplied DMA address width.
>
>
Ok
> @warning This function is not thread-safe and is for internal use only.
>
> > + *
> > + * @param maskbits
> > + * Address width to check against.
> > + */
> > +int __rte_experimental rte_mem_check_dma_mask_unsafe(uint8_t maskbits);
> >
> > /**
> > * * @warning
> > * @b EXPERIMENTAL: this API may change without prior notice
> > *
> > * Set dma mask to use once memory initialization is done.
> > - * Previous function rte_mem_check_dma_mask can not be used
> > + * Previous functions rte_mem_check_dma_mask_safe/unsafe can not be
> used
> > * safely until memory has been initialized.
> > */
> > void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
> > diff --git a/lib/librte_eal/common/malloc_heap.c
> b/lib/librte_eal/common/malloc_heap.c
> > index 711622f19..dd8b983e7 100644
> > --- a/lib/librte_eal/common/malloc_heap.c
> > +++ b/lib/librte_eal/common/malloc_heap.c
> > @@ -335,7 +335,7 @@ alloc_pages_on_heap(struct malloc_heap *heap,
> uint64_t pg_sz, size_t elt_size,
> > * executed. For 2) implies the new memory can not be added.
> > */
> > if (mcfg->dma_maskbits) {
> > - if (rte_mem_check_dma_mask(mcfg->dma_maskbits)) {
> > + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> > /* Currently this can only happen if IOMMU is
> enabled
> > * with RTE_ARCH_X86. It is not safe to use this
> memory
> > * so returning an error here.
> > diff --git a/lib/librte_eal/rte_eal_version.map
> b/lib/librte_eal/rte_eal_version.map
> > index ae24b5c73..f863903b6 100644
> > --- a/lib/librte_eal/rte_eal_version.map
> > +++ b/lib/librte_eal/rte_eal_version.map
> > @@ -296,7 +296,8 @@ EXPERIMENTAL {
> > rte_devargs_remove;
> > rte_devargs_type_count;
> > rte_mem_check_dma_mask;
> > - rte_mem_set_dma_mask;
> > + rte_mem_set_dma_mask_safe;
> > + rte_mem_set_dma_mask_unsafe;
>
> Again, alphabet :)
>
> > rte_eal_cleanup;
> > rte_fbarray_attach;
> > rte_fbarray_destroy;
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory
2018-11-01 10:40 ` Burakov, Anatoly
@ 2018-11-01 13:39 ` Alejandro Lucero
2018-11-01 14:28 ` Burakov, Anatoly
0 siblings, 1 reply; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 13:39 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > If a device reports addressing limitations through a dma mask,
> > the IOVAs for mapped memory needs to be checked out for ensuring
> > correct functionality.
> >
> > Previous patches introduced this DMA check for main memory code
> > currently being used but other options like legacy memory and the
> > no hugepages one need to be also considered.
> >
> > This patch adds the DMA check for those cases.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> IMO this needs to be integrated with patch 5.
>
>
Not sure about this. Patch 5 is a follow-up to patch 4, and this one adds
support for the other memory options EAL supports.
> > lib/librte_eal/linuxapp/eal/eal_memory.c | 17 +++++++++++++++++
> > 1 file changed, 17 insertions(+)
> >
> > diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c
> b/lib/librte_eal/linuxapp/eal/eal_memory.c
> > index fce86fda6..2a3a8c7a3 100644
> > --- a/lib/librte_eal/linuxapp/eal/eal_memory.c
> > +++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
> > @@ -1393,6 +1393,14 @@ eal_legacy_hugepage_init(void)
> >
> > addr = RTE_PTR_ADD(addr, (size_t)page_sz);
> > }
> > + if (mcfg->dma_maskbits) {
> > + if
> (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate memory
> due to DMA mask\n",
>
> I would use suggested rewording from patch 5 :)
>
Ok
>
> > + __func__);
> > + goto fail;
> > + }
> > + }
> > return 0;
> > }
> >
> > @@ -1628,6 +1636,15 @@ eal_legacy_hugepage_init(void)
> > rte_fbarray_destroy(&msl->memseg_arr);
> > }
> >
> > + if (mcfg->dma_maskbits) {
> > + if (rte_mem_check_dma_mask_unsafe(mcfg->dma_maskbits)) {
> > + RTE_LOG(ERR, EAL,
> > + "%s(): couldn't allocate memory due to DMA
> mask\n",
>
> Same as above.
>
> > + __func__);
> > + goto fail;
> > + }
> > + }
> > +
> > return 0;
> >
> > fail:
> >
>
>
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory
2018-11-01 13:39 ` Alejandro Lucero
@ 2018-11-01 14:28 ` Burakov, Anatoly
2018-11-01 14:32 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 14:28 UTC (permalink / raw)
To: Alejandro Lucero; +Cc: dev
On 01-Nov-18 1:39 PM, Alejandro Lucero wrote:
>
>
> On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly
> <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
>
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > If a device reports addressing limitations through a dma mask,
> > the IOVAs for mapped memory needs to be checked out for ensuring
> > correct functionality.
> >
> > Previous patches introduced this DMA check for main memory code
> > currently being used but other options like legacy memory and the
> > no hugepages one need to be also considered.
> >
> > This patch adds the DMA check for those cases.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> <mailto:alejandro.lucero@netronome.com>>
> > ---
>
> IMO this needs to be integrated with patch 5.
>
>
> Not sure about this. patch 5 is a follow-up of patch 4, and this is to
> add support for other EAL supported memory options.
>
So it's a followup of patch 5, adding pretty much the same functionality
(only in a different place). It's pretty safe to say these should be in
the same patch, or at the very least one after the other.
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask
2018-11-01 11:30 ` Alejandro Lucero
@ 2018-11-01 14:32 ` Alejandro Lucero
0 siblings, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 14:32 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 11:30 AM Alejandro Lucero <
alejandro.lucero@netronome.com> wrote:
>
>
> On Thu, Nov 1, 2018 at 10:57 AM Burakov, Anatoly <
> anatoly.burakov@intel.com> wrote:
>
>> On 01-Nov-18 10:48 AM, Alejandro Lucero wrote:
>> >
>> >
>> > On Thu, Nov 1, 2018 at 10:11 AM Burakov, Anatoly
>> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
>> >
>> > On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
>> > > This patch adds the possibility of setting a dma mask to be used
>> > > once the memory initialization is done.
>> > >
>> > > This is currently needed when IOVA mode is set by PCI related
>> > > code and an x86 IOMMU hardware unit is present. Current code
>> calls
>> > > rte_mem_check_dma_mask but it is wrong to do so at that point
>> > > because the memory has not been initialized yet.
>> > >
>> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
>> > <mailto:alejandro.lucero@netronome.com>>
>> > > ---
>> > > lib/librte_eal/common/eal_common_memory.c | 10 ++++++++++
>> > > lib/librte_eal/common/include/rte_memory.h | 10 ++++++++++
>> > > lib/librte_eal/rte_eal_version.map | 1 +
>> > > 3 files changed, 21 insertions(+)
>> > >
>> > > diff --git a/lib/librte_eal/common/eal_common_memory.c
>> > b/lib/librte_eal/common/eal_common_memory.c
>> > > index e0f08f39a..24b72fcb0 100644
>> > > --- a/lib/librte_eal/common/eal_common_memory.c
>> > > +++ b/lib/librte_eal/common/eal_common_memory.c
>> > > @@ -480,6 +480,16 @@ rte_mem_check_dma_mask(uint8_t maskbits)
>> > > return 0;
>> > > }
>> > >
>> > > +/* set dma mask to use when memory initialization is done */
>> > > +void __rte_experimental
>> > > +rte_mem_set_dma_mask(uint8_t maskbits)
>> > > +{
>> > > + struct rte_mem_config *mcfg =
>> > rte_eal_get_configuration()->mem_config;
>> > > +
>> > > + mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
>> > > + RTE_MIN(mcfg->dma_maskbits, maskbits);
>> > > +}
>> > > +
>> > > /* return the number of memory channels */
>> > > unsigned rte_memory_get_nchannel(void)
>> > > {
>> > > diff --git a/lib/librte_eal/common/include/rte_memory.h
>> > b/lib/librte_eal/common/include/rte_memory.h
>> > > index ad3f3cfb0..eff028db1 100644
>> > > --- a/lib/librte_eal/common/include/rte_memory.h
>> > > +++ b/lib/librte_eal/common/include/rte_memory.h
>> > > @@ -466,6 +466,16 @@ unsigned rte_memory_get_nrank(void);
>> > > /* check memsegs iovas are within a range based on dma mask */
>> > > int __rte_experimental rte_mem_check_dma_mask(uint8_t
>> maskbits);
>> > >
>> > > +/**
>> > > + * * @warning
>> > > + * @b EXPERIMENTAL: this API may change without prior notice
>> >
>> > I think there's a copy-paste error here (extra star and indent
>> before
>> > warning).
>> >
>> >
>> > I will fix this.
>> >
>> > Thanks.
>> >
>> >
>> > > + *
>> > > + * Set dma mask to use once memory initialization is done.
>> > > + * Previous function rte_mem_check_dma_mask can not be used
>> > > + * safely until memory has been initialized.
>> >
>> > IMO this really requires a big bold warning saying that this
>> function
>> > must only be called early during initialization, before memory is
>> > initialized, and not to be called in user code.
>> >
>> >
>> > what about adding a new EAL variable for keeping memory initialization
>> > status?
>> > It would be something like "uninit" until is changed to "done" after
>> the
>> > memory initialization is completed.
>> > That variable would help to avoid effective calls to set the dma mask
>> > after initialization.
>>
>> I'm not opposed to it in principle, however, this is a slippery slope
>> towards having each and every subsystem storing their init status :)
>>
>> That said, while it's not a complete solution because it won't prevent
>> spurious calls to this function _after_ memory has initialized, there is
>> a variable in internal_config that you can use to deny calls to this
>> function after EAL init is complete (see internal_config->init_complete).
>>
>> You could also store a static variable in eal_common_memory.c, and
>> change it at the end of rte_eal_memory_init() call, but that does not
>> really complete the memory initialization, because we then go to
>> rte_eal_malloc_heap_init(), which actually completes the memory init.
>>
>> I would greatly prefer using internal_config->init_complete - it is
>> enough to forbid user code from calling it, while driver/DPDK developers
>> presumably know what they're doing enough to read the warning and not
>> try to call this after memory init :)
>>
>
> I agree that we should avoid new status variables as I proposed, and in
> this case
> I think using internal_config->init_complete will be enough.
>
>
I'm wrong. And you already said it: this will not avoid bad usage from PMDs.
I think I will follow your first advice and add a warning about its usage.
> I will add the check in the next version.
>
> Thanks
>
>
>>
>> >
>> > > + */
>> > > +void __rte_experimental rte_mem_set_dma_mask(uint8_t maskbits);
>> > > +
>> > > /**
>> > > * Drivers based on uio will not load unless physical
>> > > * addresses are obtainable. It is only possible to get
>> > > diff --git a/lib/librte_eal/rte_eal_version.map
>> > b/lib/librte_eal/rte_eal_version.map
>> > > index ef8126a97..ae24b5c73 100644
>> > > --- a/lib/librte_eal/rte_eal_version.map
>> > > +++ b/lib/librte_eal/rte_eal_version.map
>> > > @@ -296,6 +296,7 @@ EXPERIMENTAL {
>> > > rte_devargs_remove;
>> > > rte_devargs_type_count;
>> > > rte_mem_check_dma_mask;
>> > > + rte_mem_set_dma_mask;
>> >
>> > Same, this needs to be alphabetic.
>> >
>> > > rte_eal_cleanup;
>> > > rte_fbarray_attach;
>> > > rte_fbarray_destroy;
>> > >
>> >
>> >
>> > --
>> > Thanks,
>> > Anatoly
>> >
>>
>>
>> --
>> Thanks,
>> Anatoly
>>
>
* Re: [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory
2018-11-01 14:28 ` Burakov, Anatoly
@ 2018-11-01 14:32 ` Alejandro Lucero
0 siblings, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 14:32 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: dev
On Thu, Nov 1, 2018 at 2:28 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 01-Nov-18 1:39 PM, Alejandro Lucero wrote:
> >
> >
> > On Thu, Nov 1, 2018 at 10:40 AM Burakov, Anatoly
> > <anatoly.burakov@intel.com <mailto:anatoly.burakov@intel.com>> wrote:
> >
> > On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > > If a device reports addressing limitations through a dma mask,
> > > the IOVAs for mapped memory needs to be checked out for ensuring
> > > correct functionality.
> > >
> > > Previous patches introduced this DMA check for main memory code
> > > currently being used but other options like legacy memory and the
> > > no hugepages one need to be also considered.
> > >
> > > This patch adds the DMA check for those cases.
> > >
> > > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> > <mailto:alejandro.lucero@netronome.com>>
> > > ---
> >
> > IMO this needs to be integrated with patch 5.
> >
> >
> > Not sure about this. patch 5 is a follow-up of patch 4, and this is to
> > add support for other EAL supported memory options.
> >
>
> So it's a followup of patch 5, adding pretty much the same functionality
> (only in a different place). It's pretty safe to say these should be in
> the same patch, or at the very least one after the other.
>
>
Ok. I'll do so.
Thanks
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-11-01 10:08 ` Burakov, Anatoly
2018-11-01 10:40 ` Alejandro Lucero
@ 2018-11-01 14:50 ` Thomas Monjalon
2018-11-01 15:03 ` Burakov, Anatoly
1 sibling, 1 reply; 34+ messages in thread
From: Thomas Monjalon @ 2018-11-01 14:50 UTC (permalink / raw)
To: Burakov, Anatoly, Alejandro Lucero; +Cc: dev
01/11/2018 11:08, Burakov, Anatoly:
> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> > Current name rte_eal_check_dma_mask does not follow the naming
> > used in the rest of the file.
> >
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> > ---
>
> I don't think this belongs in the _mem_ namespace. It is usually used
> for things to do with memory, while the DMA mask IMO sits firmly in the
> domain of EAL, specifically bus subsystem.
It is a memory allocation check, isn't it?
I think rte_mem_ prefix is more meaningful.
Anyway, we should avoid rte_eal which is too vague.
For device management, we use rte_bus, rte_dev, etc.
> However, i don't have strong feelings one way or the other, so if you do
> decide to go forward with this naming...
* Re: [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-11-01 14:50 ` Thomas Monjalon
@ 2018-11-01 15:03 ` Burakov, Anatoly
2018-11-01 16:18 ` Alejandro Lucero
0 siblings, 1 reply; 34+ messages in thread
From: Burakov, Anatoly @ 2018-11-01 15:03 UTC (permalink / raw)
To: Thomas Monjalon, Alejandro Lucero; +Cc: dev
On 01-Nov-18 2:50 PM, Thomas Monjalon wrote:
> 01/11/2018 11:08, Burakov, Anatoly:
>> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
>>> Current name rte_eal_check_dma_mask does not follow the naming
>>> used in the rest of the file.
>>>
>>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
>>> ---
>>
>> I don't think this belongs in the _mem_ namespace. It is usually used
>> for things to do with memory, while the DMA mask IMO sits firmly in the
>> domain of EAL, specifically bus subsystem.
>
> It is a memory allocation check, isn't it?
>
> I think rte_mem_ prefix is more meaningful.
> Anyway, we should avoid rte_eal which is too vague.
> For device management, we use rte_bus, rte_dev, etc.
>
No strong feelings here, you can keep the mem namespace. Dem alphabets
tho...
--
Thanks,
Anatoly
* Re: [dpdk-dev] [PATCH 2/7] mem: use proper prefix
2018-11-01 15:03 ` Burakov, Anatoly
@ 2018-11-01 16:18 ` Alejandro Lucero
0 siblings, 0 replies; 34+ messages in thread
From: Alejandro Lucero @ 2018-11-01 16:18 UTC (permalink / raw)
To: Burakov, Anatoly; +Cc: Thomas Monjalon, dev
On Thu, Nov 1, 2018 at 3:03 PM Burakov, Anatoly <anatoly.burakov@intel.com>
wrote:
> On 01-Nov-18 2:50 PM, Thomas Monjalon wrote:
> > 01/11/2018 11:08, Burakov, Anatoly:
> >> On 31-Oct-18 5:29 PM, Alejandro Lucero wrote:
> >>> Current name rte_eal_check_dma_mask does not follow the naming
> >>> used in the rest of the file.
> >>>
> >>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> >>> ---
> >>
> >> I don't think this belongs in the _mem_ namespace. It is usually used
> >> for things to do with memory, while the DMA mask IMO sits firmly in the
> >> domain of EAL, specifically bus subsystem.
> >
> > It is a memory allocation check, isn't it?
> >
> > I think rte_mem_ prefix is more meaningful.
> > Anyway, we should avoid rte_eal which is too vague.
> > For device management, we use rte_bus, rte_dev, etc.
> >
>
> No strong feelings here, you can keep the mem namespace. Dem alphabets
> tho...
>
>
Sure. I'll send the next version later today.
Thanks
> --
> Thanks,
> Anatoly
>
* Re: [dpdk-dev] [PATCH 0/7] fix DMA mask check
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
` (8 preceding siblings ...)
2018-11-01 10:13 ` Mattias Rönnblom
@ 2018-11-01 17:28 ` Ferruh Yigit
9 siblings, 0 replies; 34+ messages in thread
From: Ferruh Yigit @ 2018-11-01 17:28 UTC (permalink / raw)
To: Alejandro Lucero, dev
On 10/31/2018 5:29 PM, Alejandro Lucero wrote:
> A patchset sent introducing DMA mask checks has several critical
> issues precluding apps to execute. The patchset was reviewed and
> finally accepted after three versions. Obviously it did not go
> through the proper testing what can be explained, at least from my
> side, due to the big changes to the memory initialization code these
> last months. It turns out the patchset did work with legacy memory
> and I'm afraid that was mainly my testing.
>
> This patchset should solve the main problems reported:
>
> - deadlock duriing initialization
> - segmentation fault with secondary processes
>
> For solving the deadlock, a new API is introduced:
>
> rte_mem_check_dma_mask_safe/unsafe
>
> making the previous rte_mem_check_dma_mask the one those new functions
> end calling. A boolean param is used for calling rte_memseg_walk thread
> safe or thread unsafe. This second option is needed for avoiding the
> deadlock.
>
> For the secondary processes problem, the call to check the dma mask is
> avoided from code being executed before the memory initialization.
> Instead, a new API function, rte_mem_set_dma_mask is introduced, which
> will be used in those cases. The dma mask check is done once the memory
> initialization is completed.
>
> This last change implies the IOVA mode can not be set depending on IOMMU
> hardware limitations, and it is assumed IOVA VA is possible. If the dma
> mask check reports a problem after memory initilization, the error
> message includes now advice for trying with --iova-mode option set to
> pa.
>
> The patchet also includes the dma mask check for legacy memory and the
> no hugepage option.
>
> Finally, all the DMA mask API has been updated for using the same prefix
> than other EAL memory code.
>
> An initial version of this patchset has been tested by Intel DPDK
> Validation team and it seems it solves all the problems reported. This
> final patchset has the same functionality with minor changes. I have
> successfully tested the patchset with my limited testbench.
>
> Alejandro Lucero (7):
> mem: fix call to DMA mask check
> mem: use proper prefix
> mem: add function for setting DMA mask
> bus/pci: avoid call to DMA mask check
> mem: modify error message for DMA mask check
> mem: add safe and unsafe versions for checking DMA mask
> eal/mem: use DMA mask check for legacy memory
Tested-by: Ferruh Yigit <ferruh.yigit@intel.com>
* Re: [dpdk-dev] [PATCH 0/7] fix DMA mask check
2018-10-31 17:47 ` [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
@ 2018-11-02 5:52 ` Hyong Youb Kim
0 siblings, 0 replies; 34+ messages in thread
From: Hyong Youb Kim @ 2018-11-02 5:52 UTC (permalink / raw)
To: Alejandro Lucero; +Cc: dev
On Wed, Oct 31, 2018 at 05:47:40PM +0000, Alejandro Lucero wrote:
> On Wed, Oct 31, 2018 at 5:29 PM Alejandro Lucero <
> alejandro.lucero@netronome.com> wrote:
>
> > A patchset sent introducing DMA mask checks has several critical
> > issues precluding apps to execute. The patchset was reviewed and
> > finally accepted after three versions. Obviously it did not go
> > through the proper testing what can be explained, at least from my
> > side, due to the big changes to the memory initialization code these
> > last months. It turns out the patchset did work with legacy memory
> > and I'm afraid that was mainly my testing.
[...]
> >
> I have just realized that there is a minor problem with compiling one patch
> at a time, but it does compile when all applied.
>
> I will fix this in another version but I will wait until review and testing
> has been reported.
This series fixes the deadlock problem in our testbed too.
-Hyong
end of thread
Thread overview: 34+ messages
2018-10-31 17:29 [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 1/7] mem: fix call to " Alejandro Lucero
2018-11-01 10:11 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 2/7] mem: use proper prefix Alejandro Lucero
2018-11-01 10:08 ` Burakov, Anatoly
2018-11-01 10:40 ` Alejandro Lucero
2018-11-01 14:50 ` Thomas Monjalon
2018-11-01 15:03 ` Burakov, Anatoly
2018-11-01 16:18 ` Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 3/7] mem: add function for setting DMA mask Alejandro Lucero
2018-11-01 10:11 ` Burakov, Anatoly
2018-11-01 10:48 ` Alejandro Lucero
2018-11-01 10:57 ` Burakov, Anatoly
2018-11-01 11:30 ` Alejandro Lucero
2018-11-01 14:32 ` Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 4/7] bus/pci: avoid call to DMA mask check Alejandro Lucero
2018-11-01 10:12 ` Burakov, Anatoly
2018-10-31 17:29 ` [dpdk-dev] [PATCH 5/7] mem: modify error message for " Alejandro Lucero
2018-11-01 10:29 ` Burakov, Anatoly
2018-11-01 11:03 ` Alejandro Lucero
2018-11-01 11:12 ` Burakov, Anatoly
2018-11-01 11:32 ` Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 6/7] mem: add safe and unsafe versions for checking DMA mask Alejandro Lucero
2018-11-01 10:38 ` Burakov, Anatoly
2018-11-01 13:34 ` Alejandro Lucero
2018-10-31 17:29 ` [dpdk-dev] [PATCH 7/7] eal/mem: use DMA mask check for legacy memory Alejandro Lucero
2018-11-01 10:40 ` Burakov, Anatoly
2018-11-01 13:39 ` Alejandro Lucero
2018-11-01 14:28 ` Burakov, Anatoly
2018-11-01 14:32 ` Alejandro Lucero
2018-10-31 17:47 ` [dpdk-dev] [PATCH 0/7] fix DMA mask check Alejandro Lucero
2018-11-02 5:52 ` Hyong Youb Kim
2018-11-01 10:13 ` Mattias Rönnblom
2018-11-01 17:28 ` Ferruh Yigit