* [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig walk variant
@ 2018-06-12 9:46 Anatoly Burakov
2018-06-12 9:46 ` [dpdk-dev] [PATCH 2/3] mem: provide thread-unsafe memseg " Anatoly Burakov
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: Anatoly Burakov @ 2018-06-12 9:46 UTC (permalink / raw)
To: dev
Sometimes, user code needs to walk the memseg list while inside
a memory-related callback. Rather than making everyone copy the
same iteration code around and depend on DPDK internals, provide
an official way to do memseg_contig_walk() inside callbacks.
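
For illustration, here is a minimal sketch of how code running inside
a memory-related callback might use the new function; the iterator and
helper names below are invented for the example and are not part of
this patch (as an experimental API, it may also require
ALLOW_EXPERIMENTAL_API to be defined in user code):

#include <rte_memory.h>

/* sum up the length of every VA-contiguous area */
static int
sum_contig_len(const struct rte_memseg_list *msl,
                const struct rte_memseg *ms, size_t len, void *arg)
{
        size_t *total = arg;

        (void) msl;
        (void) ms;
        *total += len;
        return 0; /* 0 continues the walk, >0 stops it, <0 reports an error */
}

/* helper meant to be called from within a memory-related callback,
 * where taking the hotplug lock again would deadlock
 */
static size_t
total_contig_len(void)
{
        size_t total = 0;

        if (rte_memseg_contig_walk_thread_unsafe(sum_contig_len, &total) < 0)
                return 0; /* iterator reported an error */
        return total;
}
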
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/librte_eal/common/eal_common_memory.c | 28 ++++++++++++----------
lib/librte_eal/common/include/rte_memory.h | 18 ++++++++++++++
lib/librte_eal/rte_eal_version.map | 1 +
3 files changed, 35 insertions(+), 12 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 4f0688f9d..e3320a746 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -788,14 +788,11 @@ rte_mem_lock_page(const void *virt)
}
int __rte_experimental
-rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
+rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int i, ms_idx, ret = 0;
- /* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
-
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
struct rte_memseg_list *msl = &mcfg->memsegs[i];
const struct rte_memseg *ms;
@@ -820,19 +817,26 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
len = n_segs * msl->page_sz;
ret = func(msl, ms, len, arg);
- if (ret < 0) {
- ret = -1;
- goto out;
- } else if (ret > 0) {
- ret = 1;
- goto out;
- }
+ if (ret)
+ return ret;
ms_idx = rte_fbarray_find_next_used(arr,
ms_idx + n_segs);
}
}
-out:
+ return 0;
+}
+
+int __rte_experimental
+rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ int ret = 0;
+
+ /* do not allow allocations/frees/init while we iterate */
+ rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ ret = rte_memseg_contig_walk_thread_unsafe(func, arg);
rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+
return ret;
}
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index aab9f6fe5..aeba38bfa 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -263,6 +263,24 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg);
int __rte_experimental
rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg);
+/**
+ * Walk each VA-contiguous area without performing any locking.
+ *
+ * @note This function does not perform any locking, and is only safe to call
+ * from within memory-related callback functions.
+ *
+ * @param func
+ * Iterator function
+ * @param arg
+ * Argument passed to iterator
+ * @return
+ * 0 if walked over the entire list
+ * 1 if stopped by the user
+ * -1 if user function reported error
+ */
+int __rte_experimental
+rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg);
+
/**
* Dump the physical memory layout to a file.
*
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index f7dd0e7bc..98bfbe796 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -286,6 +286,7 @@ EXPERIMENTAL {
rte_mem_virt2memseg;
rte_mem_virt2memseg_list;
rte_memseg_contig_walk;
+ rte_memseg_contig_walk_thread_unsafe;
rte_memseg_list_walk;
rte_memseg_walk;
rte_mp_action_register;
--
2.17.1
* [dpdk-dev] [PATCH 2/3] mem: provide thread-unsafe memseg walk variant
2018-06-12 9:46 [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig walk variant Anatoly Burakov
@ 2018-06-12 9:46 ` Anatoly Burakov
2018-06-12 9:46 ` [dpdk-dev] [PATCH 3/3] mem: provide thread-unsafe memseg list " Anatoly Burakov
2018-07-13 9:21 ` [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig " Thomas Monjalon
2 siblings, 0 replies; 4+ messages in thread
From: Anatoly Burakov @ 2018-06-12 9:46 UTC (permalink / raw)
To: dev
Sometimes, user code needs to walk the memseg list while inside
a memory-related callback. Rather than making everyone copy the
same iteration code around and depend on DPDK internals, provide
an official way to do memseg_walk() inside callbacks.

Also, remove the existing reimplementation from the sPAPR VFIO
code and use the new API instead.
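
An illustrative sketch (names invented for the example, not part of
this patch); note that the per-segment iterator takes no length
argument, unlike the contiguous-walk variant:

#include <rte_memory.h>

/* count every allocated memory segment */
static int
count_segs(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
                void *arg)
{
        unsigned int *n_segs = arg;

        (void) msl;
        (void) ms;
        (*n_segs)++;
        return 0; /* keep walking */
}

/* helper meant to be called from within a memory-related callback */
static unsigned int
total_seg_count(void)
{
        unsigned int n_segs = 0;

        rte_memseg_walk_thread_unsafe(count_segs, &n_segs);
        return n_segs;
}
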
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/librte_eal/common/eal_common_memory.c | 28 ++++++++------
lib/librte_eal/common/include/rte_memory.h | 18 +++++++++
lib/librte_eal/linuxapp/eal/eal_vfio.c | 43 +++-------------------
lib/librte_eal/rte_eal_version.map | 1 +
4 files changed, 40 insertions(+), 50 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index e3320a746..afe0d5b57 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -841,14 +841,11 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
}
int __rte_experimental
-rte_memseg_walk(rte_memseg_walk_t func, void *arg)
+rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int i, ms_idx, ret = 0;
- /* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
-
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
struct rte_memseg_list *msl = &mcfg->memsegs[i];
const struct rte_memseg *ms;
@@ -863,18 +860,25 @@ rte_memseg_walk(rte_memseg_walk_t func, void *arg)
while (ms_idx >= 0) {
ms = rte_fbarray_get(arr, ms_idx);
ret = func(msl, ms, arg);
- if (ret < 0) {
- ret = -1;
- goto out;
- } else if (ret > 0) {
- ret = 1;
- goto out;
- }
+ if (ret)
+ return ret;
ms_idx = rte_fbarray_find_next_used(arr, ms_idx + 1);
}
}
-out:
+ return 0;
+}
+
+int __rte_experimental
+rte_memseg_walk(rte_memseg_walk_t func, void *arg)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ int ret = 0;
+
+ /* do not allow allocations/frees/init while we iterate */
+ rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ ret = rte_memseg_walk_thread_unsafe(func, arg);
rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+
return ret;
}
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index aeba38bfa..c5a84c333 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -263,6 +263,24 @@ rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg);
int __rte_experimental
rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg);
+/**
+ * Walk list of all memsegs without performing any locking.
+ *
+ * @note This function does not perform any locking, and is only safe to call
+ * from within memory-related callback functions.
+ *
+ * @param func
+ * Iterator function
+ * @param arg
+ * Argument passed to iterator
+ * @return
+ * 0 if walked over the entire list
+ * 1 if stopped by the user
+ * -1 if user function reported error
+ */
+int __rte_experimental
+rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg);
+
/**
* Walk each VA-contiguous area without performing any locking.
*
diff --git a/lib/librte_eal/linuxapp/eal/eal_vfio.c b/lib/librte_eal/linuxapp/eal/eal_vfio.c
index a2bbdfbf4..14c9332e9 100644
--- a/lib/librte_eal/linuxapp/eal/eal_vfio.c
+++ b/lib/librte_eal/linuxapp/eal/eal_vfio.c
@@ -87,42 +87,6 @@ static const struct vfio_iommu_type iommu_types[] = {
},
};
-/* for sPAPR IOMMU, we will need to walk memseg list, but we cannot use
- * rte_memseg_walk() because by the time we enter callback we will be holding a
- * write lock, so regular rte-memseg_walk will deadlock. copying the same
- * iteration code everywhere is not ideal as well. so, use a lockless copy of
- * memseg walk here.
- */
-static int
-memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
-{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- int i, ms_idx, ret = 0;
-
- for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
- struct rte_memseg_list *msl = &mcfg->memsegs[i];
- const struct rte_memseg *ms;
- struct rte_fbarray *arr;
-
- if (msl->memseg_arr.count == 0)
- continue;
-
- arr = &msl->memseg_arr;
-
- ms_idx = rte_fbarray_find_next_used(arr, 0);
- while (ms_idx >= 0) {
- ms = rte_fbarray_get(arr, ms_idx);
- ret = func(msl, ms, arg);
- if (ret < 0)
- return -1;
- if (ret > 0)
- return 1;
- ms_idx = rte_fbarray_find_next_used(arr, ms_idx + 1);
- }
- }
- return 0;
-}
-
static int
is_null_map(const struct user_mem_map *map)
{
@@ -1357,7 +1321,8 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
/* check if window size needs to be adjusted */
memset(&param, 0, sizeof(param));
- if (memseg_walk_thread_unsafe(vfio_spapr_window_size_walk,
+ /* we're inside a callback so use thread-unsafe version */
+ if (rte_memseg_walk_thread_unsafe(vfio_spapr_window_size_walk,
&param) < 0) {
RTE_LOG(ERR, EAL, "Could not get window size\n");
ret = -1;
@@ -1386,7 +1351,9 @@ vfio_spapr_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
ret = -1;
goto out;
}
- if (memseg_walk_thread_unsafe(vfio_spapr_map_walk,
+ /* we're inside a callback, so use thread-unsafe version
+ */
+ if (rte_memseg_walk_thread_unsafe(vfio_spapr_map_walk,
&vfio_container_fd) < 0) {
RTE_LOG(ERR, EAL, "Could not recreate DMA maps\n");
ret = -1;
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 98bfbe796..72d32fc39 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -289,6 +289,7 @@ EXPERIMENTAL {
rte_memseg_contig_walk_thread_unsafe;
rte_memseg_list_walk;
rte_memseg_walk;
+ rte_memseg_walk_thread_unsafe;
rte_mp_action_register;
rte_mp_action_unregister;
rte_mp_reply;
--
2.17.1
* [dpdk-dev] [PATCH 3/3] mem: provide thread-unsafe memseg list walk variant
2018-06-12 9:46 [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig walk variant Anatoly Burakov
2018-06-12 9:46 ` [dpdk-dev] [PATCH 2/3] mem: provide thread-unsafe memseg " Anatoly Burakov
@ 2018-06-12 9:46 ` Anatoly Burakov
2018-07-13 9:21 ` [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig " Thomas Monjalon
2 siblings, 0 replies; 4+ messages in thread
From: Anatoly Burakov @ 2018-06-12 9:46 UTC (permalink / raw)
To: dev
Sometimes, user code needs to walk the memseg list while inside
a memory-related callback. Rather than making everyone copy the
same iteration code around and depend on DPDK internals, provide
an official way to do memseg_list_walk() inside callbacks.

Also, remove the existing reimplementation from the memalloc
code and use the new API instead.
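
An illustrative sketch (names invented for the example, not part of
this patch) of the per-list variant, whose iterator receives only the
memseg list:

#include <rte_memory.h>

/* count every memseg list */
static int
count_lists(const struct rte_memseg_list *msl, void *arg)
{
        unsigned int *n_lists = arg;

        (void) msl;
        (*n_lists)++;
        return 0; /* keep walking */
}

/* helper meant to be called from within a memory-related callback */
static unsigned int
total_list_count(void)
{
        unsigned int n_lists = 0;

        rte_memseg_list_walk_thread_unsafe(count_lists, &n_lists);
        return n_lists;
}
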
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
lib/librte_eal/common/eal_common_memory.c | 29 +++++++++--------
lib/librte_eal/common/include/rte_memory.h | 18 +++++++++++
lib/librte_eal/linuxapp/eal/eal_memalloc.c | 37 +++++-----------------
lib/librte_eal/rte_eal_version.map | 1 +
4 files changed, 43 insertions(+), 42 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index afe0d5b57..6c4a8d40b 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -883,14 +883,11 @@ rte_memseg_walk(rte_memseg_walk_t func, void *arg)
}
int __rte_experimental
-rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
+rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int i, ret = 0;
- /* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
-
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
struct rte_memseg_list *msl = &mcfg->memsegs[i];
@@ -898,17 +895,23 @@ rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
continue;
ret = func(msl, arg);
- if (ret < 0) {
- ret = -1;
- goto out;
- }
- if (ret > 0) {
- ret = 1;
- goto out;
- }
+ if (ret)
+ return ret;
}
-out:
+ return 0;
+}
+
+int __rte_experimental
+rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ int ret = 0;
+
+ /* do not allow allocations/frees/init while we iterate */
+ rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ ret = rte_memseg_list_walk_thread_unsafe(func, arg);
rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+
return ret;
}
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index c5a84c333..c4b7f4cff 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -299,6 +299,24 @@ rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg);
int __rte_experimental
rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg);
+/**
+ * Walk each allocated memseg list without performing any locking.
+ *
+ * @note This function does not perform any locking, and is only safe to call
+ * from within memory-related callback functions.
+ *
+ * @param func
+ * Iterator function
+ * @param arg
+ * Argument passed to iterator
+ * @return
+ * 0 if walked over the entire list
+ * 1 if stopped by the user
+ * -1 if user function reported error
+ */
+int __rte_experimental
+rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg);
+
/**
* Dump the physical memory layout to a file.
*
diff --git a/lib/librte_eal/linuxapp/eal/eal_memalloc.c b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
index 8c11f98c9..1ebc4b571 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memalloc.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memalloc.c
@@ -171,32 +171,6 @@ get_file_size(int fd)
return st.st_size;
}
-/* we cannot use rte_memseg_list_walk() here because we will be holding a
- * write lock whenever we enter every function in this file, however copying
- * the same iteration code everywhere is not ideal as well. so, use a lockless
- * copy of memseg list walk here.
- */
-static int
-memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
-{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- int i, ret = 0;
-
- for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
- struct rte_memseg_list *msl = &mcfg->memsegs[i];
-
- if (msl->base_va == NULL)
- continue;
-
- ret = func(msl, arg);
- if (ret < 0)
- return -1;
- if (ret > 0)
- return 1;
- }
- return 0;
-}
-
/* returns 1 on successful lock, 0 on unsuccessful lock, -1 on error */
static int lock(int fd, int type)
{
@@ -878,7 +852,8 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
wa.socket = socket;
wa.segs_allocated = 0;
- ret = memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa);
+ /* memalloc is locked, so it's safe to use thread-unsafe version */
+ ret = rte_memseg_list_walk_thread_unsafe(alloc_seg_walk, &wa);
if (ret == 0) {
RTE_LOG(ERR, EAL, "%s(): couldn't find suitable memseg_list\n",
__func__);
@@ -943,7 +918,10 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
wa.ms = cur;
wa.hi = hi;
- walk_res = memseg_list_walk_thread_unsafe(free_seg_walk, &wa);
+ /* memalloc is locked, so it's safe to use thread-unsafe version
+ */
+ walk_res = rte_memseg_list_walk_thread_unsafe(free_seg_walk,
+ &wa);
if (walk_res == 1)
continue;
if (walk_res == 0)
@@ -1230,7 +1208,8 @@ eal_memalloc_sync_with_primary(void)
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
return 0;
- if (memseg_list_walk_thread_unsafe(sync_walk, NULL))
+ /* memalloc is locked, so it's safe to call thread-unsafe version */
+ if (rte_memseg_list_walk_thread_unsafe(sync_walk, NULL))
return -1;
return 0;
}
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 72d32fc39..592ffb867 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -288,6 +288,7 @@ EXPERIMENTAL {
rte_memseg_contig_walk;
rte_memseg_contig_walk_thread_unsafe;
rte_memseg_list_walk;
+ rte_memseg_list_walk_thread_unsafe;
rte_memseg_walk;
rte_memseg_walk_thread_unsafe;
rte_mp_action_register;
--
2.17.1
* Re: [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig walk variant
2018-06-12 9:46 [dpdk-dev] [PATCH 1/3] mem: provide thread-unsafe contig walk variant Anatoly Burakov
2018-06-12 9:46 ` [dpdk-dev] [PATCH 2/3] mem: provide thread-unsafe memseg " Anatoly Burakov
2018-06-12 9:46 ` [dpdk-dev] [PATCH 3/3] mem: provide thread-unsafe memseg list " Anatoly Burakov
@ 2018-07-13 9:21 ` Thomas Monjalon
2 siblings, 0 replies; 4+ messages in thread
From: Thomas Monjalon @ 2018-07-13 9:21 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev
12/06/2018 11:46, Anatoly Burakov:
> Sometimes, user code needs to walk the memseg list while inside
> a memory-related callback. Rather than making everyone copy the
> same iteration code around and depend on DPDK internals, provide
> an official way to do memseg_contig_walk() inside callbacks.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Series applied, thanks