From: Anatoly Burakov <anatoly.burakov@intel.com>
To: dev@dpdk.org
Cc: Hemant Agrawal <hemant.agrawal@nxp.com>,
Shreyansh Jain <shreyansh.jain@nxp.com>,
Matan Azrad <matan@mellanox.com>,
Shahaf Shuler <shahafs@mellanox.com>,
Yongseok Koh <yskoh@mellanox.com>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Tiwei Bie <tiwei.bie@intel.com>,
Zhihong Wang <zhihong.wang@intel.com>,
Bruce Richardson <bruce.richardson@intel.com>,
thomas@monjalon.net, david.marchand@redhat.com,
stephen@networkplumber.org
Subject: [dpdk-dev] [PATCH v4 1/8] eal: add API to lock/unlock memory hotplug
Date: Fri, 5 Jul 2019 14:10:27 +0100
Message-ID: <d6441ec8f00c9b51093ceb8024a0a45cb1157655.1562332112.git.anatoly.burakov@intel.com>
In-Reply-To: <cover.1562332112.git.anatoly.burakov@intel.com>
Currently, memory hotplug is locked automatically by all
memory-related _walk() functions, but sometimes the memory
subsystem needs to be locked outside of them. There is no
public API to do that, which creates a dependency on the
shared memory config being public. Fix this by introducing a
new API to lock/unlock the memory hotplug subsystem.

Create a new common file for all things mem config, add a new
API namespace rte_mcfg_*, and replace all usages of the locks
with the new API.
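
Below is a minimal usage sketch (not part of the patch) of how a
caller could hold the hotplug lock across several thread-unsafe
iterations once this API is in place. The count_seg() callback and
scan_memory() wrapper are hypothetical; only rte_mcfg_mem_read_lock(),
rte_mcfg_mem_read_unlock() and rte_memseg_walk_thread_unsafe() come
from EAL:

    #include <rte_common.h>
    #include <rte_eal_memconfig.h>
    #include <rte_memory.h>

    /* hypothetical callback: count memory segments */
    static int
    count_seg(const struct rte_memseg_list *msl __rte_unused,
              const struct rte_memseg *ms __rte_unused, void *arg)
    {
            (*(int *)arg)++;
            return 0;
    }

    static int
    scan_memory(void)
    {
            int n = 0;

            /* take the hotplug read lock once for the whole iteration */
            rte_mcfg_mem_read_lock();
            if (rte_memseg_walk_thread_unsafe(count_seg, &n) < 0) {
                    rte_mcfg_mem_read_unlock();
                    return -1;
            }
            rte_mcfg_mem_read_unlock();
            return n;
    }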
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/bus/fslmc/fslmc_vfio.c | 8 ++--
drivers/net/mlx4/mlx4_mr.c | 11 +++--
drivers/net/mlx5/mlx5_mr.c | 11 +++--
.../net/virtio/virtio_user/virtio_user_dev.c | 7 ++-
lib/librte_eal/common/eal_common_mcfg.c | 34 +++++++++++++++
lib/librte_eal/common/eal_common_memory.c | 43 ++++++++-----------
.../common/include/rte_eal_memconfig.h | 24 +++++++++++
lib/librte_eal/common/malloc_heap.c | 14 +++---
lib/librte_eal/common/meson.build | 1 +
lib/librte_eal/common/rte_malloc.c | 32 ++++++--------
lib/librte_eal/freebsd/eal/Makefile | 1 +
lib/librte_eal/linux/eal/Makefile | 1 +
lib/librte_eal/linux/eal/eal_vfio.c | 16 +++----
lib/librte_eal/rte_eal_version.map | 4 ++
14 files changed, 125 insertions(+), 82 deletions(-)
create mode 100644 lib/librte_eal/common/eal_common_mcfg.c
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 1aae56fa9..44e4fa6e2 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -347,14 +347,12 @@ fslmc_dmamap_seg(const struct rte_memseg_list *msl __rte_unused,
int rte_fslmc_vfio_dmamap(void)
{
int i = 0, ret;
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- rte_rwlock_t *mem_lock = &mcfg->memory_hotplug_lock;
/* Lock before parsing and registering callback to memory subsystem */
- rte_rwlock_read_lock(mem_lock);
+ rte_mcfg_mem_read_lock();
if (rte_memseg_walk(fslmc_dmamap_seg, &i) < 0) {
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
return -1;
}
@@ -378,7 +376,7 @@ int rte_fslmc_vfio_dmamap(void)
/* Existing segments have been mapped and memory callback for hotplug
* has been installed.
*/
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
return 0;
}
diff --git a/drivers/net/mlx4/mlx4_mr.c b/drivers/net/mlx4/mlx4_mr.c
index 48d458ad4..80827ce75 100644
--- a/drivers/net/mlx4/mlx4_mr.c
+++ b/drivers/net/mlx4/mlx4_mr.c
@@ -593,7 +593,6 @@ mlx4_mr_create_primary(struct rte_eth_dev *dev, struct mlx4_mr_cache *entry,
uintptr_t addr)
{
struct mlx4_priv *priv = dev->data->dev_private;
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
const struct rte_memseg_list *msl;
const struct rte_memseg *ms;
struct mlx4_mr *mr = NULL;
@@ -696,7 +695,7 @@ mlx4_mr_create_primary(struct rte_eth_dev *dev, struct mlx4_mr_cache *entry,
* just single page. If not, go on with the big chunk atomically from
* here.
*/
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
data_re = data;
if (len > msl->page_sz &&
!rte_memseg_contig_walk(mr_find_contig_memsegs_cb, &data_re)) {
@@ -714,7 +713,7 @@ mlx4_mr_create_primary(struct rte_eth_dev *dev, struct mlx4_mr_cache *entry,
*/
data.start = RTE_ALIGN_FLOOR(addr, msl->page_sz);
data.end = data.start + msl->page_sz;
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
mr_free(mr);
goto alloc_resources;
}
@@ -734,7 +733,7 @@ mlx4_mr_create_primary(struct rte_eth_dev *dev, struct mlx4_mr_cache *entry,
DEBUG("port %u found MR for %p on final lookup, abort",
dev->data->port_id, (void *)addr);
rte_rwlock_write_unlock(&priv->mr.rwlock);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
/*
* Must be unlocked before calling rte_free() because
* mlx4_mr_mem_event_free_cb() can be called inside.
@@ -802,12 +801,12 @@ mlx4_mr_create_primary(struct rte_eth_dev *dev, struct mlx4_mr_cache *entry,
/* Lookup can't fail. */
assert(entry->lkey != UINT32_MAX);
rte_rwlock_write_unlock(&priv->mr.rwlock);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return entry->lkey;
err_mrlock:
rte_rwlock_write_unlock(&priv->mr.rwlock);
err_memlock:
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
err_nolock:
/*
* In case of error, as this can be called in a datapath, a warning
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 66e8e874e..872d0591e 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -580,7 +580,6 @@ mlx5_mr_create_primary(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_ibv_shared *sh = priv->sh;
struct mlx5_dev_config *config = &priv->config;
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
const struct rte_memseg_list *msl;
const struct rte_memseg *ms;
struct mlx5_mr *mr = NULL;
@@ -684,7 +683,7 @@ mlx5_mr_create_primary(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
* just single page. If not, go on with the big chunk atomically from
* here.
*/
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
data_re = data;
if (len > msl->page_sz &&
!rte_memseg_contig_walk(mr_find_contig_memsegs_cb, &data_re)) {
@@ -702,7 +701,7 @@ mlx5_mr_create_primary(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
*/
data.start = RTE_ALIGN_FLOOR(addr, msl->page_sz);
data.end = data.start + msl->page_sz;
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
mr_free(mr);
goto alloc_resources;
}
@@ -722,7 +721,7 @@ mlx5_mr_create_primary(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
DEBUG("port %u found MR for %p on final lookup, abort",
dev->data->port_id, (void *)addr);
rte_rwlock_write_unlock(&sh->mr.rwlock);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
/*
* Must be unlocked before calling rte_free() because
* mlx5_mr_mem_event_free_cb() can be called inside.
@@ -790,12 +789,12 @@ mlx5_mr_create_primary(struct rte_eth_dev *dev, struct mlx5_mr_cache *entry,
/* Lookup can't fail. */
assert(entry->lkey != UINT32_MAX);
rte_rwlock_write_unlock(&sh->mr.rwlock);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return entry->lkey;
err_mrlock:
rte_rwlock_write_unlock(&sh->mr.rwlock);
err_memlock:
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
err_nolock:
/*
* In case of error, as this can be called in a datapath, a warning
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index e743695e4..c3ab9a21d 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -125,7 +125,6 @@ is_vhost_user_by_type(const char *path)
int
virtio_user_start_device(struct virtio_user_dev *dev)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
uint64_t features;
int ret;
@@ -142,7 +141,7 @@ virtio_user_start_device(struct virtio_user_dev *dev)
* replaced when we get proper supports from the
* memory subsystem in the future.
*/
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
pthread_mutex_lock(&dev->mutex);
if (is_vhost_user_by_type(dev->path) && dev->vhostfd < 0)
@@ -180,12 +179,12 @@ virtio_user_start_device(struct virtio_user_dev *dev)
dev->started = true;
pthread_mutex_unlock(&dev->mutex);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return 0;
error:
pthread_mutex_unlock(&dev->mutex);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
/* TODO: free resource here or caller to check */
return -1;
}
diff --git a/lib/librte_eal/common/eal_common_mcfg.c b/lib/librte_eal/common/eal_common_mcfg.c
new file mode 100644
index 000000000..985d36cc2
--- /dev/null
+++ b/lib/librte_eal/common/eal_common_mcfg.c
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <rte_config.h>
+#include <rte_eal_memconfig.h>
+
+void
+rte_mcfg_mem_read_lock(void)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+}
+
+void
+rte_mcfg_mem_read_unlock(void)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+}
+
+void
+rte_mcfg_mem_write_lock(void)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+}
+
+void
+rte_mcfg_mem_write_unlock(void)
+{
+ struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+}
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 858d56382..fe22b139b 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -596,13 +596,12 @@ rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
int
rte_memseg_contig_walk(rte_memseg_contig_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret = 0;
/* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
ret = rte_memseg_contig_walk_thread_unsafe(func, arg);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -638,13 +637,12 @@ rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
int
rte_memseg_walk(rte_memseg_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret = 0;
/* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
ret = rte_memseg_walk_thread_unsafe(func, arg);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -671,13 +669,12 @@ rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
int
rte_memseg_list_walk(rte_memseg_list_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret = 0;
/* do not allow allocations/frees/init while we iterate */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
ret = rte_memseg_list_walk_thread_unsafe(func, arg);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -727,12 +724,11 @@ rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
int
rte_memseg_get_fd(const struct rte_memseg *ms)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret;
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
ret = rte_memseg_get_fd_thread_unsafe(ms);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -783,12 +779,11 @@ rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
int
rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret;
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
ret = rte_memseg_get_fd_offset_thread_unsafe(ms, offset);
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -809,7 +804,7 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* make sure the segment doesn't already exist */
if (malloc_heap_find_external_seg(va_addr, len) != NULL) {
@@ -838,14 +833,13 @@ rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
/* memseg list successfully created - increment next socket ID */
mcfg->next_socket_id++;
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
int
rte_extmem_unregister(void *va_addr, size_t len)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct rte_memseg_list *msl;
int ret = 0;
@@ -853,7 +847,7 @@ rte_extmem_unregister(void *va_addr, size_t len)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* find our segment */
msl = malloc_heap_find_external_seg(va_addr, len);
@@ -865,14 +859,13 @@ rte_extmem_unregister(void *va_addr, size_t len)
ret = malloc_heap_destroy_external_seg(msl);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
static int
sync_memory(void *va_addr, size_t len, bool attach)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct rte_memseg_list *msl;
int ret = 0;
@@ -880,7 +873,7 @@ sync_memory(void *va_addr, size_t len, bool attach)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* find our segment */
msl = malloc_heap_find_external_seg(va_addr, len);
@@ -895,7 +888,7 @@ sync_memory(void *va_addr, size_t len, bool attach)
ret = rte_fbarray_detach(&msl->memseg_arr);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
@@ -923,7 +916,7 @@ rte_eal_memory_init(void)
return -1;
/* lock mem hotplug here, to prevent races while we init */
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
if (rte_eal_memseg_init() < 0)
goto fail;
@@ -942,6 +935,6 @@ rte_eal_memory_init(void)
return 0;
fail:
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return -1;
}
diff --git a/lib/librte_eal/common/include/rte_eal_memconfig.h b/lib/librte_eal/common/include/rte_eal_memconfig.h
index 84aabe36c..a554518ef 100644
--- a/lib/librte_eal/common/include/rte_eal_memconfig.h
+++ b/lib/librte_eal/common/include/rte_eal_memconfig.h
@@ -100,6 +100,30 @@ rte_eal_mcfg_wait_complete(struct rte_mem_config* mcfg)
rte_pause();
}
+/**
+ * Lock the internal EAL shared memory configuration for shared access.
+ */
+void
+rte_mcfg_mem_read_lock(void);
+
+/**
+ * Unlock the internal EAL shared memory configuration for shared access.
+ */
+void
+rte_mcfg_mem_read_unlock(void);
+
+/**
+ * Lock the internal EAL shared memory configuration for exclusive access.
+ */
+void
+rte_mcfg_mem_write_lock(void);
+
+/**
+ * Unlock the internal EAL shared memory configuration for exclusive access.
+ */
+void
+rte_mcfg_mem_write_unlock(void);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index f9235932e..f1d31de0d 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -485,10 +485,9 @@ try_expand_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
int socket, unsigned int flags, size_t align, size_t bound,
bool contig)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int ret;
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
ret = try_expand_heap_primary(heap, pg_sz, elt_size, socket,
@@ -498,7 +497,7 @@ try_expand_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
flags, align, bound, contig);
}
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
@@ -821,7 +820,6 @@ malloc_heap_free_pages(void *aligned_start, size_t aligned_len)
int
malloc_heap_free(struct malloc_elem *elem)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct malloc_heap *heap;
void *start, *aligned_start, *end, *aligned_end;
size_t len, aligned_len, page_sz;
@@ -935,7 +933,7 @@ malloc_heap_free(struct malloc_elem *elem)
/* now we can finally free us some pages */
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/*
* we allow secondary processes to clear the heap of this allocated
@@ -990,7 +988,7 @@ malloc_heap_free(struct malloc_elem *elem)
RTE_LOG(DEBUG, EAL, "Heap on socket %d was shrunk by %zdMB\n",
msl->socket_id, aligned_len >> 20ULL);
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
free_unlock:
rte_spinlock_unlock(&(heap->lock));
return ret;
@@ -1344,7 +1342,7 @@ rte_eal_malloc_heap_init(void)
if (register_mp_requests()) {
RTE_LOG(ERR, EAL, "Couldn't register malloc multiprocess actions\n");
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return -1;
}
@@ -1352,7 +1350,7 @@ rte_eal_malloc_heap_init(void)
* even come before primary itself is fully initialized, and secondaries
* do not need to initialize the heap.
*/
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
/* secondary process does not need to initialize anything */
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
diff --git a/lib/librte_eal/common/meson.build b/lib/librte_eal/common/meson.build
index bafd23207..58b433bc2 100644
--- a/lib/librte_eal/common/meson.build
+++ b/lib/librte_eal/common/meson.build
@@ -18,6 +18,7 @@ common_sources = files(
'eal_common_launch.c',
'eal_common_lcore.c',
'eal_common_log.c',
+ 'eal_common_mcfg.c',
'eal_common_memalloc.c',
'eal_common_memory.c',
'eal_common_memzone.c',
diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index b119ebae3..2cad7beaa 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -223,7 +223,7 @@ rte_malloc_heap_get_socket(const char *name)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
for (idx = 0; idx < RTE_MAX_HEAPS; idx++) {
struct malloc_heap *tmp = &mcfg->malloc_heaps[idx];
@@ -239,7 +239,7 @@ rte_malloc_heap_get_socket(const char *name)
rte_errno = ENOENT;
ret = -1;
}
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -254,7 +254,7 @@ rte_malloc_heap_socket_is_external(int socket_id)
if (socket_id == SOCKET_ID_ANY)
return 0;
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
for (idx = 0; idx < RTE_MAX_HEAPS; idx++) {
struct malloc_heap *tmp = &mcfg->malloc_heaps[idx];
@@ -264,7 +264,7 @@ rte_malloc_heap_socket_is_external(int socket_id)
break;
}
}
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -352,7 +352,6 @@ int
rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct malloc_heap *heap = NULL;
struct rte_memseg_list *msl;
unsigned int n;
@@ -369,7 +368,7 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* find our heap */
heap = find_named_heap(heap_name);
@@ -398,7 +397,7 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
rte_spinlock_unlock(&heap->lock);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
@@ -406,7 +405,6 @@ rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
int
rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct malloc_heap *heap = NULL;
struct rte_memseg_list *msl;
int ret;
@@ -418,7 +416,7 @@ rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* find our heap */
heap = find_named_heap(heap_name);
if (heap == NULL) {
@@ -448,7 +446,7 @@ rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
ret = malloc_heap_destroy_external_seg(msl);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
@@ -456,7 +454,6 @@ rte_malloc_heap_memory_remove(const char *heap_name, void *va_addr, size_t len)
static int
sync_memory(const char *heap_name, void *va_addr, size_t len, bool attach)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct malloc_heap *heap = NULL;
struct rte_memseg_list *msl;
int ret;
@@ -468,7 +465,7 @@ sync_memory(const char *heap_name, void *va_addr, size_t len, bool attach)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_read_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_lock();
/* find our heap */
heap = find_named_heap(heap_name);
@@ -516,7 +513,7 @@ sync_memory(const char *heap_name, void *va_addr, size_t len, bool attach)
}
}
unlock:
- rte_rwlock_read_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
@@ -549,7 +546,7 @@ rte_malloc_heap_create(const char *heap_name)
/* check if there is space in the heap list, or if heap with this name
* already exists.
*/
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
for (i = 0; i < RTE_MAX_HEAPS; i++) {
struct malloc_heap *tmp = &mcfg->malloc_heaps[i];
@@ -578,7 +575,7 @@ rte_malloc_heap_create(const char *heap_name)
/* we're sure that we can create a new heap, so do it */
ret = malloc_heap_create(heap, heap_name);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
@@ -586,7 +583,6 @@ rte_malloc_heap_create(const char *heap_name)
int
rte_malloc_heap_destroy(const char *heap_name)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct malloc_heap *heap = NULL;
int ret;
@@ -597,7 +593,7 @@ rte_malloc_heap_destroy(const char *heap_name)
rte_errno = EINVAL;
return -1;
}
- rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_lock();
/* start from non-socket heaps */
heap = find_named_heap(heap_name);
@@ -621,7 +617,7 @@ rte_malloc_heap_destroy(const char *heap_name)
if (ret < 0)
rte_spinlock_unlock(&heap->lock);
unlock:
- rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+ rte_mcfg_mem_write_unlock();
return ret;
}
diff --git a/lib/librte_eal/freebsd/eal/Makefile b/lib/librte_eal/freebsd/eal/Makefile
index ca616c480..eb921275e 100644
--- a/lib/librte_eal/freebsd/eal/Makefile
+++ b/lib/librte_eal/freebsd/eal/Makefile
@@ -44,6 +44,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_timer.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_memzone.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_log.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_launch.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_mcfg.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_memalloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_memory.c
SRCS-$(CONFIG_RTE_EXEC_ENV_FREEBSD) += eal_common_tailqs.c
diff --git a/lib/librte_eal/linux/eal/Makefile b/lib/librte_eal/linux/eal/Makefile
index 729795a10..dfe8e9a49 100644
--- a/lib/librte_eal/linux/eal/Makefile
+++ b/lib/librte_eal/linux/eal/Makefile
@@ -52,6 +52,7 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_timer.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memzone.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_log.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_launch.c
+SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_mcfg.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memalloc.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_memory.c
SRCS-$(CONFIG_RTE_EXEC_ENV_LINUX) += eal_common_tailqs.c
diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index feada64c0..96a03a657 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -635,8 +635,6 @@ int
rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
int *vfio_dev_fd, struct vfio_device_info *device_info)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- rte_rwlock_t *mem_lock = &mcfg->memory_hotplug_lock;
struct vfio_group_status group_status = {
.argsz = sizeof(group_status)
};
@@ -739,7 +737,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
/* lock memory hotplug before mapping and release it
* after registering callback, to prevent races
*/
- rte_rwlock_read_lock(mem_lock);
+ rte_mcfg_mem_read_lock();
if (vfio_cfg == default_vfio_cfg)
ret = t->dma_map_func(vfio_container_fd);
else
@@ -750,7 +748,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
dev_addr, errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
return -1;
}
@@ -781,7 +779,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
map->len);
rte_spinlock_recursive_unlock(
&user_mem_maps->lock);
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
return -1;
}
}
@@ -795,7 +793,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
else
ret = 0;
/* unlock memory hotplug */
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
if (ret && rte_errno != ENOTSUP) {
RTE_LOG(ERR, EAL, "Could not install memory event callback for VFIO\n");
@@ -862,8 +860,6 @@ int
rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
int vfio_dev_fd)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- rte_rwlock_t *mem_lock = &mcfg->memory_hotplug_lock;
struct vfio_group_status group_status = {
.argsz = sizeof(group_status)
};
@@ -876,7 +872,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
* VFIO device, because this might be the last device and we might need
* to unregister the callback.
*/
- rte_rwlock_read_lock(mem_lock);
+ rte_mcfg_mem_read_lock();
/* get group number */
ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num);
@@ -947,7 +943,7 @@ rte_vfio_release_device(const char *sysfs_base, const char *dev_addr,
ret = 0;
out:
- rte_rwlock_read_unlock(mem_lock);
+ rte_mcfg_mem_read_unlock();
return ret;
}
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index a53a29a35..754060dc9 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -292,6 +292,10 @@ DPDK_19.08 {
rte_lcore_index;
rte_lcore_to_socket_id;
+ rte_mcfg_mem_read_lock;
+ rte_mcfg_mem_read_unlock;
+ rte_mcfg_mem_write_lock;
+ rte_mcfg_mem_write_unlock;
rte_rand;
rte_srand;
--
2.17.1
Thread overview: 117+ messages
2019-05-29 16:30 [dpdk-dev] [PATCH 00/25] Make shared memory config non-public Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 01/25] eal: add API to lock/unlock memory hotplug Anatoly Burakov
2019-05-29 16:41 ` Stephen Hemminger
2019-05-29 16:30 ` [dpdk-dev] [PATCH 02/25] bus/fslmc: use new memory locking API Anatoly Burakov
2019-06-03 6:41 ` Shreyansh Jain
2019-05-29 16:30 ` [dpdk-dev] [PATCH 03/25] net/mlx4: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 04/25] net/mlx5: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 05/25] net/virtio: " Anatoly Burakov
2019-05-30 6:39 ` Tiwei Bie
2019-05-29 16:30 ` [dpdk-dev] [PATCH 06/25] mem: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 07/25] malloc: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 08/25] vfio: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 09/25] eal: add EAL tailq list lock/unlock API Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 10/25] acl: use new tailq locking API Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 11/25] distributor: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 12/25] efd: " Anatoly Burakov
2019-05-29 16:30 ` [dpdk-dev] [PATCH 13/25] eventdev: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 14/25] hash: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 15/25] lpm: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 16/25] member: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 17/25] mempool: " Anatoly Burakov
2019-05-29 16:47 ` Andrew Rybchenko
2019-05-29 16:31 ` [dpdk-dev] [PATCH 18/25] reorder: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 19/25] ring: " Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 20/25] stack: " Anatoly Burakov
2019-05-29 18:20 ` Eads, Gage
2019-05-29 16:31 ` [dpdk-dev] [PATCH 21/25] eal: add new API to lock/unlock mempool list Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 22/25] mempool: use new mempool list locking API Anatoly Burakov
2019-05-29 16:50 ` Andrew Rybchenko
2019-05-29 16:31 ` [dpdk-dev] [PATCH 23/25] eal: remove unused macros Anatoly Burakov
2019-05-29 16:31 ` [dpdk-dev] [PATCH 24/25] net/ena: fix direct access to shared memory config Anatoly Burakov
2019-06-03 7:33 ` Michał Krawczyk
2019-06-03 13:36 ` Michał Krawczyk
2019-06-04 10:28 ` Burakov, Anatoly
2019-06-04 10:45 ` Michał Krawczyk
2019-06-04 12:38 ` Burakov, Anatoly
2019-05-29 16:31 ` [dpdk-dev] [PATCH 25/25] eal: hide " Anatoly Burakov
2019-05-29 16:40 ` Stephen Hemminger
2019-05-30 8:02 ` Burakov, Anatoly
2019-05-29 20:14 ` David Marchand
2019-05-29 20:11 ` [dpdk-dev] [PATCH 00/25] Make shared memory config non-public David Marchand
2019-05-30 8:07 ` Burakov, Anatoly
2019-05-30 10:15 ` Bruce Richardson
2019-06-03 9:42 ` Thomas Monjalon
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 00/14] " Anatoly Burakov
2019-06-27 11:38 ` [dpdk-dev] [PATCH v3 " Anatoly Burakov
2019-06-27 15:36 ` Stephen Hemminger
2019-07-03 9:38 ` David Marchand
2019-07-03 10:47 ` Burakov, Anatoly
2019-07-04 8:09 ` David Marchand
2019-07-04 19:52 ` Thomas Monjalon
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 0/8] " Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 0/9] " Anatoly Burakov
2019-07-05 19:30 ` David Marchand
2019-07-05 21:09 ` Thomas Monjalon
2019-07-31 10:07 ` David Marchand
2019-07-31 10:32 ` Burakov, Anatoly
2019-07-31 10:48 ` David Marchand
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 1/9] eal: add API to lock/unlock memory hotplug Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 2/9] eal: add EAL tailq list lock/unlock API Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 3/9] eal: add new API to lock/unlock mempool list Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 4/9] eal: hide shared memory config Anatoly Burakov
2019-07-05 19:08 ` Thomas Monjalon
2019-07-08 9:22 ` Burakov, Anatoly
2019-07-08 9:38 ` Thomas Monjalon
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 5/9] eal: remove packed attribute from mcfg structure Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 6/9] eal: uninline wait for mcfg complete function Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 7/9] eal: unify and move " Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 8/9] eal: unify internal config initialization Anatoly Burakov
2019-07-05 17:26 ` [dpdk-dev] [PATCH v5 9/9] eal: prevent different primary/secondary process versions Anatoly Burakov
2019-07-05 13:10 ` Anatoly Burakov [this message]
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 2/8] eal: add EAL tailq list lock/unlock API Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 3/8] eal: add new API to lock/unlock mempool list Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 4/8] eal: hide shared memory config Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 5/8] eal: remove packed attribute from mcfg structure Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 6/8] eal: uninline wait for mcfg complete function Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 7/8] eal: unify and move " Anatoly Burakov
2019-07-05 13:10 ` [dpdk-dev] [PATCH v4 8/8] eal: unify internal config initialization Anatoly Burakov
2019-06-27 11:38 ` [dpdk-dev] [PATCH v3 01/14] eal: add API to lock/unlock memory hotplug Anatoly Burakov
2019-06-27 11:38 ` [dpdk-dev] [PATCH v3 02/14] drivers: use new memory locking API Anatoly Burakov
2019-06-27 11:38 ` [dpdk-dev] [PATCH v3 03/14] lib: " Anatoly Burakov
2019-06-27 11:38 ` [dpdk-dev] [PATCH v3 04/14] eal: add EAL tailq list lock/unlock API Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 05/14] lib: use new tailq locking API Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 06/14] eal: add new API to lock/unlock mempool list Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 07/14] mempool: use new mempool list locking API Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 08/14] eal: remove unused macros Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 09/14] eal: hide shared memory config Anatoly Burakov
2019-07-04 7:43 ` David Marchand
2019-07-04 10:47 ` Burakov, Anatoly
2019-07-04 10:52 ` David Marchand
2019-07-04 19:51 ` Thomas Monjalon
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 10/14] eal: remove packed attribute from mcfg structure Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 11/14] eal: uninline wait for mcfg complete function Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 12/14] eal: unify and move " Anatoly Burakov
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 13/14] eal: unify internal config initialization Anatoly Burakov
2019-07-04 7:50 ` David Marchand
2019-07-04 7:56 ` David Marchand
2019-07-04 10:50 ` Burakov, Anatoly
2019-07-04 10:54 ` David Marchand
2019-07-04 11:26 ` Burakov, Anatoly
2019-06-27 11:39 ` [dpdk-dev] [PATCH v3 14/14] eal: prevent different primary/secondary process versions Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 01/14] eal: add API to lock/unlock memory hotplug Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 02/14] drivers: use new memory locking API Anatoly Burakov
2019-06-27 9:24 ` Hemant Agrawal
2019-06-28 15:21 ` Yongseok Koh
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 03/14] lib: " Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 04/14] eal: add EAL tailq list lock/unlock API Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 05/14] lib: use new tailq locking API Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 06/14] eal: add new API to lock/unlock mempool list Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 07/14] mempool: use new mempool list locking API Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 08/14] eal: remove unused macros Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 09/14] eal: hide shared memory config Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 10/14] eal: remove packed attribute from mcfg structure Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 11/14] eal: uninline wait for mcfg complete function Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 12/14] eal: unify and move " Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 13/14] eal: unify internal config initialization Anatoly Burakov
2019-06-25 16:05 ` [dpdk-dev] [PATCH v2 14/14] eal: prevent different primary/secondary process versions Anatoly Burakov