* [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler
@ 2019-02-22 16:06 Gage Eads
2019-02-22 16:06 ` [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library Gage Eads
` (7 more replies)
0 siblings, 8 replies; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
non-blocking stacks, and a non-blocking stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The non-blocking stack mempool handler is intended for use cases where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the non-blocking algorithm relies on a 128-bit compare-and-swap, so
it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[1].
[1] http://mails.dpdk.org/archives/dev/2019-January/123555.html
Gage Eads (7):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add non-blocking stack implementation
test/stack: add non-blocking stack tests
mempool/stack: add non-blocking stack mempool handler
MAINTAINERS | 9 +-
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 5 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 66 ++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++
lib/librte_stack/meson.build | 10 +
lib/librte_stack/rte_stack.c | 220 +++++++++++++
lib/librte_stack/rte_stack.h | 406 +++++++++++++++++++++++
lib/librte_stack/rte_stack_c11_mem.h | 173 ++++++++++
lib/librte_stack/rte_stack_generic.h | 157 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
test/test/Makefile | 3 +
test/test/meson.build | 7 +
test/test/test_stack.c | 407 ++++++++++++++++++++++++
test/test/test_stack_perf.c | 356 +++++++++++++++++++++
26 files changed, 1965 insertions(+), 72 deletions(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
create mode 100644 lib/librte_stack/rte_stack_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
create mode 100644 test/test/test_stack.c
create mode 100644 test/test/test_stack_perf.c
--
2.13.6
* [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 10:43 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack Gage Eads
` (6 subsequent siblings)
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
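For reference, a minimal usage sketch of the API this commit adds (names and
signatures as defined in rte_stack.h below; error handling abbreviated):

    #include <rte_stack.h>

    static void
    stack_example(void)
    {
        void *objs[32] = {0};
        struct rte_stack *s;

        /* Bounded stack of up to 1024 pointers, on any NUMA socket. */
        s = rte_stack_create("example", 1024, SOCKET_ID_ANY, 0);
        if (s == NULL)
            return;

        /* Push and pop are all-or-nothing: each returns either 0 or n. */
        if (rte_stack_push(s, objs, 32) == 32)
            rte_stack_pop(s, objs, 32);

        rte_stack_free(s);
    }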
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 26 ++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 23 +++
lib/librte_stack/meson.build | 8 +
lib/librte_stack/rte_stack.c | 194 +++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 277 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
16 files changed, 594 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index eef480ab5..237f05eb2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -407,6 +407,12 @@ F: drivers/raw/skeleton_rawdev/
F: test/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 7c6da5165..5861eb09c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -980,3 +980,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..0df8848c0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -124,6 +124,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index bef9320c0..dd972a3fe 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -56,6 +56,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..51689cfe1
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,26 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded stack of
+pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Look up a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 2b0f60d3d..04394f8cf 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -65,6 +65,11 @@ New Features
process.
* Added support for Rx packet types list in a secondary process.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index d6239d27c..d22e2072b 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..e956b6535
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..99f43710e
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c')
+headers = files('rte_stack.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..a43ebb68f
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+lifo_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->lifo.lock);
+}
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ lifo_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ /* Add padding to avoid false sharing conflicts */
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
+ 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *lifo)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (lifo == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find the tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == lifo)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(lifo->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..da0210550
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,277 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack.
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_lifo {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack */
+ uint32_t flags; /**< Flags supplied at creation */
+ struct rte_lifo lifo; /**< LIFO structure */
+} __rte_cache_aligned;
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal Push several objects on the stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_lifo_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&s->lifo.lock);
+ cache_objs = &s->lifo.objs[s->lifo.len];
+
+ /* Is there sufficient space in the stack? */
+ if ((s->lifo.len + n) > s->capacity) {
+ rte_spinlock_unlock(&s->lifo.lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ s->lifo.len += n;
+
+ rte_spinlock_unlock(&s->lifo.lock);
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ return rte_lifo_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal Pop several objects from the stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_lifo_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&s->lifo.lock);
+
+ if (unlikely(n > s->lifo.len)) {
+ rte_spinlock_unlock(&s->lifo.lock);
+ return 0;
+ }
+
+ cache_objs = s->lifo.objs;
+
+ for (index = 0, len = s->lifo.len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ s->lifo.len -= n;
+ rte_spinlock_unlock(&s->lifo.lock);
+
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ if (unlikely(n == 0 || obj_table == NULL))
+ return 0;
+
+ return rte_lifo_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ return (unsigned int)s->lifo.len;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use
+ * @return
+ * On success, the pointer to the newly allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Look up a stack by its name
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index e8b40f546..0f0e589bc 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,7 +21,7 @@ libraries = [ 'compat', # just a header, used for versioning
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8a4f0f4e5..55568c603 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
* [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler Gage Eads
2019-02-22 16:06 ` [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 10:46 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 3/7] test/stack: add stack test Gage Eads
` (5 subsequent siblings)
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 237f05eb2..7e64f63b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -284,7 +284,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: test/test/test_mempool*
F: test/test/test_func_reentrancy.c
@@ -412,6 +411,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
* [dpdk-dev] [PATCH 3/7] test/stack: add stack test
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler Gage Eads
2019-02-22 16:06 ` [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library Gage Eads
2019-02-22 16:06 ` [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 10:59 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test Gage Eads
` (4 subsequent siblings)
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 1 +
test/test/Makefile | 2 +
test/test/meson.build | 3 +
test/test/test_stack.c | 394 +++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 400 insertions(+)
create mode 100644 test/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7e64f63b6..58b438414 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: test/test/*stack*
Memory Pool Drivers
diff --git a/test/test/Makefile b/test/test/Makefile
index 89949c2bb..47cf98a3a 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -89,6 +89,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/test/test/meson.build b/test/test/meson.build
index 05e5ddeb0..b00e1201a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -132,6 +133,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -173,6 +175,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/test/test/test_stack.c b/test/test/test_stack.c
new file mode 100644
index 000000000..510c7aac1
--- /dev/null
+++ b/test/test/test_stack.c
@@ -0,0 +1,394 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ void *popped_objs[STACK_SIZE];
+ unsigned int i, ret;
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ if (obj_table != NULL)
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library proper handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)&t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %lu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
* [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler Gage Eads
` (2 preceding siblings ...)
2019-02-22 16:06 ` [dpdk-dev] [PATCH 3/7] test/stack: add stack test Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 11:04 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation Gage Eads
` (3 subsequent siblings)
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
test/test/Makefile | 1 +
test/test/meson.build | 2 +
test/test/test_stack_perf.c | 343 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 test/test/test_stack_perf.c
diff --git a/test/test/Makefile b/test/test/Makefile
index 47cf98a3a..f9536fb31 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -90,6 +90,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/test/test/meson.build b/test/test/meson.build
index b00e1201a..ba3cb6261 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -240,6 +241,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/test/test/test_stack_perf.c b/test/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/test/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores to measure stack
+ * performance between hyperthread siblings, cores on the same socket, and
+ * cores on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
* [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Add stack library and new mempool handler Gage Eads
` (3 preceding siblings ...)
2019-02-22 16:06 ` [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 11:28 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests Gage Eads
` (2 subsequent siblings)
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a non-blocking (linked list based) stack to
the stack API. This behavior is selected through a new rte_stack_create()
flag, STACK_F_NB.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The non-blocking push operation enqueues a linked list of pointers by
pointing the tail of the list to the current stack head, and using a CAS to
swing the stack head pointer to the head of the list. The operation retries
if it is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The non-blocking pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a non-blocking LIFO,
and are allocated before stack pushes and freed after stack pops. Since the
stack has a fixed maximum depth, these elements do not need to be
dynamically created.
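To make the head-update and reservation steps concrete, here is a simplified,
illustrative sketch of the scheme this message describes. All names are
invented for illustration and do not match the patch's internals (which live
in rte_stack_generic.h/rte_stack_c11_mem.h); it assumes gcc or clang on
x86-64 built with -mcx16:

    #include <stdbool.h>
    #include <stdint.h>

    struct elem { void *data; struct elem *next; };

    /* {top, cnt} must occupy one aligned 16-byte unit so cmpxchg16b can
     * update the pointer and the ABA-defeating counter atomically.
     */
    struct head { struct elem *top; uint64_t cnt; }
            __attribute__((aligned(16)));

    static bool
    cas128(struct head *dst, struct head old, struct head new)
    {
        union u128 { struct head h; unsigned __int128 i; };
        union u128 o = { .h = old };
        union u128 n = { .h = new };

        return __sync_bool_compare_and_swap((unsigned __int128 *)dst,
                                            o.i, n.i);
    }

    /* Push a pre-linked list [first .. last] onto the stack. */
    static void
    push(struct head *h, struct elem *first, struct elem *last)
    {
        struct head old, new;

        do {
            old = *h;              /* racy read; the CAS validates it */
            last->next = old.top;  /* point the tail at the current top */
            new.top = first;
            new.cnt = old.cnt + 1; /* counter change defeats ABA */
        } while (!cas128(h, old, new));
    }

    /* Reserve n elements by decrementing the length only when n are
     * present, so the subsequent list walk is guaranteed to succeed
     * without blocking.
     */
    static bool
    reserve(uint64_t *len, uint64_t n)
    {
        uint64_t cur = __atomic_load_n(len, __ATOMIC_RELAXED);

        do {
            if (cur < n)
                return false;  /* not enough elements; fail immediately */
        } while (!__atomic_compare_exchange_n(len, &cur, cur - n, false,
                                              __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED));
        return true;
    }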
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
doc/guides/prog_guide/stack_lib.rst | 46 ++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 4 +-
lib/librte_stack/meson.build | 4 +-
lib/librte_stack/rte_stack.c | 42 ++++++--
lib/librte_stack/rte_stack.h | 139 +++++++++++++++++++++++++-
lib/librte_stack/rte_stack_c11_mem.h | 173 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_generic.h | 157 ++++++++++++++++++++++++++++++
8 files changed, 550 insertions(+), 18 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
create mode 100644 lib/librte_stack/rte_stack_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 51689cfe1..86fdc0a9b 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -9,7 +9,7 @@ pointers.
The stack library provides the following basic operations:
-* Create a uniquely named stack of a user-specified size on a user-specified socket.
+* Create a uniquely named stack of a user-specified size on a user-specified socket, with either lock-based or non-blocking behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions are multi-thread safe.
@@ -22,5 +22,45 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: lock-based and non-blocking.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current index, and a spinlock.
+Accesses to the stack are made multi-thread safe by the spinlock.
+
+Non-blocking Stack
+------------------
+
+The non-blocking stack consists of a linked list of elements, each containing a data pointer and a next pointer, and an atomic stack depth counter.
+The non-blocking property means that multiple threads can push and pop simultaneously, and one thread being preempted/delayed in a push or pop operation will not impede the forward progress of any other thread.
+
+The non-blocking push operation enqueues a linked list of pointers by pointing the list's tail to the current stack head, and using a CAS to swing the stack head pointer to the head of the list.
+The operation retries if it is unsuccessful (i.e. the list changed between reading the head and modifying it), else it adjusts the stack length and returns.
+
+The non-blocking pop operation first reserves one or more list elements by adjusting the stack length, to ensure the dequeue operation will succeed without blocking.
+It then dequeues pointers by walking the list -- starting from the head -- then swinging the head pointer (using a CAS as well).
+While walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a non-blocking LIFO, and are allocated before stack pushes and freed after stack pops.
+Since the stack has a fixed maximum depth, these elements do not need to be dynamically created.
+
+The non-blocking behavior is selected by passing the *STACK_F_NB* flag to rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm uses a 128-bit compare-and-swap instruction to atomically update both the stack top pointer and a modification counter. The ABA problem can occur without a modification counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X, but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case, thread A would not detect that the list had changed, and would both pop stale data and incorrectly change the head pointer.
+By adding a modification counter that is updated on every push and pop as part of the compare-and-swap, the algorithm can detect when the list changes even if the head pointer remains the same.
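For illustration, a minimal usage sketch of the API described above (the
stack name and sizes are arbitrary; error handling is trimmed):

#include <stdint.h>

#include <rte_lcore.h>
#include <rte_stack.h>

static void
stack_example(void)
{
	void *objs[32];
	struct rte_stack *s;
	unsigned int i, n;

	for (i = 0; i < 32; i++)
		objs[i] = (void *)(uintptr_t)i;

	/* Create a non-blocking stack that holds up to 1024 pointers */
	s = rte_stack_create("example", 1024, rte_socket_id(), STACK_F_NB);
	if (s == NULL)
		return;

	/* Push and pop operate on bursts of pointers and are MT-safe */
	n = rte_stack_push(s, objs, 32);
	rte_stack_pop(s, objs, n);

	rte_stack_free(s);
}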
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 04394f8cf..52c5ba78e 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -71,6 +71,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: lock-based and non-blocking.
+ The non-blocking implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index e956b6535..94a7c1476 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -18,6 +18,8 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_generic.h \
+ rte_stack_c11_mem.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99f43710e..dec527966 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -5,4 +5,6 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
-headers = files('rte_stack.h')
+headers = files('rte_stack.h',
+ 'rte_stack_c11_mem.h',
+ 'rte_stack_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index a43ebb68f..f1c0b5bba 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -26,27 +26,46 @@ static struct rte_tailq_elem rte_stack_tailq = {
EAL_REGISTER_TAILQ(rte_stack_tailq)
static void
+nb_lifo_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_nb_lifo_elem *elems;
+ unsigned int i;
+
+ elems = (struct rte_nb_lifo_elem *)&s[1];
+ for (i = 0; i < count; i++)
+ __nb_lifo_push(&s->nb_lifo.free, &elems[i], &elems[i], 1);
+}
+
+static void
lifo_init(struct rte_stack *s)
{
rte_spinlock_init(&s->lifo.lock);
}
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- lifo_init(s);
+ if (flags & STACK_F_NB)
+ nb_lifo_init(s, count);
+ else
+ lifo_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
ssize_t sz = sizeof(struct rte_stack);
+ if (flags & STACK_F_NB)
+ sz += RTE_CACHE_LINE_ROUNDUP(count *
+ sizeof(struct rte_nb_lifo_elem));
+ else
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
/* Add padding to avoid false sharing conflicts */
- sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
- 2 * RTE_CACHE_LINE_SIZE;
+ sz += 2 * RTE_CACHE_LINE_SIZE;
return sz;
}
@@ -63,9 +82,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_X86_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_nb_lifo_head) != 16);
+#else
+ if (flags & STACK_F_NB) {
+ STACK_LOG_ERR("Non-blocking stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -94,7 +120,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index da0210550..6ca175a8c 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -29,6 +29,33 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_nb_lifo_elem {
+ void *data; /**< Data pointer */
+ struct rte_nb_lifo_elem *next; /**< Next pointer */
+};
+
+struct rte_nb_lifo_head {
+ struct rte_nb_lifo_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_nb_lifo_list {
+ /** List head */
+ struct rte_nb_lifo_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two non-blocking LIFO lists: the stack itself and a
+ * list of free linked-list elements.
+ */
+struct rte_nb_lifo {
+ /** LIFO list of elements */
+ struct rte_nb_lifo_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_nb_lifo_list free __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -48,10 +75,69 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack */
uint32_t flags; /**< Flags supplied at creation */
- struct rte_lifo lifo; /**< LIFO structure */
+ RTE_STD_C11
+ union {
+ struct rte_nb_lifo nb_lifo; /**< Non-blocking LIFO structure */
+ struct rte_lifo lifo; /**< LIFO structure */
+ };
} __rte_cache_aligned;
/**
+ * The stack uses non-blocking push and pop functions. This flag is
+ * currently only supported on x86_64 platforms.
+ */
+#define STACK_F_NB 0x0001
+
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_c11_mem.h"
+#else
+#include "rte_stack_generic.h"
+#endif
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal Push several objects on the non-blocking stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_nb_lifo_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_nb_lifo_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __nb_lifo_pop(&s->nb_lifo.free, n, NULL, NULL);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ tmp = first;
+ for (i = 0; i < n; i++) {
+ tmp->data = obj_table[n - i - 1];
+ last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* Push them to the used list */
+ __nb_lifo_push(&s->nb_lifo.used, first, last, n);
+
+ return n;
+}
+
+/**
* @warning
* @b EXPERIMENTAL: this API may change without prior notice
*
@@ -109,7 +195,41 @@ rte_lifo_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
{
- return rte_lifo_push(s, obj_table, n);
+ if (s->flags & STACK_F_NB)
+ return rte_nb_lifo_push(s, obj_table, n);
+ else
+ return rte_lifo_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * @internal Pop several objects from the non-blocking stack (MT-safe)
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_nb_lifo_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_nb_lifo_elem *first, *last = NULL;
+
+ /* Pop n used elements */
+ first = __nb_lifo_pop(&s->nb_lifo.used, n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __nb_lifo_push(&s->nb_lifo.free, first, last, n);
+
+ return n;
}
/**
@@ -173,7 +293,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
if (unlikely(n == 0 || obj_table == NULL))
return 0;
- return rte_lifo_pop(s, obj_table, n);
+ if (s->flags & STACK_F_NB)
+ return rte_nb_lifo_pop(s, obj_table, n);
+ else
+ return rte_lifo_pop(s, obj_table, n);
}
/**
@@ -190,7 +313,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_count(struct rte_stack *s)
{
- return (unsigned int)s->lifo.len;
+ if (s->flags & STACK_F_NB)
+ return rte_nb_lifo_len(s);
+ else
+ return (unsigned int)s->lifo.len;
}
/**
@@ -228,7 +354,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use
+ * An OR of the following:
+ * - STACK_F_NB: If this flag is set, the stack uses non-blocking variants
+ * of the push and pop functions. Otherwise, it achieves thread-safety
+ * using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_c11_mem.h b/lib/librte_stack/rte_stack_c11_mem.h
new file mode 100644
index 000000000..c8276c530
--- /dev/null
+++ b/lib/librte_stack/rte_stack_c11_mem.h
@@ -0,0 +1,173 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _NB_LIFO_C11_MEM_H_
+#define _NB_LIFO_C11_MEM_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_nb_lifo_len(struct rte_stack *s)
+{
+ /* nb_lifo_push() and nb_lifo_pop() do not update the list's contents
+ * and lifo->len atomically, which can cause the list to appear shorter
+ * than it actually is if this function is called while other threads
+ * are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The lifo->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->nb_lifo.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__nb_lifo_push(struct rte_nb_lifo_list *lifo,
+ struct rte_nb_lifo_elem *first,
+ struct rte_nb_lifo_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(lifo);
+ RTE_SET_USED(num);
+#else
+ struct rte_nb_lifo_head old_head;
+ int success;
+
+ old_head = lifo->head;
+
+ do {
+ struct rte_nb_lifo_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the NB LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&lifo->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_nb_lifo_elem *
+__nb_lifo_pop(struct rte_nb_lifo_list *lifo,
+ unsigned int num,
+ void **obj_table,
+ struct rte_nb_lifo_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(lifo);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_nb_lifo_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = __atomic_load_n(&lifo->len.cnt,
+ __ATOMIC_ACQUIRE);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (__atomic_compare_exchange_n(&lifo->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED))
+ break;
+ }
+
+#ifndef RTE_ARCH_X86_64
+ /* Use the acquire memmodel to ensure the reads to the NB LIFO elements
+ * are properly ordered with respect to the head pointer read.
+ *
+ * Note that for aarch64, GCC's implementation of __atomic_load_16 in
+ * libatomic uses locks, and so this function should be replaced by
+ * a new function (e.g. "rte_atomic128_load()").
+ */
+ __atomic_load((volatile __int128 *)&lifo->head,
+ &old_head,
+ __ATOMIC_ACQUIRE);
+#else
+ /* x86-64 does not require an atomic load here; if a torn read occurs,
+ * the CAS will fail and set old_head to the correct/latest value.
+ */
+ old_head = lifo->head;
+#endif
+
+ /* Pop num elements */
+ do {
+ struct rte_nb_lifo_head new_head;
+ struct rte_nb_lifo_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _NB_LIFO_C11_MEM_H_ */
diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
new file mode 100644
index 000000000..7d8570b34
--- /dev/null
+++ b/lib/librte_stack/rte_stack_generic.h
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _NB_LIFO_GENERIC_H_
+#define _NB_LIFO_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_nb_lifo_len(struct rte_stack *s)
+{
+ /* nb_lifo_push() and nb_lifo_pop() do not update the list's contents
+ * and nb_lifo->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The nb_lifo->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->nb_lifo.used.len);
+}
+
+static __rte_always_inline void
+__nb_lifo_push(struct rte_nb_lifo_list *lifo,
+ struct rte_nb_lifo_elem *first,
+ struct rte_nb_lifo_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(lifo);
+ RTE_SET_USED(num);
+#else
+ struct rte_nb_lifo_head old_head;
+ int success;
+
+ old_head = lifo->head;
+
+ do {
+ struct rte_nb_lifo_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Ensure the list entry writes are visible before pushing them
+ * to the stack.
+ */
+ rte_wmb();
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&lifo->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_nb_lifo_elem *
+__nb_lifo_pop(struct rte_nb_lifo_list *lifo,
+ unsigned int num,
+ void **obj_table,
+ struct rte_nb_lifo_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(lifo);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_nb_lifo_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&lifo->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&lifo->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = lifo->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_nb_lifo_head new_head;
+ struct rte_nb_lifo_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ /* Ensure the list reads occur before popping the list */
+ rte_rmb();
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _NB_LIFO_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Subject: [PATCH ...] Add stack library and new mempool handler Gage Eads
` (4 preceding siblings ...)
2019-02-22 16:06 ` [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 11:28 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds non-blocking stack variants of stack_autotest
(stack_nb_autotest) and stack_perf_autotest (stack_nb_perf_autotest),
which differ only in that the non-blocking versions pass the STACK_F_NB
flag to all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
test/test/meson.build | 2 ++
test/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
test/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/test/test/meson.build b/test/test/meson.build
index ba3cb6261..474611291 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -177,6 +177,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_nb_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -242,6 +243,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_nb_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/test/test/test_stack.c b/test/test/test_stack.c
index 510c7aac1..729c25230 100644
--- a/test/test/test_stack.c
+++ b/test/test/test_stack.c
@@ -81,7 +81,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -97,7 +97,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -162,18 +162,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -186,14 +186,14 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
memset(name, 's', sizeof(name));
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -312,7 +312,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -333,7 +333,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -368,9 +368,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -379,16 +379,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_nb_stack(void)
+{
+ return __test_stack(STACK_F_NB);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_nb_autotest, test_nb_stack);
diff --git a/test/test/test_stack_perf.c b/test/test/test_stack_perf.c
index 484370d30..57a0e806b 100644
--- a/test/test/test_stack_perf.c
+++ b/test/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_nb_stack_perf(void)
+{
+ return __test_stack_perf(STACK_F_NB);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_nb_perf_autotest, test_nb_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Subject: [PATCH ...] Add stack library and new mempool handler Gage Eads
` (5 preceding siblings ...)
2019-02-22 16:06 ` [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests Gage Eads
@ 2019-02-22 16:06 ` Gage Eads
2019-02-25 11:29 ` Olivier Matz
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
7 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-02-22 16:06 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a non-blocking (linked-list-based) stack
mempool handler.
In mempool_perf_autotest the lock-based stack outperforms the
non-blocking handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a lock-based stack's
worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the non-blocking stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the non-blocking stack's.
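For reference, a sketch of how an application might select this handler
through the standard mempool ops API (the pool name and sizes here are
arbitrary):

#include <rte_mempool.h>

static struct rte_mempool *
create_nb_stack_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("nb_pool", 4096, 2048, 0, 0,
				      SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/* Attach the non-blocking stack handler added by this commit */
	if (rte_mempool_set_ops_byname(mp, "nb_stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}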
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 5 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 34 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 929d76dba..5c2dbc706 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -541,6 +541,11 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the non-blocking stack mempool handler. When considering this handler, note that:
+
+ - it is currently limited to the x86_64 platform, because it uses an instruction (16-byte compare-and-swap) that is not yet available on other platforms.
+ - it has worse average-case performance than the non-preemptive rte_ring, but software caching (e.g. the mempool cache) can mitigate this by reducing the number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 52c5ba78e..111a93ea6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -74,6 +74,11 @@ New Features
The library supports two stack implementations: lock-based and non-blocking.
The non-blocking implementation is currently limited to x86-64 platforms.
+* **Added Non-blocking Stack Mempool Handler.**
+
+ Added a new non-blocking stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..eae71aa0c 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+nb_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, STACK_F_NB);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_nb_stack = {
+ .name = "nb_stack",
+ .alloc = nb_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_nb_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library
2019-02-22 16:06 ` [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library Gage Eads
@ 2019-02-25 10:43 ` Olivier Matz
2019-02-28 5:10 ` Eads, Gage
0 siblings, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 10:43 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
Hi Gage,
Please find few comments below.
On Fri, Feb 22, 2019 at 10:06:49AM -0600, Gage Eads wrote:
> The rte_stack library provides an API for configuration and use of a
> bounded stack of pointers. Push and pop operations are MT-safe, allowing
> concurrent access, and the interface supports pushing and popping multiple
> pointers at a time.
>
> The library's interface is modeled after another DPDK data structure,
> rte_ring, and its lock-based implementation is derived from the stack
> mempool handler. An upcoming commit will migrate the stack mempool handler
> to rte_stack.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
[...]
> --- /dev/null
> +++ b/doc/guides/prog_guide/stack_lib.rst
> @@ -0,0 +1,26 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2019 Intel Corporation.
> +
> +Stack Library
> +=============
> +
> +DPDK's stack library provides an API for configuration and use of a bounded stack of
> +pointers.
> +
> +The stack library provides the following basic operations:
> +
> +* Create a uniquely named stack of a user-specified size and using a user-specified socket.
> +
> +* Push and pop a burst of one or more stack objects (pointers). These function are multi-threading safe.
> +
> +* Free a previously created stack.
> +
> +* Lookup a pointer to a stack by its name.
> +
> +* Query a stack's current depth and number of free entries.
It seems the 80-cols limitation also applies to documentation:
https://mails.dpdk.org/archives/dev/2019-February/124917.html
[...]
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack.h
> @@ -0,0 +1,277 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +/**
> + * @file rte_stack.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE Stack.
> + * librte_stack provides an API for configuration and use of a bounded stack of
> + * pointers. Push and pop operations are MT-safe, allowing concurrent access,
> + * and the interface supports pushing and popping multiple pointers at a time.
> + */
> +
> +#ifndef _RTE_STACK_H_
> +#define _RTE_STACK_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <rte_errno.h>
> +#include <rte_memzone.h>
> +#include <rte_spinlock.h>
> +
> +#define RTE_TAILQ_STACK_NAME "RTE_STACK"
> +#define RTE_STACK_MZ_PREFIX "STK_"
> +/**< The maximum length of a stack name. */
> +#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> + sizeof(RTE_STACK_MZ_PREFIX) + 1)
> +
> +/* Structure containing the LIFO, its current length, and a lock for mutual
> + * exclusion.
> + */
> +struct rte_lifo {
> + rte_spinlock_t lock; /**< LIFO lock */
> + uint32_t len; /**< LIFO len */
> + void *objs[]; /**< LIFO pointer table */
> +};
> +
> +/* The RTE stack structure contains the LIFO structure itself, plus metadata
> + * such as its name and memzone pointer.
> + */
> +struct rte_stack {
> + /** Name of the stack. */
> + char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
> + /** Memzone containing the rte_stack structure */
> + const struct rte_memzone *memzone;
> + uint32_t capacity; /**< Usable size of the stack */
> + uint32_t flags; /**< Flags supplied at creation */
> + struct rte_lifo lifo; /**< LIFO structure */
> +} __rte_cache_aligned;
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * @internal Push several objects on the stack (MT-safe)
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to push on the stack from the obj_table.
> + * @return
> + * Actual number of objects pushed (either 0 or *n*).
> + */
Minor: a dot is missing at the end of the title. There are a few in this
patch, and maybe in the next ones.
[...]
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the number of used entries in a stack.
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @return
> + * The number of used entries in the stack.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_count(struct rte_stack *s)
> +{
> + return (unsigned int)s->lifo.len;
> +}
The argument can be const.
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the number of free entries in a stack.
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @return
> + * The number of free entries in the stack.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_free_count(struct rte_stack *s)
> +{
> + return s->capacity - rte_stack_count(s);
> +}
Same here.
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack
2019-02-22 16:06 ` [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-02-25 10:46 ` Olivier Matz
0 siblings, 0 replies; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 10:46 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:50AM -0600, Gage Eads wrote:
> The new rte_stack library is derived from the mempool handler, so this
> commit removes duplicated code and simplifies the handler by migrating it
> to this new API.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 3/7] test/stack: add stack test
2019-02-22 16:06 ` [dpdk-dev] [PATCH 3/7] test/stack: add stack test Gage Eads
@ 2019-02-25 10:59 ` Olivier Matz
2019-02-28 5:11 ` Eads, Gage
0 siblings, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 10:59 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:51AM -0600, Gage Eads wrote:
> stack_autotest performs positive and negative testing of the stack API, and
> exercises the push and pop datapath functions with all available lcores.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
[...]
> --- /dev/null
> +++ b/test/test/test_stack.c
> @@ -0,0 +1,394 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#include <string.h>
> +
> +#include <rte_lcore.h>
> +#include <rte_malloc.h>
> +#include <rte_random.h>
> +#include <rte_stack.h>
> +
> +#include "test.h"
> +
> +#define STACK_SIZE 4096
> +#define MAX_BULK 32
> +
> +static int
> +test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
> +{
> + void *popped_objs[STACK_SIZE];
> + unsigned int i, ret;
Here, a dynamically sized table is used. In test_stack_basic() below, it
uses a heap-based allocation for the same purpose. I think it would be
more consistent to have the same method for both. I suggest allocating on
the heap to avoid a stack overflow if STACK_SIZE is increased in the
future.
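Something like this untested sketch:

	void **popped_objs;
	unsigned int i, ret;

	popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
	if (popped_objs == NULL)
		return -1;

	/* ... the existing push/pop checks, then: */

	rte_free(popped_objs);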
[...]
> +static int
> +test_stack_basic(void)
> +{
> + struct rte_stack *s = NULL;
> + void **obj_table = NULL;
> + int i, ret = -1;
> +
> + obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
> + if (obj_table == NULL) {
> + printf("[%s():%u] failed to calloc %lu bytes\n",
> + __func__, __LINE__, STACK_SIZE * sizeof(void *));
> + goto fail_test;
> + }
> +
> + for (i = 0; i < STACK_SIZE; i++)
> + obj_table[i] = (void *)(uintptr_t)i;
> +
> + s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
> + if (s == NULL) {
> + printf("[%s():%u] failed to create a stack\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + if (rte_stack_lookup(__func__) != s) {
> + printf("[%s():%u] failed to lookup a stack\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + if (rte_stack_count(s) != 0) {
> + printf("[%s():%u] stack count: %u (expected 0)\n",
> + __func__, __LINE__, rte_stack_count(s));
> + goto fail_test;
> + }
> +
> + if (rte_stack_free_count(s) != STACK_SIZE) {
> + printf("[%s():%u] stack free count: %u (expected %u)\n",
> + __func__, __LINE__, rte_stack_count(s), STACK_SIZE);
> + goto fail_test;
> + }
> +
> + ret = test_stack_push_pop(s, obj_table, 1);
> + if (ret) {
> + printf("[%s():%u] Single object push/pop failed\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + ret = test_stack_push_pop(s, obj_table, MAX_BULK);
> + if (ret) {
> + printf("[%s():%u] Bulk object push/pop failed\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
> + if (ret != 0) {
> + printf("[%s():%u] Excess objects push succeeded\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + ret = rte_stack_pop(s, obj_table, 1);
> + if (ret != 0) {
> + printf("[%s():%u] Empty stack pop succeeded\n",
> + __func__, __LINE__);
> + goto fail_test;
> + }
> +
> + ret = 0;
> +
> +fail_test:
> + rte_stack_free(s);
> +
> + if (obj_table != NULL)
> + rte_free(obj_table);
> +
The if can be removed, since rte_free() safely accepts a NULL pointer.
> +static int
> +test_stack_name_length(void)
> +{
> + char name[RTE_STACK_NAMESIZE + 1];
> + struct rte_stack *s;
> +
> + memset(name, 's', sizeof(name));
> +
> + s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
> + if (s != NULL) {
> + printf("[%s():%u] Failed to prevent long name\n",
> + __func__, __LINE__);
> + return -1;
> + }
Here, "name" is not a valid string (no \0 at the end). It does not hurt because
the length check is properly done in the lib, but we could imagine that the
wrong name is logged by the library on error, which would trigger a crash
here. So I suggest to pass a valid string instead.
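For instance (a sketch; the name is still over-long, so it keeps
exercising the length check, but is now NUL-terminated):

	char name[RTE_STACK_NAMESIZE + 1];

	memset(name, 's', sizeof(name));
	name[RTE_STACK_NAMESIZE] = '\0';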
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test
2019-02-22 16:06 ` [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test Gage Eads
@ 2019-02-25 11:04 ` Olivier Matz
0 siblings, 0 replies; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 11:04 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:52AM -0600, Gage Eads wrote:
> stack_perf_autotest tests the following with one lcore:
> - Cycles to attempt to pop an empty stack
> - Cycles to push then pop a single object
> - Cycles to push then pop a burst of 32 objects
>
> It also tests the cycles to push then pop a burst of 8 and 32 objects with
> the following lcore combinations (if possible):
> - Two hyperthreads
> - Two physical cores
> - Two physical cores on separate NUMA nodes
> - All available lcores
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation
2019-02-22 16:06 ` [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation Gage Eads
@ 2019-02-25 11:28 ` Olivier Matz
[not found] ` <2EC44CCD3517A842B44C82651A5557A14AF13386@fmsmsx118.amr.corp.intel.com>
0 siblings, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 11:28 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:53AM -0600, Gage Eads wrote:
> This commit adds support for a non-blocking (linked list based) stack to
> the stack API. This behavior is selected through a new rte_stack_create()
> flag, STACK_F_NB.
>
> The stack consists of a linked list of elements, each containing a data
> pointer and a next pointer, and an atomic stack depth counter.
>
> The non-blocking push operation enqueues a linked list of pointers by
> pointing the tail of the list to the current stack head, and using a CAS to
> swing the stack head pointer to the head of the list. The operation retries
> if it is unsuccessful (i.e. the list changed between reading the head and
> modifying it), else it adjusts the stack length and returns.
>
> The non-blocking pop operation first reserves num elements by adjusting the
> stack length, to ensure the dequeue operation will succeed without
> blocking. It then dequeues pointers by walking the list -- starting from
> the head -- then swinging the head pointer (using a CAS as well). While
> walking the list, the data pointers are recorded in an object table.
>
> This algorithm stack uses a 128-bit compare-and-swap instruction, which
> atomically updates the stack top pointer and a modification counter, to
> protect against the ABA problem.
>
> The linked list elements themselves are maintained in a non-blocking LIFO,
> and are allocated before stack pushes and freed after stack pops. Since the
> stack has a fixed maximum depth, these elements do not need to be
> dynamically created.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
[...]
> diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
> index 51689cfe1..86fdc0a9b 100644
> --- a/doc/guides/prog_guide/stack_lib.rst
> +++ b/doc/guides/prog_guide/stack_lib.rst
> @@ -9,7 +9,7 @@ pointers.
>
> The stack library provides the following basic operations:
>
> -* Create a uniquely named stack of a user-specified size and using a user-specified socket.
> +* Create a uniquely named stack of a user-specified size and using a user-specified socket, with either lock-based or non-blocking behavior.
>
> * Push and pop a burst of one or more stack objects (pointers). These function are multi-threading safe.
>
Same comment about 80-cols than in the first patch.
[...]
> --- a/lib/librte_stack/rte_stack.c
> +++ b/lib/librte_stack/rte_stack.c
> @@ -26,27 +26,46 @@ static struct rte_tailq_elem rte_stack_tailq = {
> EAL_REGISTER_TAILQ(rte_stack_tailq)
>
> static void
> +nb_lifo_init(struct rte_stack *s, unsigned int count)
> +{
> + struct rte_nb_lifo_elem *elems;
> + unsigned int i;
> +
> + elems = (struct rte_nb_lifo_elem *)&s[1];
> + for (i = 0; i < count; i++)
> + __nb_lifo_push(&s->nb_lifo.free, &elems[i], &elems[i], 1);
> +}
Would it be possible to add:
struct rte_nb_lifo {
/** LIFO list of elements */
struct rte_nb_lifo_list used __rte_cache_aligned;
/** LIFO list of free elements */
struct rte_nb_lifo_list free __rte_cache_aligned;
+ struct rte_nb_lifo_elem elems[];
};
I think it is more consistent with the non-blocking structure.
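With that change, nb_lifo_init() could also drop the pointer arithmetic
on &s[1], e.g. this sketch:

static void
nb_lifo_init(struct rte_stack *s, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		__nb_lifo_push(&s->nb_lifo.free, &s->nb_lifo.elems[i],
			       &s->nb_lifo.elems[i], 1);
}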
[...]
> --- a/lib/librte_stack/rte_stack.h
> +++ b/lib/librte_stack/rte_stack.h
> @@ -29,6 +29,33 @@ extern "C" {
> #define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> sizeof(RTE_STACK_MZ_PREFIX) + 1)
>
> +struct rte_nb_lifo_elem {
> + void *data; /**< Data pointer */
> + struct rte_nb_lifo_elem *next; /**< Next pointer */
> +};
> +
> +struct rte_nb_lifo_head {
> + struct rte_nb_lifo_elem *top; /**< Stack top */
> + uint64_t cnt; /**< Modification counter for avoiding ABA problem */
> +};
> +
> +struct rte_nb_lifo_list {
> + /** List head */
> + struct rte_nb_lifo_head head __rte_aligned(16);
> + /** List len */
> + rte_atomic64_t len;
> +};
> +
> +/* Structure containing two non-blocking LIFO lists: the stack itself and a
> + * list of free linked-list elements.
> + */
> +struct rte_nb_lifo {
> + /** LIFO list of elements */
> + struct rte_nb_lifo_list used __rte_cache_aligned;
> + /** LIFO list of free elements */
> + struct rte_nb_lifo_list free __rte_cache_aligned;
> +};
> +
The names "rte_nb_lifo*" bothers me a bit. I think a more usual name
format is "rte_<module_name>_<struct_name>".
What would you think about names like this?
rte_nb_lifo -> rte_stack_nb
rte_nb_lifo_elem -> rte_stack_nb_elem
rte_nb_lifo_head -> rte_stack_nb_head
rte_nb_lifo_list -> rte_stack_nb_list
rte_lifo -> rte_stack_std
I even wonder if "nonblock", "noblk", or "lockless" shouldn't be used
in place of "nb" (which is also a common abbreviation for number). This
also applies to the STACK_F_NB flag name.
[...]
> /* Structure containing the LIFO, its current length, and a lock for mutual
> * exclusion.
> */
> @@ -48,10 +75,69 @@ struct rte_stack {
> const struct rte_memzone *memzone;
> uint32_t capacity; /**< Usable size of the stack */
> uint32_t flags; /**< Flags supplied at creation */
> - struct rte_lifo lifo; /**< LIFO structure */
> + RTE_STD_C11
> + union {
> + struct rte_nb_lifo nb_lifo; /**< Non-blocking LIFO structure */
> + struct rte_lifo lifo; /**< LIFO structure */
> + };
> } __rte_cache_aligned;
>
> /**
> + * The stack uses non-blocking push and pop functions. This flag is only
> + * supported on x86_64 platforms, currently.
> + */
> +#define STACK_F_NB 0x0001
What about adding the RTE_ prefix?
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_nb_lifo_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
> +{
> + struct rte_nb_lifo_elem *tmp, *first, *last = NULL;
> + unsigned int i;
> +
> + if (unlikely(n == 0))
> + return 0;
> +
> + /* Pop n free elements */
> + first = __nb_lifo_pop(&s->nb_lifo.free, n, NULL, NULL);
> + if (unlikely(first == NULL))
> + return 0;
> +
> + /* Construct the list elements */
> + tmp = first;
> + for (i = 0; i < n; i++) {
> + tmp->data = obj_table[n - i - 1];
> + last = tmp;
> + tmp = tmp->next;
> + }
> +
> + /* Push them to the used list */
> + __nb_lifo_push(&s->nb_lifo.used, first, last, n);
> +
> + return n;
> +}
Here, I didn't get why "last" is not retrieved through __nb_lifo_pop(),
like it's done in rte_nb_lifo_pop(). Is there a reason for that?
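For example, a hypothetical rewrite of the function above:

	/* Pop n free elements and let __nb_lifo_pop() report the last */
	first = __nb_lifo_pop(&s->nb_lifo.free, n, NULL, &last);
	if (unlikely(first == NULL))
		return 0;

	/* The construction loop then only fills in the data pointers */
	for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
		tmp->data = obj_table[n - i - 1];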
[...]
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_c11_mem.h
For the c11 memory model, please consider having an additional reviewer ;)
[...]
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_generic.h
> @@ -0,0 +1,157 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _NB_LIFO_GENERIC_H_
> +#define _NB_LIFO_GENERIC_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_prefetch.h>
> +
> +static __rte_always_inline unsigned int
> +rte_nb_lifo_len(struct rte_stack *s)
> +{
> + /* nb_lifo_push() and nb_lifo_pop() do not update the list's contents
> + * and nb_lifo->len atomically, which can cause the list to appear
> + * shorter than it actually is if this function is called while other
> + * threads are modifying the list.
> + *
> + * However, given the inherently approximate nature of the get_count
> + * callback -- even if the list and its size were updated atomically,
> + * the size could change between when get_count executes and when the
> + * value is returned to the caller -- this is acceptable.
> + *
> + * The nb_lifo->len updates are placed such that the list may appear to
> + * have fewer elements than it does, but will never appear to have more
> + * elements. If the mempool is near-empty to the point that this is a
> + * concern, the user should consider increasing the mempool size.
> + */
> + return (unsigned int)rte_atomic64_read(&s->nb_lifo.used.len);
> +}
> +
> +static __rte_always_inline void
> +__nb_lifo_push(struct rte_nb_lifo_list *lifo,
> + struct rte_nb_lifo_elem *first,
> + struct rte_nb_lifo_elem *last,
> + unsigned int num)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(first);
> + RTE_SET_USED(last);
> + RTE_SET_USED(lifo);
> + RTE_SET_USED(num);
> +#else
> + struct rte_nb_lifo_head old_head;
> + int success;
> +
> + old_head = lifo->head;
> +
> + do {
> + struct rte_nb_lifo_head new_head;
> +
> + /* Swing the top pointer to the first element in the list and
> + * make the last element point to the old top.
> + */
> + new_head.top = first;
> + new_head.cnt = old_head.cnt + 1;
> +
> + last->next = old_head.top;
> +
> + /* Ensure the list entry writes are visible before pushing them
> + * to the stack.
> + */
> + rte_wmb();
> +
> + /* old_head is updated on failure */
> + success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
> + (rte_int128_t *)&old_head,
> + (rte_int128_t *)&new_head,
> + 1, __ATOMIC_RELEASE,
> + __ATOMIC_RELAXED);
> + } while (success == 0);
> +
> + rte_atomic64_add(&lifo->len, num);
> +#endif
> +}
> +
> +static __rte_always_inline struct rte_nb_lifo_elem *
> +__nb_lifo_pop(struct rte_nb_lifo_list *lifo,
> + unsigned int num,
> + void **obj_table,
> + struct rte_nb_lifo_elem **last)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(obj_table);
> + RTE_SET_USED(last);
> + RTE_SET_USED(lifo);
> + RTE_SET_USED(num);
> +
> + return NULL;
> +#else
> + struct rte_nb_lifo_head old_head;
> + int success;
> +
> + /* Reserve num elements, if available */
> + while (1) {
> + uint64_t len = rte_atomic64_read(&lifo->len);
> +
> + /* Does the list contain enough elements? */
> + if (unlikely(len < num))
> + return NULL;
> +
> + if (rte_atomic64_cmpset((volatile uint64_t *)&lifo->len,
> + len, len - num))
> + break;
> + }
> +
Here, accessing the length with a compare-and-set probably costs more
than a standard atomic sub function. I understand that was done for the
reason described above:
The nb_lifo->len updates are placed such that the list may
appear to have fewer elements than it does, but will never
appear to have more elements.
Another strategy could be to use a rte_atomic64_sub() after the effective
pop and change rte_nb_lifo_len() to bound the result to [0:size].
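That is, something along these lines (an untested sketch; the clamp uses
the capacity field of struct rte_stack):

static __rte_always_inline unsigned int
rte_nb_lifo_len(struct rte_stack *s)
{
	int64_t len = rte_atomic64_read(&s->nb_lifo.used.len);

	/* len is now updated after the effective pop, so transient
	 * values may fall outside [0, capacity]; clamp them.
	 */
	if (len < 0)
		return 0;
	if (len > s->capacity)
		return s->capacity;
	return (unsigned int)len;
}

with __nb_lifo_pop() doing, after its head CAS succeeds:

	rte_atomic64_sub(&lifo->len, num);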
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests
2019-02-22 16:06 ` [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests Gage Eads
@ 2019-02-25 11:28 ` Olivier Matz
0 siblings, 0 replies; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 11:28 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:54AM -0600, Gage Eads wrote:
> This commit adds non-blocking stack variants of stack_autotest
> (stack_nb_autotest) and stack_perf_autotest (stack_nb_perf_autotest),
> which differ only in that the non-blocking versions pass the STACK_F_NB
> flag to all rte_stack_create() calls.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler
2019-02-22 16:06 ` [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler Gage Eads
@ 2019-02-25 11:29 ` Olivier Matz
0 siblings, 0 replies; 228+ messages in thread
From: Olivier Matz @ 2019-02-25 11:29 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Fri, Feb 22, 2019 at 10:06:55AM -0600, Gage Eads wrote:
> This commit adds support for non-blocking (linked list based) stack mempool
> handler.
>
> In mempool_perf_autotest the lock-based stack outperforms the
> non-blocking handler for certain lcore/alloc count/free count
> combinations*, however:
> - For applications with preemptible pthreads, a lock-based stack's
> worst-case performance (i.e. one thread being preempted while
> holding the spinlock) is much worse than the non-blocking stack's.
> - Using per-thread mempool caches will largely mitigate the performance
> difference.
>
> *Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
> running on isolcpus cores with a tickless scheduler. The lock-based stack's
> rate_persec was 0.6x-3.5x the non-blocking stack's.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library
2019-02-25 10:43 ` Olivier Matz
@ 2019-02-28 5:10 ` Eads, Gage
0 siblings, 0 replies; 228+ messages in thread
From: Eads, Gage @ 2019-02-28 5:10 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, February 25, 2019 4:43 AM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com; thomas@monjalon.net
> Subject: Re: [PATCH 1/7] stack: introduce rte stack library
>
> Hi Gage,
>
> Please find few comments below.
>
> On Fri, Feb 22, 2019 at 10:06:49AM -0600, Gage Eads wrote:
> > The rte_stack library provides an API for configuration and use of a
> > bounded stack of pointers. Push and pop operations are MT-safe,
> > allowing concurrent access, and the interface supports pushing and
> > popping multiple pointers at a time.
> >
> > The library's interface is modeled after another DPDK data structure,
> > rte_ring, and its lock-based implementation is derived from the stack
> > mempool handler. An upcoming commit will migrate the stack mempool
> > handler to rte_stack.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
>
> [...]
>
> > --- /dev/null
> > +++ b/doc/guides/prog_guide/stack_lib.rst
> > @@ -0,0 +1,26 @@
> > +.. SPDX-License-Identifier: BSD-3-Clause
> > + Copyright(c) 2019 Intel Corporation.
> > +
> > +Stack Library
> > +=============
> > +
> > +DPDK's stack library provides an API for configuration and use of a
> > +bounded stack of pointers.
> > +
> > +The stack library provides the following basic operations:
> > +
> > +* Create a uniquely named stack of a user-specified size and using a user-
> specified socket.
> > +
> > +* Push and pop a burst of one or more stack objects (pointers). These
> function are multi-threading safe.
> > +
> > +* Free a previously created stack.
> > +
> > +* Lookup a pointer to a stack by its name.
> > +
> > +* Query a stack's current depth and number of free entries.
>
> It seems the 80-cols limitation also applies to documentation:
> https://mails.dpdk.org/archives/dev/2019-February/124917.html
>
Sure, will fix in v2.
> [...]
>
> > --- /dev/null
> > +++ b/lib/librte_stack/rte_stack.h
> > @@ -0,0 +1,277 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2019 Intel Corporation */
> > +
> > +/**
> > + * @file rte_stack.h
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * RTE Stack.
> > + * librte_stack provides an API for configuration and use of a bounded stack of
> > + * pointers. Push and pop operations are MT-safe, allowing concurrent access,
> > + * and the interface supports pushing and popping multiple pointers at a time.
> > + */
> > +
> > +#ifndef _RTE_STACK_H_
> > +#define _RTE_STACK_H_
> > +
> > +#ifdef __cplusplus
> > +extern "C" {
> > +#endif
> > +
> > +#include <rte_errno.h>
> > +#include <rte_memzone.h>
> > +#include <rte_spinlock.h>
> > +
> > +#define RTE_TAILQ_STACK_NAME "RTE_STACK"
> > +#define RTE_STACK_MZ_PREFIX "STK_"
> > +/**< The maximum length of a stack name. */
> > +#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> > + sizeof(RTE_STACK_MZ_PREFIX) + 1)
> > +
> > +/* Structure containing the LIFO, its current length, and a lock for
> > +mutual
> > + * exclusion.
> > + */
> > +struct rte_lifo {
> > + rte_spinlock_t lock; /**< LIFO lock */
> > + uint32_t len; /**< LIFO len */
> > + void *objs[]; /**< LIFO pointer table */
> > +};
> > +
> > +/* The RTE stack structure contains the LIFO structure itself, plus
> > +metadata
> > + * such as its name and memzone pointer.
> > + */
> > +struct rte_stack {
> > + /** Name of the stack. */
> > + char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
> > + /** Memzone containing the rte_stack structure */
> > + const struct rte_memzone *memzone;
> > + uint32_t capacity; /**< Usable size of the stack */
> > + uint32_t flags; /**< Flags supplied at creation */
> > + struct rte_lifo lifo; /**< LIFO structure */
> > +} __rte_cache_aligned;
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * @internal Push several objects on the stack (MT-safe)
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @param obj_table
> > + * A pointer to a table of void * pointers (objects).
> > + * @param n
> > + * The number of objects to push on the stack from the obj_table.
> > + * @return
> > + * Actual number of objects pushed (either 0 or *n*).
> > + */
>
> Minor: a dot is missing at the end of the title. There are few in this patch, and
> maybe in next ones.
>
Will fix.
> [...]
>
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the number of used entries in a stack.
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @return
> > + * The number of used entries in the stack.
> > + */
> > +static __rte_always_inline unsigned int __rte_experimental
> > +rte_stack_count(struct rte_stack *s)
> > +{
> > + return (unsigned int)s->lifo.len;
> > +}
>
> The argument can be const.
>
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Return the number of free entries in a stack.
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @return
> > + * The number of free entries in the stack.
> > + */
> > +static __rte_always_inline unsigned int __rte_experimental
> > +rte_stack_free_count(struct rte_stack *s)
> > +{
> > + return s->capacity - rte_stack_count(s);
> > +}
>
> Same here.
Unfortunately the const keyword causes a discarded-qualifiers warning in the non-blocking implementation, due to the call to rte_atomic64_read().
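For illustration, a minimal sketch of the conflict (the wrapper name
nb_stack_count is made up; the field path comes from the patch). With a const
argument, &s->nb_lifo.used.len is const-qualified, but rte_atomic64_read()
takes a plain rte_atomic64_t *, so the call drops the qualifier:

static unsigned int
nb_stack_count(const struct rte_stack *s)
{
	/* gcc warns: discards 'const' qualifier from pointer target type */
	return (unsigned int)rte_atomic64_read(&s->nb_lifo.used.len);
}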
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH 3/7] test/stack: add stack test
2019-02-25 10:59 ` Olivier Matz
@ 2019-02-28 5:11 ` Eads, Gage
0 siblings, 0 replies; 228+ messages in thread
From: Eads, Gage @ 2019-02-28 5:11 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, February 25, 2019 4:59 AM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com; thomas@monjalon.net
> Subject: Re: [PATCH 3/7] test/stack: add stack test
>
> On Fri, Feb 22, 2019 at 10:06:51AM -0600, Gage Eads wrote:
> > stack_autotest performs positive and negative testing of the stack
> > API, and exercises the push and pop datapath functions with all available
> lcores.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
>
> [...]
>
> > --- /dev/null
> > +++ b/test/test/test_stack.c
> > @@ -0,0 +1,394 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2019 Intel Corporation
> > + */
> > +
> > +#include <string.h>
> > +
> > +#include <rte_lcore.h>
> > +#include <rte_malloc.h>
> > +#include <rte_random.h>
> > +#include <rte_stack.h>
> > +
> > +#include "test.h"
> > +
> > +#define STACK_SIZE 4096
> > +#define MAX_BULK 32
> > +
> > +static int
> > +test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
> > +{
> > + void *popped_objs[STACK_SIZE];
> > + unsigned int i, ret;
>
> Here, a dynamically sized table is used. In test_stack_basic() below, a
> heap-based allocation is used for the same purpose. I think it would be more
> consistent to use the same method for both. I suggest allocating on the heap
> to avoid a stack overflow if STACK_SIZE is increased in the future.
>
Sure, I'll make popped_objs dynamically allocated.
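For reference, a minimal sketch of that change (matching what the v2 patch
later in this thread does; the surrounding loops are elided):

	void **popped_objs;

	popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
	if (popped_objs == NULL)
		return -1;

	/* ... push/pop test loops ... */

	rte_free(popped_objs);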
> [...]
>
> > +static int
> > +test_stack_basic(void)
> > +{
> > + struct rte_stack *s = NULL;
> > + void **obj_table = NULL;
> > + int i, ret = -1;
> > +
> > + obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
> > + if (obj_table == NULL) {
> > + printf("[%s():%u] failed to calloc %lu bytes\n",
> > + __func__, __LINE__, STACK_SIZE * sizeof(void *));
> > + goto fail_test;
> > + }
> > +
> > + for (i = 0; i < STACK_SIZE; i++)
> > + obj_table[i] = (void *)(uintptr_t)i;
> > +
> > + s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
> > + if (s == NULL) {
> > + printf("[%s():%u] failed to create a stack\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + if (rte_stack_lookup(__func__) != s) {
> > + printf("[%s():%u] failed to lookup a stack\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + if (rte_stack_count(s) != 0) {
> > + printf("[%s():%u] stack count: %u (expected 0)\n",
> > + __func__, __LINE__, rte_stack_count(s));
> > + goto fail_test;
> > + }
> > +
> > + if (rte_stack_free_count(s) != STACK_SIZE) {
> > + printf("[%s():%u] stack free count: %u (expected %u)\n",
> > + __func__, __LINE__, rte_stack_count(s), STACK_SIZE);
> > + goto fail_test;
> > + }
> > +
> > + ret = test_stack_push_pop(s, obj_table, 1);
> > + if (ret) {
> > + printf("[%s():%u] Single object push/pop failed\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + ret = test_stack_push_pop(s, obj_table, MAX_BULK);
> > + if (ret) {
> > + printf("[%s():%u] Bulk object push/pop failed\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
> > + if (ret != 0) {
> > + printf("[%s():%u] Excess objects push succeeded\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + ret = rte_stack_pop(s, obj_table, 1);
> > + if (ret != 0) {
> > + printf("[%s():%u] Empty stack pop succeeded\n",
> > + __func__, __LINE__);
> > + goto fail_test;
> > + }
> > +
> > + ret = 0;
> > +
> > +fail_test:
> > + rte_stack_free(s);
> > +
> > + if (obj_table != NULL)
> > + rte_free(obj_table);
> > +
>
> The if can be removed.
Ah, I didn't know rte_free() checks for NULL. Will remove.
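For context, rte_free() -- like free() -- is documented to do nothing on a
NULL pointer, and rte_stack_free() returns early on NULL, so the cleanup path
can be unconditional, e.g.:

fail_test:
	rte_stack_free(s);	/* returns early on NULL */
	rte_free(obj_table);	/* rte_free(NULL) is a no-op */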
>
> > +static int
> > +test_stack_name_length(void)
> > +{
> > + char name[RTE_STACK_NAMESIZE + 1];
> > + struct rte_stack *s;
> > +
> > + memset(name, 's', sizeof(name));
> > +
> > + s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
> > + if (s != NULL) {
> > + printf("[%s():%u] Failed to prevent long name\n",
> > + __func__, __LINE__);
> > + return -1;
> > + }
>
> Here, "name" is not a valid string (no \0 at the end). It does not hurt because the
> length check is properly done in the lib, but we could imagine that the wrong
> name is logged by the library on error, which would trigger a crash here. So I
> suggest to pass a valid string instead.
Good catch. Will fix.
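A sketch of the likely fix, in line with the v2 changelog entry "Properly
terminate the name string" (the comments are mine):

	char name[RTE_STACK_NAMESIZE + 1];

	memset(name, 's', sizeof(name));
	name[RTE_STACK_NAMESIZE] = '\0';	/* valid string, one char too long */

	s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
	/* expect NULL with rte_errno == ENAMETOOLONG */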
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] FW: [PATCH 5/7] stack: add non-blocking stack implementation
[not found] ` <2EC44CCD3517A842B44C82651A5557A14AF13386@fmsmsx118.amr.corp.intel.com>
@ 2019-03-01 20:53 ` Eads, Gage
2019-03-01 21:12 ` Thomas Monjalon
0 siblings, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-03-01 20:53 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, February 25, 2019 5:28 AM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com; thomas@monjalon.net
> Subject: Re: [PATCH 5/7] stack: add non-blocking stack implementation
>
> On Fri, Feb 22, 2019 at 10:06:53AM -0600, Gage Eads wrote:
> > This commit adds support for a non-blocking (linked list based)
> > stack to the stack API. This behavior is selected through a new
> > rte_stack_create() flag, STACK_F_NB.
> >
> > The stack consists of a linked list of elements, each containing a
> > data pointer and a next pointer, and an atomic stack depth counter.
> >
> > The non-blocking push operation enqueues a linked list of pointers
> > by pointing the tail of the list to the current stack head, and
> > using a CAS to swing the stack head pointer to the head of the list.
> > The operation retries if it is unsuccessful (i.e. the list changed
> > between reading the head and modifying it), else it adjusts the stack
> > length and returns.
> >
> > The non-blocking pop operation first reserves num elements by
> > adjusting the stack length, to ensure the dequeue operation will
> > succeed without blocking. It then dequeues pointers by walking the
> > list -- starting from the head -- then swinging the head pointer
> > (using a CAS as well). While walking the list, the data pointers are
> > recorded in an object table.
> >
> > This stack algorithm uses a 128-bit compare-and-swap instruction,
> > which atomically updates the stack top pointer and a modification
> > counter, to protect against the ABA problem.
> >
> > The linked list elements themselves are maintained in a non-blocking
> > LIFO, and are allocated before stack pushes and freed after stack
> > pops. Since the stack has a fixed maximum depth, these elements do
> > not need to be dynamically created.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
>
> [...]
>
> > diff --git a/doc/guides/prog_guide/stack_lib.rst
> > b/doc/guides/prog_guide/stack_lib.rst
> > index 51689cfe1..86fdc0a9b 100644
> > --- a/doc/guides/prog_guide/stack_lib.rst
> > +++ b/doc/guides/prog_guide/stack_lib.rst
> > @@ -9,7 +9,7 @@ pointers.
> >
> > The stack library provides the following basic operations:
> >
> > -* Create a uniquely named stack of a user-specified size and using a user-specified socket.
> > +* Create a uniquely named stack of a user-specified size and using a user-specified socket, with either lock-based or non-blocking behavior.
> >
> > * Push and pop a burst of one or more stack objects (pointers). These functions are multi-thread safe.
> >
>
> Same comment about 80-cols than in the first patch.
>
> [...]
>
> > --- a/lib/librte_stack/rte_stack.c
> > +++ b/lib/librte_stack/rte_stack.c
> > @@ -26,27 +26,46 @@ static struct rte_tailq_elem rte_stack_tailq = {
> > EAL_REGISTER_TAILQ(rte_stack_tailq)
> >
> > static void
> > +nb_lifo_init(struct rte_stack *s, unsigned int count)
> > +{
> > + struct rte_nb_lifo_elem *elems;
> > + unsigned int i;
> > +
> > + elems = (struct rte_nb_lifo_elem *)&s[1];
> > + for (i = 0; i < count; i++)
> > + __nb_lifo_push(&s->nb_lifo.free, &elems[i], &elems[i], 1);
> > +}
>
> Would it be possible to add:
>
> struct rte_nb_lifo {
> /** LIFO list of elements */
> struct rte_nb_lifo_list used __rte_cache_aligned;
> /** LIFO list of free elements */
> struct rte_nb_lifo_list free __rte_cache_aligned;
> + struct rte_nb_lifo_elem elems[];
> };
>
> I think it is more consistent with the non-blocking structure.
>
Will do.
> [...]
>
> > --- a/lib/librte_stack/rte_stack.h
> > +++ b/lib/librte_stack/rte_stack.h
> > @@ -29,6 +29,33 @@ extern "C" {
> > #define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> > sizeof(RTE_STACK_MZ_PREFIX) + 1)
> >
> > +struct rte_nb_lifo_elem {
> > + void *data; /**< Data pointer */
> > + struct rte_nb_lifo_elem *next; /**< Next pointer */
> > +};
> > +
> > +struct rte_nb_lifo_head {
> > + struct rte_nb_lifo_elem *top; /**< Stack top */
> > + uint64_t cnt; /**< Modification counter for avoiding ABA problem */
> > +};
> > +
> > +struct rte_nb_lifo_list {
> > + /** List head */
> > + struct rte_nb_lifo_head head __rte_aligned(16);
> > + /** List len */
> > + rte_atomic64_t len;
> > +};
> > +
> > +/* Structure containing two non-blocking LIFO lists: the stack
> > +itself and a
> > + * list of free linked-list elements.
> > + */
> > +struct rte_nb_lifo {
> > + /** LIFO list of elements */
> > + struct rte_nb_lifo_list used __rte_cache_aligned;
> > + /** LIFO list of free elements */
> > + struct rte_nb_lifo_list free __rte_cache_aligned;
> > +};
> > +
>
> The names "rte_nb_lifo*" bothers me a bit. I think a more usual name
> format is "rte_<module_name>_<struct_name>".
>
> What would you think about names like this?
> rte_nb_lifo -> rte_stack_nb
> rte_nb_lifo_elem -> rte_stack_nb_elem
> rte_nb_lifo_head -> rte_stack_nb_head
> rte_nb_lifo_list -> rte_stack_nb_list
> rte_lifo -> rte_stack_std
>
> I even wonder if "nonblock", "noblk", or "lockless" shouldn't be used
> in place of "nb" (which is also a common abbreviation for number).
> This also applies to the STACK_F_NB flag name.
>
How about std and lf (lock-free)?
> [...]
>
> > /* Structure containing the LIFO, its current length, and a lock for mutual
> > * exclusion.
> > */
> > @@ -48,10 +75,69 @@ struct rte_stack {
> > const struct rte_memzone *memzone;
> > uint32_t capacity; /**< Usable size of the stack */
> > uint32_t flags; /**< Flags supplied at creation */
> > - struct rte_lifo lifo; /**< LIFO structure */
> > + RTE_STD_C11
> > + union {
> > + struct rte_nb_lifo nb_lifo; /**< Non-blocking LIFO structure */
> > + struct rte_lifo lifo; /**< LIFO structure */
> > + };
> > } __rte_cache_aligned;
> >
> > /**
> > + * The stack uses non-blocking push and pop functions. This flag is
> > +only
> > + * supported on x86_64 platforms, currently.
> > + */
> > +#define STACK_F_NB 0x0001
>
> What about adding the RTE_ prefix?
I'm fine with either, but there's precedent for flag macros named
<module_name>_*. E.g. RING_F_*, MEMPOOL_F_*, ETH_*, and SERVICE_F_*.
>
> > +static __rte_always_inline unsigned int __rte_experimental
> > +rte_nb_lifo_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
> > +{
> > + struct rte_nb_lifo_elem *tmp, *first, *last = NULL;
> > + unsigned int i;
> > +
> > + if (unlikely(n == 0))
> > + return 0;
> > +
> > + /* Pop n free elements */
> > + first = __nb_lifo_pop(&s->nb_lifo.free, n, NULL, NULL);
> > + if (unlikely(first == NULL))
> > + return 0;
> > +
> > + /* Construct the list elements */
> > + tmp = first;
> > + for (i = 0; i < n; i++) {
> > + tmp->data = obj_table[n - i - 1];
> > + last = tmp;
> > + tmp = tmp->next;
> > + }
> > +
> > + /* Push them to the used list */
> > + __nb_lifo_push(&s->nb_lifo.used, first, last, n);
> > +
> > + return n;
> > +}
>
> Here, I didn't get why "last" is not retrieved through
> __nb_lifo_pop(), like it's done in rte_nb_lifo_pop(). Is there a reason for that?
>
Just a simple oversight -- that works, and I'll change it for v2.
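Roughly, the v2 shape would be the following (a sketch based on the
__nb_lifo_pop() prototype quoted below, not the final code):

	struct rte_nb_lifo_elem *tmp, *first, *last;
	unsigned int i;

	/* Pop n free elements; let __nb_lifo_pop() report the list tail */
	first = __nb_lifo_pop(&s->nb_lifo.free, n, NULL, &last);
	if (unlikely(first == NULL))
		return 0;

	/* Fill in the data pointers while walking the reserved elements */
	for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
		tmp->data = obj_table[n - i - 1];

	/* Push the whole chain onto the used list in one operation */
	__nb_lifo_push(&s->nb_lifo.used, first, last, n);

	return n;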
> [...]
>
> > --- /dev/null
> > +++ b/lib/librte_stack/rte_stack_c11_mem.h
>
> For the c11 memory model, please consider having an additional
> reviewer ;)
No problem, and I'll break out the C11 implementation into a separate patch in
case that makes reviewing it easier.
>
> [...]
>
> > --- /dev/null
> > +++ b/lib/librte_stack/rte_stack_generic.h
> > @@ -0,0 +1,157 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright(c) 2019 Intel Corporation
> > + */
> > +
> > +#ifndef _NB_LIFO_GENERIC_H_
> > +#define _NB_LIFO_GENERIC_H_
> > +
> > +#include <rte_branch_prediction.h>
> > +#include <rte_prefetch.h>
> > +
> > +static __rte_always_inline unsigned int
> > +rte_nb_lifo_len(struct rte_stack *s)
> > +{
> > + /* nb_lifo_push() and nb_lifo_pop() do not update the list's contents
> > + * and nb_lifo->len atomically, which can cause the list to appear
> > + * shorter than it actually is if this function is called while other
> > + * threads are modifying the list.
> > + *
> > + * However, given the inherently approximate nature of the get_count
> > + * callback -- even if the list and its size were updated atomically,
> > + * the size could change between when get_count executes and when the
> > + * value is returned to the caller -- this is acceptable.
> > + *
> > + * The nb_lifo->len updates are placed such that the list may appear to
> > + * have fewer elements than it does, but will never appear to have more
> > + * elements. If the mempool is near-empty to the point that this is a
> > + * concern, the user should consider increasing the mempool size.
> > + */
> > + return (unsigned int)rte_atomic64_read(&s->nb_lifo.used.len);
> > +}
> > +
> > +static __rte_always_inline void
> > +__nb_lifo_push(struct rte_nb_lifo_list *lifo,
> > + struct rte_nb_lifo_elem *first,
> > + struct rte_nb_lifo_elem *last,
> > + unsigned int num)
> > +{
> > +#ifndef RTE_ARCH_X86_64
> > + RTE_SET_USED(first);
> > + RTE_SET_USED(last);
> > + RTE_SET_USED(lifo);
> > + RTE_SET_USED(num);
> > +#else
> > + struct rte_nb_lifo_head old_head;
> > + int success;
> > +
> > + old_head = lifo->head;
> > +
> > + do {
> > + struct rte_nb_lifo_head new_head;
> > +
> > + /* Swing the top pointer to the first element in the list and
> > + * make the last element point to the old top.
> > + */
> > + new_head.top = first;
> > + new_head.cnt = old_head.cnt + 1;
> > +
> > + last->next = old_head.top;
> > +
> > + /* Ensure the list entry writes are visible before pushing them
> > + * to the stack.
> > + */
> > + rte_wmb();
> > +
> > + /* old_head is updated on failure */
> > + success = rte_atomic128_cmpxchg((rte_int128_t *)&lifo->head,
> > + (rte_int128_t *)&old_head,
> > + (rte_int128_t *)&new_head,
> > + 1, __ATOMIC_RELEASE,
> > + __ATOMIC_RELAXED);
> > + } while (success == 0);
> > +
> > + rte_atomic64_add(&lifo->len, num);
> > +#endif
> > +}
> > +
> > +static __rte_always_inline struct rte_nb_lifo_elem *
> > +__nb_lifo_pop(struct rte_nb_lifo_list *lifo,
> > + unsigned int num,
> > + void **obj_table,
> > + struct rte_nb_lifo_elem **last)
> > +{
> > +#ifndef RTE_ARCH_X86_64
> > + RTE_SET_USED(obj_table);
> > + RTE_SET_USED(last);
> > + RTE_SET_USED(lifo);
> > + RTE_SET_USED(num);
> > +
> > + return NULL;
> > +#else
> > + struct rte_nb_lifo_head old_head;
> > + int success;
> > +
> > + /* Reserve num elements, if available */
> > + while (1) {
> > + uint64_t len = rte_atomic64_read(&lifo->len);
> > +
> > + /* Does the list contain enough elements? */
> > + if (unlikely(len < num))
> > + return NULL;
> > +
> > + if (rte_atomic64_cmpset((volatile uint64_t *)&lifo->len,
> > + len, len - num))
> > + break;
> > + }
> > +
>
> Here, accessing the length with a compare-and-set probably costs more
> than a standard atomic sub function. I understand that was done for
> the reason described above:
>
> The nb_lifo->len updates are placed such that the list may
> appear to have fewer elements than it does, but will never
> appear to have more elements.
>
> Another strategy could be to use a rte_atomic64_sub() after the
> effective pop and change rte_nb_lifo_len() to bound the result to [0:size].
It serves a second purpose: if the CAS succeeds, the subsequent do-while loop is
guaranteed to (eventually) succeed because we've effectively reserved num
elements. Otherwise there's the chance that the list runs empty after popping
fewer than num elements.
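Condensed, the reservation phase acts like a semaphore-style decrement (a
sketch distilled from the quoted code above, not a drop-in replacement):

	uint64_t len;

	/* Phase 1: atomically reserve num elements, or bail out */
	for (;;) {
		len = rte_atomic64_read(&lifo->len);
		if (len < num)
			return NULL;	/* not enough elements to pop */
		if (rte_atomic64_cmpset((volatile uint64_t *)&lifo->len,
					len, len - num))
			break;		/* reservation succeeded */
	}

	/* Phase 2: the 128-bit CAS pop loop can now simply retry until it
	 * succeeds -- the reservation guarantees the elements exist.
	 */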
If folks are interested in this patchset, please also consider reviewing the 128-bit
CAS patch here: http://mails.dpdk.org/archives/dev/2019-February/125059.html
Thanks!
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] FW: [PATCH 5/7] stack: add non-blocking stack implementation
2019-03-01 20:53 ` [dpdk-dev] FW: " Eads, Gage
@ 2019-03-01 21:12 ` Thomas Monjalon
2019-03-01 21:29 ` Eads, Gage
0 siblings, 1 reply; 228+ messages in thread
From: Thomas Monjalon @ 2019-03-01 21:12 UTC (permalink / raw)
To: Eads, Gage
Cc: Olivier Matz, dev, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
01/03/2019 21:53, Eads, Gage:
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > On Fri, Feb 22, 2019 at 10:06:53AM -0600, Gage Eads wrote:
> > > +#define STACK_F_NB 0x0001
> >
> > What about adding the RTE_ prefix?
>
> I'm fine with either, but there's precedent for flag macros named
> <module_name>_*. E.g. RING_F_*, MEMPOOL_F_*, ETH_*, and SERVICE_F_*.
They should be fixed.
Every public symbol should be prefixed to avoid namespace conflicts.
At first, we should rename them and keep the old name as an alias.
Later, non-prefixed names should be removed after a deprecation notice.
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] FW: [PATCH 5/7] stack: add non-blocking stack implementation
2019-03-01 21:12 ` Thomas Monjalon
@ 2019-03-01 21:29 ` Eads, Gage
0 siblings, 0 replies; 228+ messages in thread
From: Eads, Gage @ 2019-03-01 21:29 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Olivier Matz, dev, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, March 1, 2019 3:13 PM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: Olivier Matz <olivier.matz@6wind.com>; dev@dpdk.org;
> arybchenko@solarflare.com; Richardson, Bruce <bruce.richardson@intel.com>;
> Ananyev, Konstantin <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com
> Subject: Re: FW: [PATCH 5/7] stack: add non-blocking stack implementation
>
> 01/03/2019 21:53, Eads, Gage:
> > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > On Fri, Feb 22, 2019 at 10:06:53AM -0600, Gage Eads wrote:
> > > > +#define STACK_F_NB 0x0001
> > >
> > > What about adding the RTE_ prefix?
> >
> > I'm fine with either, but there's precedent for flag macros named
> > <module_name>_*. E.g. RING_F_*, MEMPOOL_F_*, ETH_*, and
> SERVICE_F_*.
>
> They should be fixed.
> Every public symbols should be prefixed to avoid namespace conflict.
> At first, we should rename them and keep the old name as an alias.
> Later, non-prefixed names should be removed after a deprecation notice.
>
Ok, will fix.
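A sketch of the rename-plus-alias approach Thomas describes (the new name
matches the v2 changelog; the compatibility alias is illustrative only --
since the API is still experimental, v2 may simply rename outright):

#define RTE_STACK_F_LF 0x0001	/* new, properly prefixed name */

/* hypothetical deprecation alias, removable after a deprecation notice */
#define STACK_F_NB RTE_STACK_F_LF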
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 0/8] Add stack library and new mempool handler
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Subject: [PATCH ...] Add stack library and new mempool handler Gage Eads
` (6 preceding siblings ...)
2019-02-22 16:06 ` [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 1/8] stack: introduce rte stack library Gage Eads
` (8 more replies)
7 siblings, 9 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++
lib/librte_stack/meson.build | 10 +
lib/librte_stack/rte_stack.c | 219 ++++++++++++
lib/librte_stack/rte_stack.h | 395 ++++++++++++++++++++++
lib/librte_stack/rte_stack_c11_mem.h | 175 ++++++++++
lib/librte_stack/rte_stack_generic.h | 151 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
test/test/Makefile | 3 +
test/test/meson.build | 7 +
test/test/test_stack.c | 423 ++++++++++++++++++++++++
test/test/test_stack_perf.c | 356 ++++++++++++++++++++
26 files changed, 1987 insertions(+), 72 deletions(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
create mode 100644 lib/librte_stack/rte_stack_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
create mode 100644 test/test/test_stack.c
create mode 100644 test/test/test_stack_perf.c
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 1/8] stack: introduce rte stack library
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 ++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 23 +++
lib/librte_stack/meson.build | 8 +
lib/librte_stack/rte_stack.c | 194 +++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 274 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
16 files changed, 593 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index eef480ab5..237f05eb2 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -407,6 +407,12 @@ F: drivers/raw/skeleton_rawdev/
F: test/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 7c6da5165..5861eb09c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -980,3 +980,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..0df8848c0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -124,6 +124,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index bef9320c0..dd972a3fe 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -56,6 +56,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 2b0f60d3d..04394f8cf 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -65,6 +65,11 @@ New Features
process.
* Added support for Rx packet types list in a secondary process.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index d6239d27c..d22e2072b 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -111,6 +111,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..e956b6535
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..99f43710e
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c')
+headers = files('rte_stack.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..96dffdf44
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ /* Add padding to avoid false sharing conflicts */
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
+ 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..68023394f
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ if (unlikely(n == 0 || obj_table == NULL))
+ return 0;
+
+ return rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index e8b40f546..0f0e589bc 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -21,7 +21,7 @@ libraries = [ 'compat', # just a header, used for versioning
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 8a4f0f4e5..55568c603 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
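Before the remaining patches, a minimal usage sketch of the rte_stack API
introduced in the patch above (a hypothetical example, not part of the
patchset; the stack name and depth of 1024 are arbitrary):

#include <stdint.h>

#include <rte_lcore.h>
#include <rte_stack.h>

static int
stack_example(void)
{
	void *objs[8], *popped[8];
	struct rte_stack *s;
	unsigned int i;

	for (i = 0; i < 8; i++)
		objs[i] = (void *)(uintptr_t)i;

	s = rte_stack_create("example", 1024, rte_socket_id(), 0);
	if (s == NULL)
		return -rte_errno;

	/* Push and pop are all-or-nothing: each returns n or 0 */
	if (rte_stack_push(s, objs, 8) != 8 ||
	    rte_stack_pop(s, popped, 8) != 8) {
		rte_stack_free(s);
		return -1;
	}

	/* popped[] now holds objs[] in reverse (LIFO) order */
	rte_stack_free(s);
	return 0;
}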
* [dpdk-dev] [PATCH v2 2/8] mempool/stack: convert mempool to use rte stack
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 1/8] stack: introduce rte stack library Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 237f05eb2..7e64f63b6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -284,7 +284,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: test/test/test_mempool*
F: test/test/test_func_reentrancy.c
@@ -412,6 +411,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 3/8] test/stack: add stack test
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 1/8] stack: introduce rte stack library Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 1 +
test/test/Makefile | 2 +
test/test/meson.build | 3 +
test/test/test_stack.c | 410 +++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 test/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 7e64f63b6..58b438414 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -412,6 +412,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: test/test/*stack*
Memory Pool Drivers
diff --git a/test/test/Makefile b/test/test/Makefile
index 89949c2bb..47cf98a3a 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -89,6 +89,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/test/test/meson.build b/test/test/meson.build
index 05e5ddeb0..b00e1201a 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -132,6 +133,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -173,6 +175,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/test/test/test_stack.c b/test/test/test_stack.c
new file mode 100644
index 000000000..92ce05288
--- /dev/null
+++ b/test/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve a random number of stack entries (less than MAX_BULK and
+ * no more than the available slots), then push and pop them.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %lu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 4/8] test/stack: add stack perf test
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 3/8] test/stack: add stack test Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
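For reference, the start-barrier idiom bulk_push_pop() uses so that paired
lcores enter their measurement loops together (condensed from the patch; the
launcher arms the counter to the number of participants before launching):

    /* Launcher: arm the barrier for two participants. */
    rte_atomic32_set(&lcore_barrier, 2);

    /* Each worker: announce arrival, then spin until all participants have
     * arrived, so the rte_rdtsc() measurement windows overlap as closely as
     * possible.
     */
    rte_atomic32_sub(&lcore_barrier, 1);
    while (rte_atomic32_read(&lcore_barrier) != 0)
            rte_pause();

The reported figure is then (end - start) / (iterations * bulk size), i.e.
the average cycle cost per object.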
test/test/Makefile | 1 +
test/test/meson.build | 2 +
test/test/test_stack_perf.c | 343 ++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 test/test/test_stack_perf.c
diff --git a/test/test/Makefile b/test/test/Makefile
index 47cf98a3a..f9536fb31 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -90,6 +90,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/test/test/meson.build b/test/test/meson.build
index b00e1201a..ba3cb6261 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -240,6 +241,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/test/test/test_stack_perf.c b/test/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/test/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * performance between hyperthread siblings, cores on the same socket, and
+ * cores on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 5/8] stack: add lock-free stack implementation
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 4/8] test/stack: add stack perf test Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
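For reference, a condensed view of the push operation described above, taken
from the patch's __rte_stack_lf_push() (x86_64 path). The 128-bit CAS
replaces the {top pointer, modification counter} pair as a single unit,
which is what defeats the ABA problem:

    do {
            /* Swing the top pointer to the new list's first element and
             * make its last element point at the previous top.
             */
            new_head.top = first;
            new_head.cnt = old_head.cnt + 1; /* ABA counter */

            last->next = old_head.top;

            /* old_head is reloaded on failure, so the loop retries
             * against the latest head value.
             */
            success = rte_atomic128_cmp_exchange(
                            (rte_int128_t *)&list->head,
                            (rte_int128_t *)&old_head,
                            (rte_int128_t *)&new_head,
                            1, __ATOMIC_RELEASE,
                            __ATOMIC_RELAXED);
    } while (success == 0);

    rte_atomic64_add(&list->len, num);

The pop side performs its reservation first (a 64-bit compare-and-set on the
length), which ensures enough elements exist for the list walk to eventually
succeed.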
doc/guides/prog_guide/stack_lib.rst | 61 ++++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.c | 41 +++++++--
lib/librte_stack/rte_stack.h | 127 +++++++++++++++++++++++++--
lib/librte_stack/rte_stack_generic.h | 151 +++++++++++++++++++++++++++++++++
7 files changed, 371 insertions(+), 18 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These function
are multi-threading safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this stack algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 04394f8cf..51f0d2121 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -71,6 +71,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index e956b6535..3ecddf033 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -18,6 +18,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99f43710e..99d7f9ec5 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -5,4 +5,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
-headers = files('rte_stack.h')
+headers = files('rte_stack.h',
+ 'rte_stack_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 96dffdf44..8f0361ea1 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -26,27 +26,45 @@ static struct rte_tailq_elem rte_stack_tailq = {
EAL_REGISTER_TAILQ(rte_stack_tailq)
static void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push(&s->stack_lf.free, &elems[i], &elems[i], 1);
+}
+
+static void
rte_stack_std_init(struct rte_stack *s)
{
rte_spinlock_init(&s->stack_std.lock);
}
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
ssize_t sz = sizeof(struct rte_stack);
+ if (flags & RTE_STACK_F_LF)
+ sz += RTE_CACHE_LINE_ROUNDUP(count *
+ sizeof(struct rte_stack_lf_elem));
+ else
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
/* Add padding to avoid false sharing conflicts */
- sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
- 2 * RTE_CACHE_LINE_SIZE;
+ sz += 2 * RTE_CACHE_LINE_SIZE;
return sz;
}
@@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_X86_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 68023394f..e576fb9ce 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,58 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
/**
+ * The stack uses lock-free push and pop functions. This flag is only
+ * supported on x86_64 platforms, currently.
+ */
+#define RTE_STACK_F_LF 0x0001
+
+#include "rte_stack_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
* @internal Push several objects on the stack (MT-safe).
*
* @param s
@@ -108,7 +185,38 @@ rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
{
- return rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_push(s, obj_table, n);
+ else
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.used, n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push(&s->stack_lf.free, first, last, n);
+
+ return n;
}
/**
@@ -170,7 +278,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
if (unlikely(n == 0 || obj_table == NULL))
return 0;
- return rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_pop(s, obj_table, n);
+ else
+ return rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -187,7 +298,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_count(struct rte_stack *s)
{
- return (unsigned int)s->stack_std.len;
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_len(s);
+ else
+ return (unsigned int)s->stack_std.len;
}
/**
@@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
new file mode 100644
index 000000000..5e4cbc38e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_generic.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_GENERIC_H_
+#define _RTE_STACK_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 6/8] stack: add C11 atomic implementation
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
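For reference, the difference is easiest to see in the length update that
follows a push. The generic variant uses the full-barrier rte_atomic64 API,
while the C11 variant states exactly the ordering it needs:

    /* rte_stack_generic.h */
    rte_atomic64_add(&list->len, num);

    /* rte_stack_c11_mem.h: release ordering ensures the element writes are
     * visible before the new length can be observed.
     */
    __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);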
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.h | 4 +
lib/librte_stack/rte_stack_c11_mem.h | 175 +++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 3ecddf033..94a7c1476 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -19,6 +19,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_generic.h
+ rte_stack_generic.h \
+ rte_stack_c11_mem.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99d7f9ec5..7e2d1dbb8 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -6,4 +6,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
headers = files('rte_stack.h',
- 'rte_stack_generic.h')
+ 'rte_stack_generic.h',
+ 'rte_stack_c11_mem.h')
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index e576fb9ce..01a6ae281 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -91,7 +91,11 @@ struct rte_stack {
*/
#define RTE_STACK_F_LF 0x0001
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_c11_mem.h"
+#else
#include "rte_stack_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_c11_mem.h b/lib/librte_stack/rte_stack_c11_mem.h
new file mode 100644
index 000000000..44f9ece6e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_c11_mem.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_C11_MEM_H_
+#define _RTE_STACK_C11_MEM_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = __atomic_load_n(&list->len.cnt,
+ __ATOMIC_ACQUIRE);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED))
+ break;
+ }
+
+#ifndef RTE_ARCH_X86_64
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO elements
+ * are properly ordered with respect to the head pointer read.
+ *
+ * Note that for aarch64, GCC's implementation of __atomic_load_16 in
+ * libatomic uses locks, and so this function should be replaced by
+ * a new function (e.g. "rte_atomic128_load()").
+ */
+ __atomic_load((volatile __int128 *)&list->head,
+ &old_head,
+ __ATOMIC_ACQUIRE);
+#else
+ /* x86-64 does not require an atomic load here; if a torn read occurs,
+ * the CAS will fail and set old_head to the correct/latest value.
+ */
+ old_head = list->head;
+#endif
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_C11_MEM_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 7/8] test/stack: add lock-free stack tests
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
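For reference, selecting the lock-free implementation from application code
is the same one-flag change these tests make (sketch only; the stack name
and size are illustrative):

    struct rte_stack *s;

    /* RTE_STACK_F_LF selects the lock-free push/pop variants. On platforms
     * without a 128-bit compare-and-swap, rte_stack_create() logs an error
     * and returns NULL.
     */
    s = rte_stack_create("app_stack", 4096, rte_socket_id(), RTE_STACK_F_LF);
    if (s == NULL)
            return -1;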
test/test/meson.build | 2 ++
test/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
test/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/test/test/meson.build b/test/test/meson.build
index ba3cb6261..474611291 100644
--- a/test/test/meson.build
+++ b/test/test/meson.build
@@ -177,6 +177,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -242,6 +243,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/test/test/test_stack.c b/test/test/test_stack.c
index 92ce05288..10c84dd37 100644
--- a/test/test/test_stack.c
+++ b/test/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/test/test/test_stack_perf.c b/test/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/test/test/test_stack_perf.c
+++ b/test/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v2 8/8] mempool/stack: add lock-free stack mempool handler
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-03-05 16:42 ` Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-05 16:42 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
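For reference, a sketch of how an application opts in to the new handler via
the standard mempool ops mechanism ("lf_stack" is the ops name registered
below; the pool parameters are illustrative):

    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("lf_pool", 8192, 2048, 256, 0,
                                  rte_socket_id(), 0);
    if (mp == NULL)
            return -rte_errno;

    /* Attach the lock-free stack handler, then populate as usual. */
    if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0)
            return -1;

    if (rte_mempool_populate_default(mp) < 0)
            return -1;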
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 929d76dba..dbcfc328e 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -541,6 +541,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 51f0d2121..f916f34c9 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -74,6 +74,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 0/8] Add stack library and new mempool handler
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
` (8 more replies)
8 siblings, 9 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++
lib/librte_stack/meson.build | 10 +
lib/librte_stack/rte_stack.c | 219 ++++++++++++
lib/librte_stack/rte_stack.h | 395 ++++++++++++++++++++++
lib/librte_stack/rte_stack_c11_mem.h | 175 ++++++++++
lib/librte_stack/rte_stack_generic.h | 151 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
26 files changed, 1987 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
create mode 100644 lib/librte_stack/rte_stack_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-14 8:00 ` Olivier Matz
2019-03-28 23:26 ` Honnappa Nagarahalli
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
8 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
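For illustration (not part of the diff below), a minimal caller of the new
API could look like the following sketch; the stack name, depth, and burst
size are arbitrary examples:

#include <stdint.h>

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

#define EXAMPLE_DEPTH 1024
#define EXAMPLE_BURST 32

static int
stack_example(void)
{
	void *objs[EXAMPLE_BURST];
	struct rte_stack *s;
	unsigned int i;

	for (i = 0; i < EXAMPLE_BURST; i++)
		objs[i] = (void *)(uintptr_t)i;

	/* Create a stack on the caller's socket; flags are reserved (0) */
	s = rte_stack_create("example", EXAMPLE_DEPTH, rte_socket_id(), 0);
	if (s == NULL)
		return -rte_errno;

	/* Push and pop operate on bursts; both return 0 or the full count */
	if (rte_stack_push(s, objs, EXAMPLE_BURST) != EXAMPLE_BURST ||
	    rte_stack_pop(s, objs, EXAMPLE_BURST) != EXAMPLE_BURST) {
		rte_stack_free(s);
		return -1;
	}

	rte_stack_free(s);
	return 0;
}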
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 ++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 23 +++
lib/librte_stack/meson.build | 8 +
lib/librte_stack/rte_stack.c | 194 +++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 274 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
16 files changed, 593 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 097cfb4f3..5fca30823 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -405,6 +405,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 0b09a9348..1b45dea6c 100644
--- a/config/common_base
+++ b/config/common_base
@@ -980,3 +980,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index d95ad566c..0df8848c0 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -124,6 +124,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 4a3e2a7f3..8c649a954 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -77,6 +77,11 @@ New Features
which includes the directory name, lib name, filenames, makefile, docs,
macros, functions, structs and any other strings in the code.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index ffbfd0d94..d941bd849 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..e956b6535
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..99f43710e
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c')
+headers = files('rte_stack.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..96dffdf44
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ /* Add padding to avoid false sharing conflicts */
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
+ 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..7a633deb5
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ if (unlikely(n == 0 || obj_table == NULL))
+ return 0;
+
+ return rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..90115477f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 3c40f9df2..8decfb851 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 2/8] mempool/stack: convert mempool to use rte stack
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 5fca30823..4e088d2bd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -282,7 +282,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -410,6 +409,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-14 8:00 ` Olivier Matz
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
8 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 4e088d2bd..f6593fa9c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -410,6 +410,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index 89949c2bb..47cf98a3a 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -89,6 +89,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 05e5ddeb0..b00e1201a 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -132,6 +133,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -173,6 +175,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..92ce05288
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library proper handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %lu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %lu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 4/8] test/stack: add stack perf test
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index 47cf98a3a..f9536fb31 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index b00e1201a..ba3cb6261 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -240,6 +241,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 4/8] test/stack: add stack perf test Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-14 8:01 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
8 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
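For illustration (not part of the diff below), here is a simplified sketch of
the push loop described above. The element/head types and the use of GCC's
generic __atomic builtins (which need a 16-byte lock-free CAS, e.g. x86-64
cmpxchg16b via -mcx16) are assumptions of the sketch, not the patch's actual
definitions:

#include <stdint.h>

struct lf_elem {
	void *data;
	struct lf_elem *next;
};

struct lf_head {
	struct lf_elem *top;
	uint64_t cnt; /* modification counter to defeat ABA */
} __attribute__((aligned(16)));

static void
lf_push(struct lf_head *list, struct lf_elem *first, struct lf_elem *last)
{
	struct lf_head old, new;

	old = *list; /* a torn read is harmless; the CAS below catches it */

	do {
		last->next = old.top;  /* point the list's tail at the head */
		new.top = first;       /* the list's first elem becomes head */
		new.cnt = old.cnt + 1; /* bump the counter on every update */
		/* On failure, 'old' is reloaded with the current head value
		 * and the loop retries, i.e. the list changed underneath us.
		 */
	} while (!__atomic_compare_exchange(list, &old, &new, 0,
					    __ATOMIC_RELEASE,
					    __ATOMIC_RELAXED));
}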
doc/guides/prog_guide/stack_lib.rst | 61 ++++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.c | 41 +++++++--
lib/librte_stack/rte_stack.h | 127 +++++++++++++++++++++++++--
lib/librte_stack/rte_stack_generic.h | 151 +++++++++++++++++++++++++++++++++
7 files changed, 371 insertions(+), 18 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, the lock-free stack algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 8c649a954..5294ace3b 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -83,6 +83,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index e956b6535..3ecddf033 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -18,6 +18,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99f43710e..99d7f9ec5 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -5,4 +5,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
-headers = files('rte_stack.h')
+headers = files('rte_stack.h',
+ 'rte_stack_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 96dffdf44..8f0361ea1 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -26,27 +26,45 @@ static struct rte_tailq_elem rte_stack_tailq = {
EAL_REGISTER_TAILQ(rte_stack_tailq)
static void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push(&s->stack_lf.free, &elems[i], &elems[i], 1);
+}
+
+static void
rte_stack_std_init(struct rte_stack *s)
{
rte_spinlock_init(&s->stack_std.lock);
}
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
ssize_t sz = sizeof(struct rte_stack);
+ if (flags & RTE_STACK_F_LF)
+ sz += RTE_CACHE_LINE_ROUNDUP(count *
+ sizeof(struct rte_stack_lf_elem));
+ else
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
/* Add padding to avoid false sharing conflicts */
- sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
- 2 * RTE_CACHE_LINE_SIZE;
+ sz += 2 * RTE_CACHE_LINE_SIZE;
return sz;
}
@@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_X86_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 7a633deb5..b484313bb 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,58 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
/**
+ * The stack uses lock-free push and pop functions. This flag is only
+ * supported on x86_64 platforms, currently.
+ */
+#define RTE_STACK_F_LF 0x0001
+
+#include "rte_stack_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
* @internal Push several objects on the stack (MT-safe).
*
* @param s
@@ -108,7 +185,38 @@ rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
{
- return rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_push(s, obj_table, n);
+ else
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.used, n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push(&s->stack_lf.free, first, last, n);
+
+ return n;
}
/**
@@ -170,7 +278,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
if (unlikely(n == 0 || obj_table == NULL))
return 0;
- return rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_pop(s, obj_table, n);
+ else
+ return rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -187,7 +298,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_count(struct rte_stack *s)
{
- return (unsigned int)s->stack_std.len;
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_len(s);
+ else
+ return (unsigned int)s->stack_std.len;
}
/**
@@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
new file mode 100644
index 000000000..5e4cbc38e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_generic.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_GENERIC_H_
+#define _RTE_STACK_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-14 8:04 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
8 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
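As a compile-only sketch of the pairing this enables (the helper names
are invented for illustration; only the builtins and memory orders
mirror the patch), a release-ordered length update on push synchronizes
with an acquire-ordered load on pop, without requiring a full barrier:

#include <stdint.h>

static inline void
lf_len_add_release(uint64_t *len, uint64_t num)
{
	/* Make prior stores (the pushed elements) visible to readers. */
	__atomic_add_fetch(len, num, __ATOMIC_RELEASE);
}

static inline uint64_t
lf_len_load_acquire(const uint64_t *len)
{
	/* Order subsequent element reads after this load. */
	return __atomic_load_n(len, __ATOMIC_ACQUIRE);
}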
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.h | 4 +
lib/librte_stack/rte_stack_c11_mem.h | 175 +++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 3ecddf033..94a7c1476 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -19,6 +19,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_generic.h
+ rte_stack_generic.h \
+ rte_stack_c11_mem.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99d7f9ec5..7e2d1dbb8 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -6,4 +6,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
headers = files('rte_stack.h',
- 'rte_stack_generic.h')
+ 'rte_stack_generic.h',
+ 'rte_stack_c11_mem.h')
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index b484313bb..de16f8fff 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -91,7 +91,11 @@ struct rte_stack {
*/
#define RTE_STACK_F_LF 0x0001
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_c11_mem.h"
+#else
#include "rte_stack_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_c11_mem.h b/lib/librte_stack/rte_stack_c11_mem.h
new file mode 100644
index 000000000..44f9ece6e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_c11_mem.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_C11_MEM_H_
+#define _RTE_STACK_C11_MEM_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = __atomic_load_n(&list->len.cnt,
+ __ATOMIC_ACQUIRE);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED))
+ break;
+ }
+
+#ifndef RTE_ARCH_X86_64
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO elements
+ * are properly ordered with respect to the head pointer read.
+ *
+ * Note that for aarch64, GCC's implementation of __atomic_load_16 in
+ * libatomic uses locks, and so this function should be replaced by
+ * a new function (e.g. "rte_atomic128_load()").
+ */
+ __atomic_load((volatile __int128 *)&list->head,
+ &old_head,
+ __ATOMIC_ACQUIRE);
+#else
+ /* x86-64 does not require an atomic load here; if a torn read occurs,
+ * the CAS will fail and set old_head to the correct/latest value.
+ */
+ old_head = list->head;
+#endif
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_C11_MEM_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 7/8] test/stack: add lock-free stack tests
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
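For reference, the new cases are run the same way as the lock-based
ones: from the dpdk-test binary's interactive prompt, e.g. entering
stack_lf_autotest or stack_lf_perf_autotest at the RTE>> prompt.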
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index ba3cb6261..474611291 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -177,6 +177,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -242,6 +243,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 92ce05288..10c84dd37 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v3 8/8] mempool/stack: add lock-free stack mempool handler
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-03-06 14:45 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
8 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-06 14:45 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for lock-free (linked list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
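As a minimal sketch of attaching the handler (the pool name and sizes
are invented, and error handling is trimmed), an application selects it
by ops name before populating the mempool:

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
lf_stack_pool_example(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("lf_example_pool", 4096, 2048,
				      0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* "lf_stack" is the ops name registered by this patch. */
	if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}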
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 929d76dba..dbcfc328e 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -541,6 +541,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 5294ace3b..90e9bbaa6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -86,6 +86,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
@ 2019-03-14 8:00 ` Olivier Matz
2019-03-14 8:00 ` Olivier Matz
2019-03-28 23:26 ` Honnappa Nagarahalli
1 sibling, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-03-14 8:00 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Wed, Mar 06, 2019 at 08:45:52AM -0600, Gage Eads wrote:
> The rte_stack library provides an API for configuration and use of a
> bounded stack of pointers. Push and pop operations are MT-safe, allowing
> concurrent access, and the interface supports pushing and popping multiple
> pointers at a time.
>
> The library's interface is modeled after another DPDK data structure,
> rte_ring, and its lock-based implementation is derived from the stack
> mempool handler. An upcoming commit will migrate the stack mempool handler
> to rte_stack.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test Gage Eads
@ 2019-03-14 8:00 ` Olivier Matz
2019-03-14 8:00 ` Olivier Matz
0 siblings, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-03-14 8:00 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Wed, Mar 06, 2019 at 08:45:54AM -0600, Gage Eads wrote:
> stack_autotest performs positive and negative testing of the stack API, and
> exercises the push and pop datapath functions with all available lcores.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-03-14 8:01 ` Olivier Matz
2019-03-14 8:01 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
1 sibling, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-03-14 8:01 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Wed, Mar 06, 2019 at 08:45:56AM -0600, Gage Eads wrote:
> This commit adds support for a lock-free (linked list based) stack to the
> stack API. This behavior is selected through a new rte_stack_create() flag,
> RTE_STACK_F_LF.
>
> The stack consists of a linked list of elements, each containing a data
> pointer and a next pointer, and an atomic stack depth counter.
>
> The lock-free push operation enqueues a linked list of pointers by pointing
> the tail of the list to the current stack head, and using a CAS to swing
> the stack head pointer to the head of the list. The operation retries if it
> is unsuccessful (i.e. the list changed between reading the head and
> modifying it), else it adjusts the stack length and returns.
>
> The lock-free pop operation first reserves num elements by adjusting the
> stack length, to ensure the dequeue operation will succeed without
> blocking. It then dequeues pointers by walking the list -- starting from
> the head -- then swinging the head pointer (using a CAS as well). While
> walking the list, the data pointers are recorded in an object table.
>
> The stack algorithm uses a 128-bit compare-and-swap instruction, which
> atomically updates the stack top pointer and a modification counter, to
> protect against the ABA problem.
>
> The linked list elements themselves are maintained in a lock-free LIFO
> list, and are allocated before stack pushes and freed after stack pops.
> Since the stack has a fixed maximum depth, these elements do not need to be
> dynamically created.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-03-14 8:04 ` Olivier Matz
2019-03-14 8:04 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
1 sibling, 1 reply; 228+ messages in thread
From: Olivier Matz @ 2019-03-14 8:04 UTC (permalink / raw)
To: Gage Eads
Cc: dev, arybchenko, bruce.richardson, konstantin.ananyev, gavin.hu,
Honnappa.Nagarahalli, nd, thomas
On Wed, Mar 06, 2019 at 08:45:57AM -0600, Gage Eads wrote:
> This commit adds an implementation of the lock-free stack push, pop, and
> length functions that use __atomic builtins, for systems that benefit from
> the finer-grained memory ordering control.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 0/8] Add stack library and new mempool handler
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
` (9 more replies)
8 siblings, 10 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v4:
- Fix test_stack.c 32-bit build by using %zu format specifier for size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++
lib/librte_stack/meson.build | 10 +
lib/librte_stack/rte_stack.c | 219 ++++++++++++
lib/librte_stack/rte_stack.h | 395 ++++++++++++++++++++++
lib/librte_stack/rte_stack_c11_mem.h | 175 ++++++++++
lib/librte_stack/rte_stack_generic.h | 151 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
26 files changed, 1987 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
create mode 100644 lib/librte_stack/rte_stack_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 1/8] stack: introduce rte stack library
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
2019-03-28 18:00 ` Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 ++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 23 +++
lib/librte_stack/meson.build | 8 +
lib/librte_stack/rte_stack.c | 194 +++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 274 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
16 files changed, 593 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e9ff2b4c2..09fd99dbf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -416,6 +416,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+ functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Look up a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
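In code terms, the push half of that description looks roughly like the sketch below; this is a simplified standalone rendering, not part of the .rst file, and the authoritative version is rte_stack_std_push() in rte_stack.h later in this patch:

/* Simplified sketch of the spinlock-protected push described above.
 * "capacity" is passed explicitly only to keep the sketch small.
 */
#include <stdint.h>
#include <string.h>
#include <rte_spinlock.h>

struct sketch_stack {
	rte_spinlock_t lock;
	uint32_t len;
	void *objs[];
};

static unsigned int
sketch_push(struct sketch_stack *st, void * const *obj_table,
	    unsigned int n, unsigned int capacity)
{
	rte_spinlock_lock(&st->lock);

	/* All-or-nothing: refuse the whole burst if it would overflow. */
	if (st->len + n > capacity) {
		rte_spinlock_unlock(&st->lock);
		return 0;
	}

	memcpy(&st->objs[st->len], obj_table, n * sizeof(void *));
	st->len += n;

	rte_spinlock_unlock(&st->lock);
	return n;
}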
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index d11bb5a2b..525ae616f 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -105,6 +105,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..e956b6535
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,23 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..99f43710e
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c')
+headers = files('rte_stack.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..96dffdf44
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,194 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ /* Add padding to avoid false sharing conflicts */
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
+ 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find the tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..7a633deb5
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,274 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add the elements to the stack */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ if (unlikely(n == 0 || obj_table == NULL))
+ return 0;
+
+ return rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the newly allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Look up a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..90115477f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 2/8] mempool/stack: convert mempool to use rte stack
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 1/8] stack: introduce rte stack library Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
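For context, an application selects this handler through the standard mempool ops interface; a hedged sketch (the pool name and sizes are invented) might look like:

/* Hedged sketch: create an empty mempool and attach the "stack" ops
 * registered by this driver. The name and sizes are illustrative.
 */
#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
make_stack_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("stack_pool", 4096, 2048,
				      0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}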
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 09fd99dbf..13fe49e2b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -293,7 +293,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -421,6 +420,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 3/8] test/stack: add stack test
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
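The multi-core part of the test relies on the usual EAL launch/join pattern; a sketch of its shape follows, where stack_thread_push_pop() and struct test_args refer to definitions in the test file below:

/* Sketch of the launch/join shape used by test_stack_multithreaded();
 * assumes <rte_launch.h> and <rte_lcore.h>, plus the worker function
 * and struct test_args defined in test_stack.c itself.
 */
static int
launch_workers(struct test_args *args, struct rte_stack *s,
	       rte_atomic64_t *size)
{
	unsigned int lcore_id;
	int result = 0;

	RTE_LCORE_FOREACH_SLAVE(lcore_id) {
		args[lcore_id].s = s;
		args[lcore_id].sz = size;
		rte_eal_remote_launch(stack_thread_push_pop,
				      &args[lcore_id], lcore_id);
	}

	/* Join all workers; any non-zero return marks the test failed. */
	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		if (rte_eal_wait_lcore(lcore_id) != 0)
			result = -1;

	return result;
}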
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 13fe49e2b..2842f07ab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -421,6 +421,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index ddb4d09ae..29e88106b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..8392e4e4d
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK - 1, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
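For quick reference, here is a minimal sketch of the API exercised by the test
above (illustrative code, not part of the patch; it assumes an initialized EAL
and abbreviates error handling):

#include <rte_lcore.h>
#include <rte_stack.h>

static int
stack_usage_sketch(void)
{
	void *objs[8] = {0};
	struct rte_stack *s;
	unsigned int n;

	/* Create a stack holding up to 1024 pointers on the caller's socket */
	s = rte_stack_create("example", 1024, rte_socket_id(), 0);
	if (s == NULL)
		return -1; /* rte_errno describes the failure */

	/* Push is all-or-nothing: returns 8 here, or 0 if the stack is full */
	if (rte_stack_push(s, objs, 8) != 8) {
		rte_stack_free(s);
		return -1;
	}

	/* Pop returns the number of pointers actually dequeued (0 if empty) */
	n = rte_stack_pop(s, objs, 8);

	rte_stack_free(s);

	return n == 8 ? 0 : -1;
}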
* [dpdk-dev] [PATCH v4 4/8] test/stack: add stack perf test
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 3/8] test/stack: add stack test Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 8 and 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
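As a rough guide to reading the reported numbers: each measurement divides
elapsed TSC cycles by the total number of object operations, i.e.
avg = (end - start) / (iterations * bulk_size). With illustrative (not
measured) figures, 64,000,000 cycles over 1,000,000 iterations at bulk size 32
would be printed as 2.00 average cycles per object push/pop.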
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 29e88106b..2699a3f5b 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
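To try the tests from this series, the usual DPDK test-application flow
applies: build and run the app/test binary (the exact path depends on whether
make or meson is used) and enter stack_autotest or stack_perf_autotest at the
RTE>> prompt -- both names come from the REGISTER_TEST_COMMAND() calls in
these patches.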
* [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 4/8] test/stack: add stack perf test Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
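As a usage illustration (a sketch rather than part of the patch; the name,
size, and fallback policy are arbitrary), the lock-free behavior is opt-in at
creation time, and a caller can fall back to the lock-based variant on
platforms without a 128-bit compare-and-swap:

	struct rte_stack *s;

	/* Request the lock-free implementation */
	s = rte_stack_create("lf_example", 1024, rte_socket_id(),
			     RTE_STACK_F_LF);
	if (s == NULL) {
		/* e.g. unsupported platform: fall back to the lock-based stack */
		s = rte_stack_create("lf_example", 1024, rte_socket_id(), 0);
	}

The datapath calls are unchanged: rte_stack_push() and rte_stack_pop()
dispatch on the flags recorded in the stack at creation time.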
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 ++++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.c | 41 +++++++--
lib/librte_stack/rte_stack.h | 127 +++++++++++++++++++++++++--
lib/librte_stack/rte_stack_generic.h | 151 +++++++++++++++++++++++++++++++++
7 files changed, 371 insertions(+), 18 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+------------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this stack algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 525ae616f..96e851e13 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -111,6 +111,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index e956b6535..3ecddf033 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -18,6 +18,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99f43710e..99d7f9ec5 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -5,4 +5,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
-headers = files('rte_stack.h')
+headers = files('rte_stack.h',
+ 'rte_stack_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 96dffdf44..8f0361ea1 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -26,27 +26,45 @@ static struct rte_tailq_elem rte_stack_tailq = {
EAL_REGISTER_TAILQ(rte_stack_tailq)
static void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push(&s->stack_lf.free, &elems[i], &elems[i], 1);
+}
+
+static void
rte_stack_std_init(struct rte_stack *s)
{
rte_spinlock_init(&s->stack_std.lock);
}
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
ssize_t sz = sizeof(struct rte_stack);
+ if (flags & RTE_STACK_F_LF)
+ sz += RTE_CACHE_LINE_ROUNDUP(count *
+ sizeof(struct rte_stack_lf_elem));
+ else
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
/* Add padding to avoid false sharing conflicts */
- sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
- 2 * RTE_CACHE_LINE_SIZE;
+ sz += 2 * RTE_CACHE_LINE_SIZE;
return sz;
}
@@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_X86_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 7a633deb5..b484313bb 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,58 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
/**
+ * The stack uses lock-free push and pop functions. This flag is only
+ * supported on x86_64 platforms, currently.
+ */
+#define RTE_STACK_F_LF 0x0001
+
+#include "rte_stack_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
* @internal Push several objects on the stack (MT-safe).
*
* @param s
@@ -108,7 +185,38 @@ rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
{
- return rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_push(s, obj_table, n);
+ else
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.used, n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push(&s->stack_lf.free, first, last, n);
+
+ return n;
}
/**
@@ -170,7 +278,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
if (unlikely(n == 0 || obj_table == NULL))
return 0;
- return rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_pop(s, obj_table, n);
+ else
+ return rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -187,7 +298,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_count(struct rte_stack *s)
{
- return (unsigned int)s->stack_std.len;
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_len(s);
+ else
+ return (unsigned int)s->stack_std.len;
}
/**
@@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
new file mode 100644
index 000000000..5e4cbc38e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_generic.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_GENERIC_H_
+#define _RTE_STACK_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
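To make the ABA hazard described above concrete, consider a hypothetical
interleaving (element names and counter values are illustrative):

1. The stack is A -> B -> C. Thread 1 reads head {top = A, cnt = 5} and
   prepares new head {top = B, cnt = 6} in order to pop A.
2. Thread 1 is preempted. Thread 2 pops A, pops B, then pushes A back: the
   stack is now A -> C and the head is {top = A, cnt = 8}.
3. Thread 1 resumes and issues its 128-bit CAS. If only the top pointer were
   compared, {top = A} would still match and the CAS would wrongly install B
   -- an element no longer on the stack -- as the new top. Because the
   compare covers {A, 5} against the current {A, 8}, the CAS fails, and
   thread 1 rereads the head and retries safely.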
* [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm stack uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 ++++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.c | 41 +++++++--
lib/librte_stack/rte_stack.h | 127 +++++++++++++++++++++++++--
lib/librte_stack/rte_stack_generic.h | 151 +++++++++++++++++++++++++++++++++
7 files changed, 371 insertions(+), 18 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-threading safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, the lock-free stack uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 525ae616f..96e851e13 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -111,6 +111,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index e956b6535..3ecddf033 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -18,6 +18,7 @@ LIBABIVER := 1
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
-SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99f43710e..99d7f9ec5 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -5,4 +5,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
-headers = files('rte_stack.h')
+headers = files('rte_stack.h',
+ 'rte_stack_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 96dffdf44..8f0361ea1 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -26,27 +26,45 @@ static struct rte_tailq_elem rte_stack_tailq = {
EAL_REGISTER_TAILQ(rte_stack_tailq)
static void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push(&s->stack_lf.free, &elems[i], &elems[i], 1);
+}
+
+static void
rte_stack_std_init(struct rte_stack *s)
{
rte_spinlock_init(&s->stack_std.lock);
}
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
ssize_t sz = sizeof(struct rte_stack);
+ if (flags & RTE_STACK_F_LF)
+ sz += RTE_CACHE_LINE_ROUNDUP(count *
+ sizeof(struct rte_stack_lf_elem));
+ else
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
/* Add padding to avoid false sharing conflicts */
- sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
- 2 * RTE_CACHE_LINE_SIZE;
+ sz += 2 * RTE_CACHE_LINE_SIZE;
return sz;
}
@@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_X86_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 7a633deb5..b484313bb 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,58 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
+#include "rte_stack_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
* @internal Push several objects on the stack (MT-safe).
*
* @param s
@@ -108,7 +185,38 @@ rte_stack_std_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
{
- return rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_push(s, obj_table, n);
+ else
+ return rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop(&s->stack_lf.used, n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push(&s->stack_lf.free, first, last, n);
+
+ return n;
}
/**
@@ -170,7 +278,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
if (unlikely(n == 0 || obj_table == NULL))
return 0;
- return rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_pop(s, obj_table, n);
+ else
+ return rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -187,7 +298,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
static __rte_always_inline unsigned int __rte_experimental
rte_stack_count(struct rte_stack *s)
{
- return (unsigned int)s->stack_std.len;
+ if (s->flags & RTE_STACK_F_LF)
+ return rte_stack_lf_len(s);
+ else
+ return (unsigned int)s->stack_std.len;
}
/**
@@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
new file mode 100644
index 000000000..5e4cbc38e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_generic.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_GENERIC_H_
+#define _RTE_STACK_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 6/8] stack: add C11 atomic implementation
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
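As a sketch of the ordering contract this relies on, consider the
following standalone fragment (simplified, with made-up names; the real
code applies the same release/acquire pairing to the 128-bit head CAS
and the length counter):

#include <stdint.h>

static void *slot;   /* stands in for a list element's data field */
static uint64_t len; /* stands in for the list's length counter */

/* Producer: write the element, then publish with RELEASE so the element
 * write cannot be reordered after the counter update.
 */
static void
publish(void *data)
{
	slot = data;
	__atomic_add_fetch(&len, 1, __ATOMIC_RELEASE);
}

/* Consumer: read the counter with ACQUIRE so the element read below
 * cannot be hoisted above it. (A real pop also claims elements with a
 * CAS; this only illustrates the acquire/release pairing.)
 */
static void *
observe(void)
{
	if (__atomic_load_n(&len, __ATOMIC_ACQUIRE) == 0)
		return NULL;
	return slot;
}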
Signed-off-by: Gage Eads <gage.eads@intel.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack.h | 4 +
lib/librte_stack/rte_stack_c11_mem.h | 175 +++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 3ecddf033..94a7c1476 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -19,6 +19,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_generic.h
+ rte_stack_generic.h \
+ rte_stack_c11_mem.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 99d7f9ec5..7e2d1dbb8 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -6,4 +6,5 @@ allow_experimental_apis = true
version = 1
sources = files('rte_stack.c')
headers = files('rte_stack.h',
- 'rte_stack_generic.h')
+ 'rte_stack_generic.h',
+ 'rte_stack_c11_mem.h')
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index b484313bb..de16f8fff 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -91,7 +91,11 @@ struct rte_stack {
*/
#define RTE_STACK_F_LF 0x0001
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_c11_mem.h"
+#else
#include "rte_stack_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_c11_mem.h b/lib/librte_stack/rte_stack_c11_mem.h
new file mode 100644
index 000000000..44f9ece6e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_c11_mem.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_C11_MEM_H_
+#define _RTE_STACK_C11_MEM_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+rte_stack_lf_len(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = __atomic_load_n(&list->len.cnt,
+ __ATOMIC_ACQUIRE);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_RELAXED,
+ __ATOMIC_RELAXED))
+ break;
+ }
+
+#ifndef RTE_ARCH_X86_64
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO elements
+ * are properly ordered with respect to the head pointer read.
+ *
+ * Note that for aarch64, GCC's implementation of __atomic_load_16 in
+ * libatomic uses locks, and so this function should be replaced by
+ * a new function (e.g. "rte_atomic128_load()").
+ */
+ __atomic_load((volatile __int128 *)&list->head,
+ &old_head,
+ __ATOMIC_ACQUIRE);
+#else
+ /* x86-64 does not require an atomic load here; if a torn read occurs,
+ * the CAS will fail and set old_head to the correct/latest value.
+ */
+ old_head = list->head;
+#endif
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_C11_MEM_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 7/8] test/stack: add lock-free stack tests
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 2699a3f5b..57d8a5b55 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 8392e4e4d..f199136aa 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v4 8/8] mempool/stack: add lock-free stack mempool handler
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-03-28 18:00 ` Gage Eads
2019-03-28 18:00 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-03-28 18:00 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
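For reference, a minimal sketch of selecting the new handler from
application code (illustrative element counts and sizes; it relies only
on existing rte_mempool APIs plus the "lf_stack" ops name registered
below):

#include <rte_mempool.h>

/* Create a mempool backed by the lock-free stack handler. A non-zero
 * per-lcore cache (256 here) reduces stack accesses, as noted above.
 */
static struct rte_mempool *
lf_stack_pool_create(const char *name, int socket_id)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty(name, 4096, 2048, 256, 0,
				      socket_id, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}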
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 2361c3b8f..d22f72f65 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -563,6 +563,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 96e851e13..9e56d1058 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -114,6 +114,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
2019-03-14 8:00 ` Olivier Matz
@ 2019-03-28 23:26 ` Honnappa Nagarahalli
2019-03-28 23:26 ` Honnappa Nagarahalli
2019-03-29 19:23 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-28 23:26 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
Hi Gage,
Apologies for the late comments.
> -----Original Message-----
> From: Gage Eads <gage.eads@intel.com>
> Sent: Wednesday, March 6, 2019 8:46 AM
> To: dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com;
> bruce.richardson@intel.com; konstantin.ananyev@intel.com; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>;
> thomas@monjalon.net
> Subject: [PATCH v3 1/8] stack: introduce rte stack library
>
> The rte_stack library provides an API for configuration and use of a bounded
> stack of pointers. Push and pop operations are MT-safe, allowing concurrent
> access, and the interface supports pushing and popping multiple pointers at a
> time.
>
> The library's interface is modeled after another DPDK data structure, rte_ring,
> and its lock-based implementation is derived from the stack mempool
> handler. An upcoming commit will migrate the stack mempool handler to
> rte_stack.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> ---
> MAINTAINERS | 6 +
> config/common_base | 5 +
> doc/api/doxy-api-index.md | 1 +
> doc/api/doxy-api.conf.in | 1 +
> doc/guides/prog_guide/index.rst | 1 +
> doc/guides/prog_guide/stack_lib.rst | 28 ++++
> doc/guides/rel_notes/release_19_05.rst | 5 +
> lib/Makefile | 2 +
> lib/librte_stack/Makefile | 23 +++
> lib/librte_stack/meson.build | 8 +
> lib/librte_stack/rte_stack.c | 194 +++++++++++++++++++++++
> lib/librte_stack/rte_stack.h | 274 +++++++++++++++++++++++++++++++++
> lib/librte_stack/rte_stack_pvt.h | 34 ++++
> lib/librte_stack/rte_stack_version.map | 9 ++
> lib/meson.build | 2 +-
> mk/rte.app.mk | 1 +
> 16 files changed, 593 insertions(+), 1 deletion(-)
> create mode 100644 doc/guides/prog_guide/stack_lib.rst
> create mode 100644 lib/librte_stack/Makefile
> create mode 100644 lib/librte_stack/meson.build
> create mode 100644 lib/librte_stack/rte_stack.c
> create mode 100644 lib/librte_stack/rte_stack.h
> create mode 100644 lib/librte_stack/rte_stack_pvt.h
> create mode 100644 lib/librte_stack/rte_stack_version.map
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 097cfb4f3..5fca30823 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -405,6 +405,12 @@ F: drivers/raw/skeleton_rawdev/
> F: app/test/test_rawdev.c
> F: doc/guides/prog_guide/rawdev.rst
>
> +Stack API - EXPERIMENTAL
> +M: Gage Eads <gage.eads@intel.com>
> +M: Olivier Matz <olivier.matz@6wind.com>
> +F: lib/librte_stack/
> +F: doc/guides/prog_guide/stack_lib.rst
> +
>
> Memory Pool Drivers
> -------------------
> diff --git a/config/common_base b/config/common_base
> index 0b09a9348..1b45dea6c 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -980,3 +980,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
> # Compile the eventdev application
> #
> CONFIG_RTE_APP_EVENTDEV=y
> +
> +#
> +# Compile librte_stack
> +#
> +CONFIG_RTE_LIBRTE_STACK=y
> diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index
> d95ad566c..0df8848c0 100644
> --- a/doc/api/doxy-api-index.md
> +++ b/doc/api/doxy-api-index.md
> @@ -124,6 +124,7 @@ The public API headers are grouped by topics:
> [mbuf] (@ref rte_mbuf.h),
> [mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
> [ring] (@ref rte_ring.h),
> + [stack] (@ref rte_stack.h),
> [tailq] (@ref rte_tailq.h),
> [bitmap] (@ref rte_bitmap.h)
>
> diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index
> a365e669b..7722fc3e9 100644
> --- a/doc/api/doxy-api.conf.in
> +++ b/doc/api/doxy-api.conf.in
> @@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
> @TOPDIR@/lib/librte_ring \
> @TOPDIR@/lib/librte_sched \
> @TOPDIR@/lib/librte_security \
> + @TOPDIR@/lib/librte_stack \
> @TOPDIR@/lib/librte_table \
> @TOPDIR@/lib/librte_telemetry \
> @TOPDIR@/lib/librte_timer \
> diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
> index 6726b1e8d..f4f60862f 100644
> --- a/doc/guides/prog_guide/index.rst
> +++ b/doc/guides/prog_guide/index.rst
> @@ -55,6 +55,7 @@ Programmer's Guide
> metrics_lib
> bpf_lib
> ipsec_lib
> + stack_lib
> source_org
> dev_kit_build_system
> dev_kit_root_make_help
> diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
> new file mode 100644
> index 000000000..25a8cc38a
> --- /dev/null
> +++ b/doc/guides/prog_guide/stack_lib.rst
> @@ -0,0 +1,28 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright(c) 2019 Intel Corporation.
> +
> +Stack Library
> +=============
> +
> +DPDK's stack library provides an API for configuration and use of a
> +bounded stack of pointers.
> +
> +The stack library provides the following basic operations:
> +
> +* Create a uniquely named stack of a user-specified size and using a
> + user-specified socket.
> +
> +* Push and pop a burst of one or more stack objects (pointers). These
> +  functions are multi-thread safe.
> +
> +* Free a previously created stack.
> +
> +* Lookup a pointer to a stack by its name.
> +
> +* Query a stack's current depth and number of free entries.
> +
> +Implementation
> +~~~~~~~~~~~~~~
> +
> +The stack consists of a contiguous array of pointers, a current index,
> +and a spinlock. Accesses to the stack are made multi-thread safe by the
> spinlock.
> diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
> index 4a3e2a7f3..8c649a954 100644
> --- a/doc/guides/rel_notes/release_19_05.rst
> +++ b/doc/guides/rel_notes/release_19_05.rst
> @@ -77,6 +77,11 @@ New Features
> which includes the directory name, lib name, filenames, makefile, docs,
> macros, functions, structs and any other strings in the code.
>
> +* **Added Stack API.**
> +
> + Added a new stack API for configuration and use of a bounded stack of
> + pointers. The API provides MT-safe push and pop operations that can
> + operate on one or more pointers per operation.
>
> Removed Items
> -------------
> diff --git a/lib/Makefile b/lib/Makefile
> index ffbfd0d94..d941bd849 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
> DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev
> librte_security
>  DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
>  DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
> +DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
> +DEPDIRS-librte_stack := librte_eal
>
> ifeq ($(CONFIG_RTE_EXEC_ENV_LINUXAPP),y)
>  DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
> diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
> new file mode 100644
> index 000000000..e956b6535
> --- /dev/null
> +++ b/lib/librte_stack/Makefile
> @@ -0,0 +1,23 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +include $(RTE_SDK)/mk/rte.vars.mk
> +
> +# library name
> +LIB = librte_stack.a
> +
> +CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
> +CFLAGS += -DALLOW_EXPERIMENTAL_API
> +LDLIBS += -lrte_eal
> +
> +EXPORT_MAP := rte_stack_version.map
> +
> +LIBABIVER := 1
> +
> +# all source are stored in SRCS-y
> +SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
> +
> +# install includes
> +SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h
> +
> +include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
> new file mode 100644
> index 000000000..99f43710e
> --- /dev/null
> +++ b/lib/librte_stack/meson.build
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +allow_experimental_apis = true
> +
> +version = 1
> +sources = files('rte_stack.c')
> +headers = files('rte_stack.h')
> diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
> new file mode 100644
> index 000000000..96dffdf44
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack.c
> @@ -0,0 +1,194 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#include <string.h>
> +
> +#include <rte_atomic.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_memzone.h>
> +#include <rte_rwlock.h>
> +#include <rte_tailq.h>
> +
> +#include "rte_stack.h"
> +#include "rte_stack_pvt.h"
> +
> +int stack_logtype;
> +
> +TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem rte_stack_tailq = {
> + .name = RTE_TAILQ_STACK_NAME,
> +};
> +EAL_REGISTER_TAILQ(rte_stack_tailq)
> +
> +static void
> +rte_stack_std_init(struct rte_stack *s)
> +{
> + rte_spinlock_init(&s->stack_std.lock);
> +}
> +
> +static void
> +rte_stack_init(struct rte_stack *s)
> +{
> + memset(s, 0, sizeof(*s));
> +
> + rte_stack_std_init(s);
> +}
> +
> +static ssize_t
> +rte_stack_get_memsize(unsigned int count)
> +{
> + ssize_t sz = sizeof(struct rte_stack);
> +
> + /* Add padding to avoid false sharing conflicts */
> + sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
> + 2 * RTE_CACHE_LINE_SIZE;
I did not understand how false sharing arises here, or how this padding prevents it. A more verbose comment would help.
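As a strawman for the kind of comment I have in mind (my guess at the
intent, not necessarily the author's reasoning):

	/* Add padding to avoid false sharing: round the object table up to
	 * a cache-line multiple and reserve one pad line on either side,
	 * so the lock/len fields and whatever the allocator places next to
	 * this structure never share a cache line with the hot object
	 * table entries.
	 */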
> +
> + return sz;
> +}
> +
> +struct rte_stack *
> +rte_stack_create(const char *name, unsigned int count, int socket_id,
> + uint32_t flags)
> +{
> + char mz_name[RTE_MEMZONE_NAMESIZE];
> + struct rte_stack_list *stack_list;
> + const struct rte_memzone *mz;
> + struct rte_tailq_entry *te;
> + struct rte_stack *s;
> + unsigned int sz;
> + int ret;
> +
> + RTE_SET_USED(flags);
> +
> + sz = rte_stack_get_memsize(count);
> +
> + ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
> + RTE_STACK_MZ_PREFIX, name);
> + if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> + rte_errno = ENAMETOOLONG;
> + return NULL;
> + }
> +
> + te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL) {
> + STACK_LOG_ERR("Cannot reserve memory for tailq\n");
> + rte_errno = ENOMEM;
> + return NULL;
> + }
> +
> + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> +
I think there is a need to check if a stack with the same name exists already.
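Something along these lines while the write lock is held (untested
sketch; assumes stack_list is resolved before the memzone reservation,
and 'it' is a new local):

	struct rte_tailq_entry *it;

	TAILQ_FOREACH(it, stack_list, next) {
		struct rte_stack *tmp = it->data;

		if (strncmp(name, tmp->name, RTE_STACK_NAMESIZE) == 0)
			break;
	}
	if (it != NULL) {
		/* A stack with this name already exists */
		rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
		rte_free(te);
		rte_errno = EEXIST;
		return NULL;
	}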
> + mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
> + 0, __alignof__(*s));
> + if (mz == NULL) {
> + STACK_LOG_ERR("Cannot reserve stack memzone!\n");
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> + rte_free(te);
> + return NULL;
> + }
> +
> + s = mz->addr;
> +
> + rte_stack_init(s);
> +
> + /* Store the name for later lookups */
> + ret = snprintf(s->name, sizeof(s->name), "%s", name);
> + if (ret < 0 || ret >= (int)sizeof(s->name)) {
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> + rte_errno = ENAMETOOLONG;
> + rte_free(te);
> + rte_memzone_free(mz);
> + return NULL;
> + }
> +
> + s->memzone = mz;
> + s->capacity = count;
> + s->flags = flags;
> +
> + te->data = s;
> +
> + stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
> +
> + TAILQ_INSERT_TAIL(stack_list, te, next);
> +
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> + return s;
> +}
> +
> +void
> +rte_stack_free(struct rte_stack *s)
> +{
> + struct rte_stack_list *stack_list;
> + struct rte_tailq_entry *te;
> +
> + if (s == NULL)
> + return;
> +
Would adding a check that the stack's length is 0 help catch cases where a stack is freed while it still holds objects?
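For example (sketch only -- the count is read without the lock here, so
this could only ever be a best-effort warning):

	if (rte_stack_count(s) != 0)
		STACK_LOG_WARN("Freeing a stack that still holds %u objects\n",
			       rte_stack_count(s));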
> + stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
> + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> +
> + /* find out tailq entry */
> + TAILQ_FOREACH(te, stack_list, next) {
> + if (te->data == s)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> + return;
> + }
> +
> + TAILQ_REMOVE(stack_list, te, next);
> +
> + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> + rte_free(te);
> +
> + rte_memzone_free(s->memzone);
> +}
> +
> +struct rte_stack *
> +rte_stack_lookup(const char *name)
> +{
> + struct rte_stack_list *stack_list;
> + struct rte_tailq_entry *te;
> + struct rte_stack *r = NULL;
> +
> + if (name == NULL) {
> + rte_errno = EINVAL;
> + return NULL;
> + }
> +
> + stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
> +
> + rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
> +
> + TAILQ_FOREACH(te, stack_list, next) {
> + r = (struct rte_stack *) te->data;
> + if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
> + break;
> + }
> +
> + rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return r;
> +}
> +
> +RTE_INIT(librte_stack_init_log)
> +{
> + stack_logtype = rte_log_register("lib.stack");
> + if (stack_logtype >= 0)
> +		rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
> +}
> diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
> new file mode 100644
> index 000000000..7a633deb5
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack.h
> @@ -0,0 +1,274 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +/**
> + * @file rte_stack.h
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * RTE Stack
> + *
> + * librte_stack provides an API for configuration and use of a bounded
> + * stack of pointers. Push and pop operations are MT-safe, allowing
> + * concurrent access, and the interface supports pushing and popping
> + * multiple pointers at a time.
> + */
> +
> +#ifndef _RTE_STACK_H_
> +#define _RTE_STACK_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <rte_errno.h>
> +#include <rte_memzone.h>
> +#include <rte_spinlock.h>
> +
> +#define RTE_TAILQ_STACK_NAME "RTE_STACK"
> +#define RTE_STACK_MZ_PREFIX "STK_"
Nit: "STACK_" would be easier to read when debugging.
> +/** The maximum length of a stack name. */
> +#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> +			   sizeof(RTE_STACK_MZ_PREFIX) + 1)
> +
> +/* Structure containing the LIFO, its current length, and a lock for
> + * mutual exclusion.
> + */
> +struct rte_stack_std {
> + rte_spinlock_t lock; /**< LIFO lock */
> + uint32_t len; /**< LIFO len */
> +	void *objs[]; /**< LIFO pointer table */
> +};
> +
> +/* The RTE stack structure contains the LIFO structure itself, plus
> + * metadata such as its name and memzone pointer.
> + */
> +struct rte_stack {
> + /** Name of the stack. */
> + char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
> + /** Memzone containing the rte_stack structure. */
> + const struct rte_memzone *memzone;
> + uint32_t capacity; /**< Usable size of the stack. */
> + uint32_t flags; /**< Flags supplied at creation. */
> +	struct rte_stack_std stack_std; /**< LIFO structure. */
> +} __rte_cache_aligned;
> +
> +/**
> + * @internal Push several objects on the stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to push on the stack from the obj_table.
> + * @return
> + * Actual number of objects pushed (either 0 or *n*).
> + */
> +static __rte_always_inline unsigned int __rte_experimental
This is an internal function. Is '__rte_experimental' tag required?
> +rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
> +		   unsigned int n)
> +{
Since this is an internal function, does it make sense to prefix its name with '__' (similar to what is done in rte_ring)?
> + struct rte_stack_std *stack = &s->stack_std;
> + unsigned int index;
> + void **cache_objs;
> +
> + rte_spinlock_lock(&stack->lock);
> + cache_objs = &stack->objs[stack->len];
> +
> + /* Is there sufficient space in the stack? */
> + if ((stack->len + n) > s->capacity) {
> + rte_spinlock_unlock(&stack->lock);
> + return 0;
> + }
> +
> + /* Add elements back into the cache */
> + for (index = 0; index < n; ++index, obj_table++)
> + cache_objs[index] = *obj_table;
> +
> + stack->len += n;
> +
> + rte_spinlock_unlock(&stack->lock);
> + return n;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Push several objects on the stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to push on the stack from the obj_table.
> + * @return
> + * Actual number of objects pushed (either 0 or *n*).
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
> +{
> +	return rte_stack_std_push(s, obj_table, n);
> +}
> +
> +/**
> + * @internal Pop several objects from the stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to pull from the stack.
> + * @return
> + * Actual number of objects popped (either 0 or *n*).
> + */
> +static __rte_always_inline unsigned int __rte_experimental
This is an internal function. Is '__rte_experimental' tag required?
> +rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
> +{
> + struct rte_stack_std *stack = &s->stack_std;
> + unsigned int index, len;
> + void **cache_objs;
> +
> + rte_spinlock_lock(&stack->lock);
> +
> + if (unlikely(n > stack->len)) {
> + rte_spinlock_unlock(&stack->lock);
> + return 0;
> + }
> +
> + cache_objs = stack->objs;
> +
> + for (index = 0, len = stack->len - 1; index < n;
> + ++index, len--, obj_table++)
> + *obj_table = cache_objs[len];
> +
> + stack->len -= n;
> + rte_spinlock_unlock(&stack->lock);
> +
> + return n;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Pop several objects from the stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to pull from the stack.
> + * @return
> + * Actual number of objects popped (either 0 or *n*).
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
> +{
> + if (unlikely(n == 0 || obj_table == NULL))
> + return 0;
A 's == NULL' check can be added as well; a similar check is missing in 'rte_stack_push'. Since these are data-path APIs, RTE_ASSERT would be better.
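e.g., something like this at the top of both rte_stack_push() and
rte_stack_pop(), so the checks cost nothing outside of debug builds:

	RTE_ASSERT(s != NULL && obj_table != NULL);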
> +
> +	return rte_stack_std_pop(s, obj_table, n);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the number of used entries in a stack.
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @return
> + * The number of used entries in the stack.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_count(struct rte_stack *s)
> +{
> +	return (unsigned int)s->stack_std.len;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Return the number of free entries in a stack.
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @return
> + * The number of free entries in the stack.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
> +rte_stack_free_count(struct rte_stack *s)
> +{
> +	return s->capacity - rte_stack_count(s);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Create a new stack named *name* in memory.
> + *
> + * This function uses ``memzone_reserve()`` to allocate memory for a
> + * stack of size *count*. The behavior of the stack is controlled by the
> + * *flags*.
> + *
> + * @param name
> + * The name of the stack.
> + * @param count
> + * The size of the stack.
> + * @param socket_id
> + * The *socket_id* argument is the socket identifier in case of
> + * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> + * constraint for the reserved zone.
> + * @param flags
> + * Reserved for future use.
> + * @return
> + * On success, the pointer to the new allocated stack. NULL on error with
> + * rte_errno set appropriately. Possible errno values include:
> + *   - ENOSPC - the maximum number of memzones has already been allocated
> + * - EEXIST - a stack with the same name already exists
The EEXIST case is not implemented currently (see the duplicate-name check suggested above).
> + * - ENOMEM - insufficient memory to create the stack
> + * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
> + */
> +struct rte_stack *__rte_experimental
> +rte_stack_create(const char *name, unsigned int count, int socket_id,
> + uint32_t flags);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Free all memory used by the stack.
> + *
> + * @param s
> + * Stack to free
> + */
> +void __rte_experimental
> +rte_stack_free(struct rte_stack *s);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Lookup a stack by its name.
> + *
> + * @param name
> + * The name of the stack.
> + * @return
> + * The pointer to the stack matching the name, or NULL if not found,
> + * with rte_errno set appropriately. Possible rte_errno values include:
> + * - ENOENT - Stack with name *name* not found.
> + * - EINVAL - *name* pointer is NULL.
> + */
> +struct rte_stack * __rte_experimental
> +rte_stack_lookup(const char *name);
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_STACK_H_ */
> diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
> new file mode 100644
> index 000000000..4a6a7bdb3
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_pvt.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _RTE_STACK_PVT_H_
> +#define _RTE_STACK_PVT_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <rte_log.h>
> +
> +extern int stack_logtype;
> +
> +#define STACK_LOG(level, fmt, args...) \
> + rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
> + __func__, ##args)
> +
> +#define STACK_LOG_ERR(fmt, args...) \
> + STACK_LOG(ERR, fmt, ## args)
> +
> +#define STACK_LOG_WARN(fmt, args...) \
> + STACK_LOG(WARNING, fmt, ## args)
> +
> +#define STACK_LOG_INFO(fmt, args...) \
> + STACK_LOG(INFO, fmt, ## args)
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_STACK_PVT_H_ */
> diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
> new file mode 100644
> index 000000000..6662679c3
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_version.map
> @@ -0,0 +1,9 @@
> +EXPERIMENTAL {
> + global:
> +
> + rte_stack_create;
> + rte_stack_free;
> + rte_stack_lookup;
> +
> + local: *;
> +};
> diff --git a/lib/meson.build b/lib/meson.build
> index 99957ba7d..90115477f 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -22,7 +22,7 @@ libraries = [
> 'gro', 'gso', 'ip_frag', 'jobstats',
> 'kni', 'latencystats', 'lpm', 'member',
> 'power', 'pdump', 'rawdev',
> - 'reorder', 'sched', 'security', 'vhost',
> + 'reorder', 'sched', 'security', 'stack', 'vhost',
> #ipsec lib depends on crypto and security
> 'ipsec',
>  # add pkt framework libs which use other libs from above
> diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> index 3c40f9df2..8decfb851 100644
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
> _LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
> _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
> _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
> _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
> --
> 2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation Gage Eads
2019-03-14 8:01 ` Olivier Matz
@ 2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-29 19:25 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-28 23:27 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
<snip>
> diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
> index 96dffdf44..8f0361ea1 100644
> --- a/lib/librte_stack/rte_stack.c
> +++ b/lib/librte_stack/rte_stack.c
<snip>
> @@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
> unsigned int sz;
> int ret;
>
> - RTE_SET_USED(flags);
> +#ifdef RTE_ARCH_X86_64
> + RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
This check should be independent of the platform.
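i.e., something like the following (sketch; assumes struct
rte_stack_lf_head is defined with the same 16-byte layout on all
platforms):

	/* Checked on every platform, not just x86-64 */
	RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);

#ifndef RTE_ARCH_X86_64
	if (flags & RTE_STACK_F_LF) {
		STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
		return NULL;
	}
#endif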
> +#else
> + if (flags & RTE_STACK_F_LF) {
> +		STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
> + return NULL;
> + }
> +#endif
>
> - sz = rte_stack_get_memsize(count);
> + sz = rte_stack_get_memsize(count, flags);
>
> ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
> RTE_STACK_MZ_PREFIX, name);
> @@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
>
> s = mz->addr;
>
> - rte_stack_init(s);
> + rte_stack_init(s, count, flags);
>
> /* Store the name for later lookups */
> 	ret = snprintf(s->name, sizeof(s->name), "%s", name);
> diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
> index 7a633deb5..b484313bb 100644
> --- a/lib/librte_stack/rte_stack.h
> +++ b/lib/librte_stack/rte_stack.h
> @@ -30,6 +30,35 @@ extern "C" {
<snip>
> +/**
> + * @internal Push several objects on the lock-free stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to push on the stack from the obj_table.
> + * @return
> + * Actual number of objects enqueued.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
This is an internal function. Is '__rte_experimental' tag required? (applies to other instances in this patch)
> +rte_stack_lf_push(struct rte_stack *s, void * const *obj_table,
> +		  unsigned int n)
> +{
> + struct rte_stack_lf_elem *tmp, *first, *last = NULL;
> + unsigned int i;
> +
> + if (unlikely(n == 0))
> + return 0;
> +
> + /* Pop n free elements */
> + first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
> + if (unlikely(first == NULL))
> + return 0;
> +
> + /* Construct the list elements */
> + for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
> + tmp->data = obj_table[n - i - 1];
> +
> + /* Push them to the used list */
> + __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
> +
> + return n;
> +}
> +
<snip>
>
> /**
> @@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
> * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> * constraint for the reserved zone.
> * @param flags
> - * Reserved for future use.
> + * An OR of the following:
> + * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
> + * variants of the push and pop functions. Otherwise, it achieves
> + * thread-safety using a lock.
> * @return
> * On success, the pointer to the new allocated stack. NULL on error with
> * rte_errno set appropriately. Possible errno values include:
> diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
> new file mode 100644
> index 000000000..5e4cbc38e
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_generic.h
The name "...stack_generic.h" is confusing; the file implements the lock-free algorithm.
IMO, the code should be re-organized differently; a rough sketch of the resulting include structure follows the list below.
rte_stack.h, rte_stack.c - Contain the APIs (calling std or LF based on the flag) and top level structure definition
rte_stack_std.c, rte_stack_std.h - Contain the standard implementation
rte_stack_lf.c, rte_stack_lf.h - Contain the LF implementation
rte_stack_lf_c11.h - Contain the LF C11 implementation
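A rough sketch of how the top-level header could then pick an
implementation (file names as proposed above; purely illustrative, not
a reference to existing files):

	/* rte_stack.h */
	#include "rte_stack_std.h"
	#ifdef RTE_USE_C11_MEM_MODEL
	#include "rte_stack_lf_c11.h"
	#else
	#include "rte_stack_lf.h"
	#endif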
> @@ -0,0 +1,151 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _RTE_STACK_GENERIC_H_
> +#define _RTE_STACK_GENERIC_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_prefetch.h>
> +
<snip>
> +
> +static __rte_always_inline struct rte_stack_lf_elem *
> +__rte_stack_lf_pop(struct rte_stack_lf_list *list,
> + unsigned int num,
> + void **obj_table,
> + struct rte_stack_lf_elem **last)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(obj_table);
> + RTE_SET_USED(last);
> + RTE_SET_USED(list);
> + RTE_SET_USED(num);
> +
> + return NULL;
> +#else
> + struct rte_stack_lf_head old_head;
> + int success;
> +
> + /* Reserve num elements, if available */
> + while (1) {
> + uint64_t len = rte_atomic64_read(&list->len);
> +
> + /* Does the list contain enough elements? */
> + if (unlikely(len < num))
> + return NULL;
> +
> + if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
> + len, len - num))
> + break;
> + }
> +
> + old_head = list->head;
> +
> + /* Pop num elements */
> + do {
> + struct rte_stack_lf_head new_head;
> + struct rte_stack_lf_elem *tmp;
> + unsigned int i;
> +
> + rte_prefetch0(old_head.top);
> +
> + tmp = old_head.top;
> +
> + /* Traverse the list to find the new head. A next pointer will
> + * either point to another element or NULL; if a thread
> +		 * encounters a pointer that has already been popped, the CAS
> +		 * will fail.
> + */
> + for (i = 0; i < num && tmp != NULL; i++) {
> + rte_prefetch0(tmp->next);
> + if (obj_table)
> + obj_table[i] = tmp->data;
> + if (last)
> + *last = tmp;
> + tmp = tmp->next;
> + }
> +
> + /* If NULL was encountered, the list was modified while
> + * traversing it. Retry.
> + */
> + if (i != num)
> + continue;
> +
> + new_head.top = tmp;
> + new_head.cnt = old_head.cnt + 1;
> +
> + /* old_head is updated on failure */
> + success = rte_atomic128_cmp_exchange(
> + (rte_int128_t *)&list->head,
> + (rte_int128_t *)&old_head,
> + (rte_int128_t *)&new_head,
> + 1, __ATOMIC_ACQUIRE,
> + __ATOMIC_ACQUIRE);
Just wondering if 'rte_atomic128_cmp_exchange' for x86 should include compiler barriers based on the memory order passed?
The C++11 memory model is getting mixed with the barrier-based model here; I think this is something that needs to be discussed at a wider level.
> + } while (success == 0);
> +
> + return old_head.top;
> +#endif
> +}
> +
> +#endif /* _RTE_STACK_GENERIC_H_ */
> --
> 2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation
2019-03-28 23:27 ` Honnappa Nagarahalli
@ 2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-29 19:25 ` Eads, Gage
1 sibling, 0 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-28 23:27 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
<snip>
> diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c index
> 96dffdf44..8f0361ea1 100644
> --- a/lib/librte_stack/rte_stack.c
> +++ b/lib/librte_stack/rte_stack.c
<snip>
> @@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int
> count, int socket_id,
> unsigned int sz;
> int ret;
>
> - RTE_SET_USED(flags);
> +#ifdef RTE_ARCH_X86_64
> + RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
This check should be independent of the platform.
#else
> + if (flags & RTE_STACK_F_LF) {
> + STACK_LOG_ERR("Lock-free stack is not supported on your
> platform\n");
> + return NULL;
> + }
> +#endif
>
> - sz = rte_stack_get_memsize(count);
> + sz = rte_stack_get_memsize(count, flags);
>
> ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
> RTE_STACK_MZ_PREFIX, name);
> @@ -94,7 +119,7 @@ rte_stack_create(const char *name, unsigned int
> count, int socket_id,
>
> s = mz->addr;
>
> - rte_stack_init(s);
> + rte_stack_init(s, count, flags);
>
> /* Store the name for later lookups */
> ret = snprintf(s->name, sizeof(s->name), "%s", name); diff --git
> a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h index
> 7a633deb5..b484313bb 100644
> --- a/lib/librte_stack/rte_stack.h
> +++ b/lib/librte_stack/rte_stack.h
> @@ -30,6 +30,35 @@ extern "C" {
<snip>
> +/**
> + * @internal Push several objects on the lock-free stack (MT-safe).
> + *
> + * @param s
> + * A pointer to the stack structure.
> + * @param obj_table
> + * A pointer to a table of void * pointers (objects).
> + * @param n
> + * The number of objects to push on the stack from the obj_table.
> + * @return
> + * Actual number of objects enqueued.
> + */
> +static __rte_always_inline unsigned int __rte_experimental
This is an internal function. Is '__rte_experimental' tag required? (applies to other instances in this patch)
> +rte_stack_lf_push(struct rte_stack *s, void * const *obj_table,
> +unsigned int n) {
> + struct rte_stack_lf_elem *tmp, *first, *last = NULL;
> + unsigned int i;
> +
> + if (unlikely(n == 0))
> + return 0;
> +
> + /* Pop n free elements */
> + first = __rte_stack_lf_pop(&s->stack_lf.free, n, NULL, &last);
> + if (unlikely(first == NULL))
> + return 0;
> +
> + /* Construct the list elements */
> + for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
> + tmp->data = obj_table[n - i - 1];
> +
> + /* Push them to the used list */
> + __rte_stack_lf_push(&s->stack_lf.used, first, last, n);
> +
> + return n;
> +}
> +
<snip>
>
> /**
> @@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
> * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> * constraint for the reserved zone.
> * @param flags
> - * Reserved for future use.
> + * An OR of the following:
> + * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
> + * variants of the push and pop functions. Otherwise, it achieves
> + * thread-safety using a lock.
> * @return
> * On success, the pointer to the new allocated stack. NULL on error with
> * rte_errno set appropriately. Possible errno values include:
> diff --git a/lib/librte_stack/rte_stack_generic.h
> b/lib/librte_stack/rte_stack_generic.h
> new file mode 100644
> index 000000000..5e4cbc38e
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_generic.h
The name "...stack_generic.h" is confusing. It is implementing LF algorithm.
IMO, the code should be re-organized differently.
rte_stack.h, rte_stack.c - Contain the APIs (calling std or LF based on the flag) and top level structure definition
rte_stack_std.c, rte_stack_std.h - Contain the standard implementation
rte_stack_lf.c, rte_stack_lf.h - Contain the LF implementation
rte_stack_lf_c11.h - Contain the LF C11 implementation
> @@ -0,0 +1,151 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _RTE_STACK_GENERIC_H_
> +#define _RTE_STACK_GENERIC_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_prefetch.h>
> +
<snip>
> +
> +static __rte_always_inline struct rte_stack_lf_elem *
> +__rte_stack_lf_pop(struct rte_stack_lf_list *list,
> + unsigned int num,
> + void **obj_table,
> + struct rte_stack_lf_elem **last)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(obj_table);
> + RTE_SET_USED(last);
> + RTE_SET_USED(list);
> + RTE_SET_USED(num);
> +
> + return NULL;
> +#else
> + struct rte_stack_lf_head old_head;
> + int success;
> +
> + /* Reserve num elements, if available */
> + while (1) {
> + uint64_t len = rte_atomic64_read(&list->len);
> +
> + /* Does the list contain enough elements? */
> + if (unlikely(len < num))
> + return NULL;
> +
> + if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
> + len, len - num))
> + break;
> + }
> +
> + old_head = list->head;
> +
> + /* Pop num elements */
> + do {
> + struct rte_stack_lf_head new_head;
> + struct rte_stack_lf_elem *tmp;
> + unsigned int i;
> +
> + rte_prefetch0(old_head.top);
> +
> + tmp = old_head.top;
> +
> + /* Traverse the list to find the new head. A next pointer will
> + * either point to another element or NULL; if a thread
> + * encounters a pointer that has already been popped, the
> CAS
> + * will fail.
> + */
> + for (i = 0; i < num && tmp != NULL; i++) {
> + rte_prefetch0(tmp->next);
> + if (obj_table)
> + obj_table[i] = tmp->data;
> + if (last)
> + *last = tmp;
> + tmp = tmp->next;
> + }
> +
> + /* If NULL was encountered, the list was modified while
> + * traversing it. Retry.
> + */
> + if (i != num)
> + continue;
> +
> + new_head.top = tmp;
> + new_head.cnt = old_head.cnt + 1;
> +
> + /* old_head is updated on failure */
> + success = rte_atomic128_cmp_exchange(
> + (rte_int128_t *)&list->head,
> + (rte_int128_t *)&old_head,
> + (rte_int128_t *)&new_head,
> + 1, __ATOMIC_ACQUIRE,
> + __ATOMIC_ACQUIRE);
Just wondering if 'rte_atomic128_cmp_exchange' for x86 should have compiler barriers based on the memory order passed?
The C++11 memory model is getting mixed with the barrier-based model here. I think this is something that needs to be discussed at a wider level.
> + } while (success == 0);
> +
> + return old_head.top;
> +#endif
> +}
> +
> +#endif /* _RTE_STACK_GENERIC_H_ */
> --
> 2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation Gage Eads
2019-03-14 8:04 ` Olivier Matz
@ 2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-29 19:24 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-03-28 23:27 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
>
> This commit adds an implementation of the lock-free stack push, pop, and
> length functions that use __atomic builtins, for systems that benefit from the
> finer-grained memory ordering control.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> ---
> lib/librte_stack/Makefile | 3 +-
> lib/librte_stack/meson.build | 3 +-
> lib/librte_stack/rte_stack.h | 4 +
> lib/librte_stack/rte_stack_c11_mem.h | 175 +++++++++++++++++++++++++++++++++++
> 4 files changed, 183 insertions(+), 2 deletions(-)
> create mode 100644 lib/librte_stack/rte_stack_c11_mem.h
>
> diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
> index 3ecddf033..94a7c1476 100644
> --- a/lib/librte_stack/Makefile
> +++ b/lib/librte_stack/Makefile
> @@ -19,6 +19,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c
>
> # install includes
> SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
> - rte_stack_generic.h
> + rte_stack_generic.h \
> + rte_stack_c11_mem.h
>
> include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
> index 99d7f9ec5..7e2d1dbb8 100644
> --- a/lib/librte_stack/meson.build
> +++ b/lib/librte_stack/meson.build
> @@ -6,4 +6,5 @@ allow_experimental_apis = true
>  version = 1
>  sources = files('rte_stack.c')
>  headers = files('rte_stack.h',
> - 'rte_stack_generic.h')
> + 'rte_stack_generic.h',
> + 'rte_stack_c11_mem.h')
> diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
> index b484313bb..de16f8fff 100644
> --- a/lib/librte_stack/rte_stack.h
> +++ b/lib/librte_stack/rte_stack.h
> @@ -91,7 +91,11 @@ struct rte_stack {
> */
> #define RTE_STACK_F_LF 0x0001
>
> +#ifdef RTE_USE_C11_MEM_MODEL
> +#include "rte_stack_c11_mem.h"
> +#else
> #include "rte_stack_generic.h"
> +#endif
>
> /**
> * @internal Push several objects on the lock-free stack (MT-safe).
> diff --git a/lib/librte_stack/rte_stack_c11_mem.h b/lib/librte_stack/rte_stack_c11_mem.h
> new file mode 100644
> index 000000000..44f9ece6e
> --- /dev/null
> +++ b/lib/librte_stack/rte_stack_c11_mem.h
> @@ -0,0 +1,175 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2019 Intel Corporation
> + */
> +
> +#ifndef _RTE_STACK_C11_MEM_H_
> +#define _RTE_STACK_C11_MEM_H_
> +
> +#include <rte_branch_prediction.h>
> +#include <rte_prefetch.h>
> +
> +static __rte_always_inline unsigned int
> +rte_stack_lf_len(struct rte_stack *s)
> +{
> + /* stack_lf_push() and stack_lf_pop() do not update the list's contents
> + * and stack_lf->len atomically, which can cause the list to appear
> + * shorter than it actually is if this function is called while other
> + * threads are modifying the list.
> + *
> + * However, given the inherently approximate nature of the get_count
> + * callback -- even if the list and its size were updated atomically,
> + * the size could change between when get_count executes and when the
> + * value is returned to the caller -- this is acceptable.
> + *
> + * The stack_lf->len updates are placed such that the list may appear to
> + * have fewer elements than it does, but will never appear to have more
> + * elements. If the mempool is near-empty to the point that this is a
> + * concern, the user should consider increasing the mempool size.
> + */
> + return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
> + __ATOMIC_RELAXED);
> +}
> +
> +static __rte_always_inline void
> +__rte_stack_lf_push(struct rte_stack_lf_list *list,
> + struct rte_stack_lf_elem *first,
> + struct rte_stack_lf_elem *last,
> + unsigned int num)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(first);
> + RTE_SET_USED(last);
> + RTE_SET_USED(list);
> + RTE_SET_USED(num);
> +#else
> + struct rte_stack_lf_head old_head;
> + int success;
> +
> + old_head = list->head;
This can be a torn read (same as you have mentioned in __rte_stack_lf_pop). I suggest we use an acquire thread fence here as well (please see the comments in __rte_stack_lf_pop).
> +
> + do {
> + struct rte_stack_lf_head new_head;
> +
We can add __atomic_thread_fence(__ATOMIC_ACQUIRE) here (please see the comments in __rte_stack_lf_pop).
> + /* Swing the top pointer to the first element in the list and
> + * make the last element point to the old top.
> + */
> + new_head.top = first;
> + new_head.cnt = old_head.cnt + 1;
> +
> + last->next = old_head.top;
> +
> + /* Use the release memmodel to ensure the writes to the LF LIFO
> + * elements are visible before the head pointer write.
> + */
> + success = rte_atomic128_cmp_exchange(
> + (rte_int128_t *)&list->head,
> + (rte_int128_t *)&old_head,
> + (rte_int128_t *)&new_head,
> + 1, __ATOMIC_RELEASE,
> + __ATOMIC_RELAXED);
Success memory order can be RELAXED as the store to list->len.cnt is RELEASE.
> + } while (success == 0);
> +
> + /* Ensure the stack modifications are not reordered with respect
> + * to the LIFO len update.
> + */
> + __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
> +#endif
> +}
> +
> +static __rte_always_inline struct rte_stack_lf_elem *
> +__rte_stack_lf_pop(struct rte_stack_lf_list *list,
> + unsigned int num,
> + void **obj_table,
> + struct rte_stack_lf_elem **last)
> +{
> +#ifndef RTE_ARCH_X86_64
> + RTE_SET_USED(obj_table);
> + RTE_SET_USED(last);
> + RTE_SET_USED(list);
> + RTE_SET_USED(num);
> +
> + return NULL;
> +#else
> + struct rte_stack_lf_head old_head;
> + int success;
> +
> + /* Reserve num elements, if available */
> + while (1) {
> + uint64_t len = __atomic_load_n(&list->len.cnt,
> + __ATOMIC_ACQUIRE);
This can be outside the loop.
> +
> + /* Does the list contain enough elements? */
> + if (unlikely(len < num))
> + return NULL;
> +
> + if (__atomic_compare_exchange_n(&list->len.cnt,
> + &len, len - num,
> + 0, __ATOMIC_RELAXED,
> + __ATOMIC_RELAXED))
> + break;
> + }
> +
> +#ifndef RTE_ARCH_X86_64
> + /* Use the acquire memmodel to ensure the reads to the LF LIFO elements
> + * are properly ordered with respect to the head pointer read.
> + *
> + * Note that for aarch64, GCC's implementation of __atomic_load_16 in
> + * libatomic uses locks, and so this function should be replaced by
> + * a new function (e.g. "rte_atomic128_load()").
aarch64 does not have 'pure' atomic 128b load instructions. They have to be implemented using load/store exclusives.
> + */
> + __atomic_load((volatile __int128 *)&list->head,
> + &old_head,
> + __ATOMIC_ACQUIRE);
Since we know of architectures (x86/aarch64; power?) that cannot implement pure atomic 128b loads, should we just use relaxed reads and assume the possibility of torn reads for all architectures? Then we can use an acquire fence to prevent the reordering (see below).
> +#else
> + /* x86-64 does not require an atomic load here; if a torn read occurs,
IMO, we should not make architecture-specific distinctions, as this algorithm is based on the C11 memory model.
> + * the CAS will fail and set old_head to the correct/latest value.
> + */
> + old_head = list->head;
> +#endif
> +
> + /* Pop num elements */
> + do {
> + struct rte_stack_lf_head new_head;
> + struct rte_stack_lf_elem *tmp;
> + unsigned int i;
> +
We can add __atomic_thread_fence(__ATOMIC_ACQUIRE) here.
> + rte_prefetch0(old_head.top);
> +
> + tmp = old_head.top;
> +
> + /* Traverse the list to find the new head. A next pointer will
> + * either point to another element or NULL; if a thread
> + * encounters a pointer that has already been popped, the CAS
> + * will fail.
> + */
> + for (i = 0; i < num && tmp != NULL; i++) {
> + rte_prefetch0(tmp->next);
> + if (obj_table)
> + obj_table[i] = tmp->data;
> + if (last)
> + *last = tmp;
> + tmp = tmp->next;
> + }
> +
> + /* If NULL was encountered, the list was modified while
> + * traversing it. Retry.
> + */
> + if (i != num)
> + continue;
> +
> + new_head.top = tmp;
> + new_head.cnt = old_head.cnt + 1;
> +
> + success = rte_atomic128_cmp_exchange(
> + (rte_int128_t *)&list->head,
> + (rte_int128_t *)&old_head,
> + (rte_int128_t *)&new_head,
> + 1, __ATOMIC_ACQUIRE,
> + __ATOMIC_ACQUIRE);
The success order should be __ATOMIC_RELEASE as the write to list->len.cnt is relaxed.
The failure order can be __ATOMIC_RELAXED if the thread fence is added.
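Putting the above together, the pop path would look roughly like this (a sketch of the proposal, untested):

    old_head = list->head; /* relaxed read; a torn read is caught by the CAS */

    /* Pop num elements */
    do {
            struct rte_stack_lf_head new_head;
            struct rte_stack_lf_elem *tmp;
            unsigned int i;

            /* Ensure the element reads below are not reordered before this
             * iteration's head read.
             */
            __atomic_thread_fence(__ATOMIC_ACQUIRE);

            rte_prefetch0(old_head.top);

            tmp = old_head.top;

            for (i = 0; i < num && tmp != NULL; i++) {
                    rte_prefetch0(tmp->next);
                    if (obj_table)
                            obj_table[i] = tmp->data;
                    if (last)
                            *last = tmp;
                    tmp = tmp->next;
            }

            /* If NULL was encountered, the list was modified. Retry. */
            if (i != num)
                    continue;

            new_head.top = tmp;
            new_head.cnt = old_head.cnt + 1;

            /* RELEASE on success; RELAXED on failure, since the fence above
             * provides the acquire ordering on each retry.
             */
            success = rte_atomic128_cmp_exchange(
                            (rte_int128_t *)&list->head,
                            (rte_int128_t *)&old_head,
                            (rte_int128_t *)&new_head,
                            1, __ATOMIC_RELEASE,
                            __ATOMIC_RELAXED);
    } while (success == 0);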
> + } while (success == 0);
> +
> + return old_head.top;
> +#endif
> +}
> +
> +#endif /* _RTE_STACK_C11_MEM_H_ */
> --
> 2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-28 23:26 ` Honnappa Nagarahalli
2019-03-28 23:26 ` Honnappa Nagarahalli
@ 2019-03-29 19:23 ` Eads, Gage
2019-03-29 19:23 ` Eads, Gage
` (2 more replies)
1 sibling, 3 replies; 228+ messages in thread
From: Eads, Gage @ 2019-03-29 19:23 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: olivier.matz, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd, Thomas Monjalon
@Thomas: I expect I can address Honnappa's feedback within a day or two. Since today is the 19.05 merge deadline, what do you think about these options for merging?
1. Merge V4 now and address these comments during RC1.
2. Delay merge until RC2, with all comments addressed.
In terms of risk, Honnappa identified an incorrect memory ordering argument (patch 6/8), but that doesn't affect the one platform (x86-64) that can (currently) use this library. His other comments address readability, error-checking, and performance, but aren't critical. Beyond that, this patchset is isolated from the rest of DPDK. So, I think the risk to the project is very low.
(Also, note that I accidentally left off Olivier's Reviewed-by tag in V4's patches 1, 3, 5, and 6 -- I'll address that as well)
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Thursday, March 28, 2019 6:27 PM
> To: Eads, Gage <gage.eads@intel.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; nd <nd@arm.com>; thomas@monjalon.net; nd
> <nd@arm.com>
> Subject: RE: [PATCH v3 1/8] stack: introduce rte stack library
>
> Hi Gage,
> Apologies for the late comments.
>
No problem, I appreciate the feedback.
[snip]
> > +static ssize_t
> > +rte_stack_get_memsize(unsigned int count) {
> > + ssize_t sz = sizeof(struct rte_stack);
> > +
> > + /* Add padding to avoid false sharing conflicts */
> > + sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
> > + 2 * RTE_CACHE_LINE_SIZE;
> I did not understand how the false sharing is caused and how this padding is
> solving the issue. Verbose comments would help.
The additional padding (beyond the CACHE_LINE_ROUNDUP) is to prevent false sharing caused by adjacent/next-line hardware prefetchers. I'll address this.
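For instance, the expanded comment could read something like this (a sketch, not the final wording):

    /* Add padding to avoid false sharing conflicts. The CACHE_LINE_ROUNDUP
     * handles sharing within the array itself; the two extra cache lines
     * guard against adjacent-line hardware prefetchers pulling the stack's
     * lines into another core's cache via a neighboring allocation.
     */
    sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
          2 * RTE_CACHE_LINE_SIZE;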
[snip]
> > +struct rte_stack *
> > +rte_stack_create(const char *name, unsigned int count, int socket_id,
> > + uint32_t flags)
> > +{
> > + char mz_name[RTE_MEMZONE_NAMESIZE];
> > + struct rte_stack_list *stack_list;
> > + const struct rte_memzone *mz;
> > + struct rte_tailq_entry *te;
> > + struct rte_stack *s;
> > + unsigned int sz;
> > + int ret;
> > +
> > + RTE_SET_USED(flags);
> > +
> > + sz = rte_stack_get_memsize(count);
> > +
> > + ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
> > + RTE_STACK_MZ_PREFIX, name);
> > + if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> > + rte_errno = ENAMETOOLONG;
> > + return NULL;
> > + }
> > +
> > + te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL) {
> > + STACK_LOG_ERR("Cannot reserve memory for tailq\n");
> > + rte_errno = ENOMEM;
> > + return NULL;
> > + }
> > +
> > + rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
> > +
> I think there is a need to check if a stack with the same name exists already.
rte_memzone_reserve_aligned() does just that. This behavior is tested in the function test_stack_name_reuse(), added in commit "test/stack: add stack test".
> > + mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
> > + 0, __alignof__(*s));
> > + if (mz == NULL) {
> > + STACK_LOG_ERR("Cannot reserve stack memzone!\n");
> > + rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
> > + rte_free(te);
> > + return NULL;
> > + }
[snip]
> > +void
> > +rte_stack_free(struct rte_stack *s)
> > +{
> > + struct rte_stack_list *stack_list;
> > + struct rte_tailq_entry *te;
> > +
> > + if (s == NULL)
> > + return;
> > +
> Adding a check to make sure the length of the stack is 0 would help catch
> issues?
My preference is to leave that check to the user, for any apps that want to/can safely free non-empty stacks.
[snip]
> > +#define RTE_TAILQ_STACK_NAME "RTE_STACK"
> > +#define RTE_STACK_MZ_PREFIX "STK_"
> Nit, "STACK_" would be easier to debug
Since RTE_MEMZONE_NAMESIZE (32) doesn't give us a lot of space, I kept the prefix short. Adding 2 more characters *probably* won't make a difference...but I'd prefer the shortened name.
> > +/** The maximum length of a stack name. */
> > +#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
> > +			   sizeof(RTE_STACK_MZ_PREFIX) + 1)
> > +
[snip]
> > +/**
> > + * @internal Push several objects on the stack (MT-safe).
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @param obj_table
> > + * A pointer to a table of void * pointers (objects).
> > + * @param n
> > + * The number of objects to push on the stack from the obj_table.
> > + * @return
> > + * Actual number of objects pushed (either 0 or *n*).
> > + */
> > +static __rte_always_inline unsigned int __rte_experimental
> This is an internal function. Is '__rte_experimental' tag required?
I don't think so, but I erred on the side of caution. I don't think the tag causes any problems.
>
> > +rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
> > +unsigned int n) {
> Since this is an internal function, does it make sense to add '__' to the
> beginning of the function name (similar to what is done in rte_ring?).
Makes sense. I'll address this.
[snip]
> > +/**
> > + * @internal Pop several objects from the stack (MT-safe).
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @param obj_table
> > + * A pointer to a table of void * pointers (objects).
> > + * @param n
> > + * The number of objects to pull from the stack.
> > + * @return
> > + * Actual number of objects popped (either 0 or *n*).
> > + */
> > +static __rte_always_inline unsigned int __rte_experimental
> This is an internal function. Is '__rte_experimental' tag required?
(see above)
[snip]
> > +static __rte_always_inline unsigned int __rte_experimental
> > +rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n) {
> > + if (unlikely(n == 0 || obj_table == NULL))
> > + return 0;
> 's == NULL' can be added as well. Similar check is missing in 'rte_stack_push'.
> Since these are data-path APIs, RTE_ASSERT would be better.
>
Good point. I'll add RTE_ASSERT for obj_table and s. That won't work for "n == 0" -- the pop code assumes n > 0, so we can't allow that check to be compiled out.
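I.e., something along these lines (sketch):

    RTE_ASSERT(s != NULL);
    RTE_ASSERT(obj_table != NULL);

    /* Must remain a run-time check: the pop code assumes n > 0, so this
     * cannot be compiled out.
     */
    if (unlikely(n == 0))
            return 0;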
[snip]
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice
> > + *
> > + * Create a new stack named *name* in memory.
> > + *
> > + * This function uses ``memzone_reserve()`` to allocate memory for a
> > +stack of
> > + * size *count*. The behavior of the stack is controlled by the *flags*.
> > + *
> > + * @param name
> > + * The name of the stack.
> > + * @param count
> > + * The size of the stack.
> > + * @param socket_id
> > + * The *socket_id* argument is the socket identifier in case of
> > + * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> > + * constraint for the reserved zone.
> > + * @param flags
> > + * Reserved for future use.
> > + * @return
> > + * On success, the pointer to the new allocated stack. NULL on error with
> > + * rte_errno set appropriately. Possible errno values include:
> > + * - ENOSPC - the maximum number of memzones has already been
> > allocated
> > + * - EEXIST - a stack with the same name already exists
> This is not implemented currently
It is -- see above.
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-28 23:27 ` Honnappa Nagarahalli
@ 2019-03-29 19:24 ` Eads, Gage
2019-03-29 19:24 ` Eads, Gage
2019-04-01 0:06 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Eads, Gage @ 2019-03-29 19:24 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: olivier.matz, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd, Thomas Monjalon
[snip]
> > +static __rte_always_inline void
> > +__rte_stack_lf_push(struct rte_stack_lf_list *list,
> > + struct rte_stack_lf_elem *first,
> > + struct rte_stack_lf_elem *last,
> > + unsigned int num)
> > +{
> > +#ifndef RTE_ARCH_X86_64
> > + RTE_SET_USED(first);
> > + RTE_SET_USED(last);
> > + RTE_SET_USED(list);
> > + RTE_SET_USED(num);
> > +#else
> > + struct rte_stack_lf_head old_head;
> > + int success;
> > +
> > + old_head = list->head;
> This can be a torn read (same as you have mentioned in
> __rte_stack_lf_pop). I suggest we use acquire thread fence here as well
> (please see the comments in __rte_stack_lf_pop).
Agreed. I'll add the acquire fence.
> > +
> > + do {
> > + struct rte_stack_lf_head new_head;
> > +
> We can add __atomic_thread_fence(__ATOMIC_ACQUIRE) here (please see
> the comments in __rte_stack_lf_pop).
Will add the fence here.
> > + /* Swing the top pointer to the first element in the list and
> > + * make the last element point to the old top.
> > + */
> > + new_head.top = first;
> > + new_head.cnt = old_head.cnt + 1;
> > +
> > + last->next = old_head.top;
> > +
> > + /* Use the release memmodel to ensure the writes to the LF LIFO
> > + * elements are visible before the head pointer write.
> > + */
> > + success = rte_atomic128_cmp_exchange(
> > + (rte_int128_t *)&list->head,
> > + (rte_int128_t *)&old_head,
> > + (rte_int128_t *)&new_head,
> > + 1, __ATOMIC_RELEASE,
> > + __ATOMIC_RELAXED);
> Success memory order can be RELAXED as the store to list->len.cnt is
> RELEASE.
The RELEASE success order here ensures that the store to 'last->next' is visible before the head update. The RELEASE in the list->len.cnt store only guarantees that the preceding stores are visible before list->len.cnt's store, but doesn't guarantee any ordering between the 'last->next' store and the head update, so we can't rely on that.
> > + } while (success == 0);
> > +
> > + /* Ensure the stack modifications are not reordered with respect
> > + * to the LIFO len update.
> > + */
> > + __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
> > +#endif
> > +}
> > +
> > +static __rte_always_inline struct rte_stack_lf_elem *
> > +__rte_stack_lf_pop(struct rte_stack_lf_list *list,
> > + unsigned int num,
> > + void **obj_table,
> > + struct rte_stack_lf_elem **last)
> > +{
> > +#ifndef RTE_ARCH_X86_64
> > + RTE_SET_USED(obj_table);
> > + RTE_SET_USED(last);
> > + RTE_SET_USED(list);
> > + RTE_SET_USED(num);
> > +
> > + return NULL;
> > +#else
> > + struct rte_stack_lf_head old_head;
> > + int success;
> > +
> > + /* Reserve num elements, if available */
> > + while (1) {
> > + uint64_t len = __atomic_load_n(&list->len.cnt,
> > + __ATOMIC_ACQUIRE);
> This can be outside the loop.
Good idea. I'll move this out of the loop and change the atomic_compare_exchange_n's failure memory order to ACQUIRE.
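I.e., something like this (a sketch of the planned change):

    /* Reserve num elements, if available */
    uint64_t len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);

    while (1) {
            /* Does the list contain enough elements? */
            if (unlikely(len < num))
                    return NULL;

            /* On failure, len is re-loaded; the ACQUIRE failure order
             * replaces the per-iteration acquire load.
             */
            if (__atomic_compare_exchange_n(&list->len.cnt,
                                            &len, len - num,
                                            0, __ATOMIC_RELAXED,
                                            __ATOMIC_ACQUIRE))
                    break;
    }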
> > +
> > + /* Does the list contain enough elements? */
> > + if (unlikely(len < num))
> > + return NULL;
> > +
> > + if (__atomic_compare_exchange_n(&list->len.cnt,
> > + &len, len - num,
> > + 0, __ATOMIC_RELAXED,
> > + __ATOMIC_RELAXED))
> > + break;
> > + }
> > +
> > +#ifndef RTE_ARCH_X86_64
> > + /* Use the acquire memmodel to ensure the reads to the LF LIFO elements
> > + * are properly ordered with respect to the head pointer read.
> > + *
> > + * Note that for aarch64, GCC's implementation of __atomic_load_16 in
> > + * libatomic uses locks, and so this function should be replaced by
> > + * a new function (e.g. "rte_atomic128_load()").
> aarch64 does not have 'pure' atomic 128b load instructions. They have to be
> implemented using load/store exclusives.
>
> > + */
> > + __atomic_load((volatile __int128 *)&list->head,
> > + &old_head,
> > + __ATOMIC_ACQUIRE);
> Since, we know of x86/aarch64 (power?) that cannot implement pure atomic
> 128b loads, should we just use relaxed reads and assume the possibility of
> torn reads for all architectures? Then, we can use acquire fence to prevent
> the reordering (see below)
That's a cleaner solution. I'll implement that, dropping the architecture distinction.
> > +#else
> > + /* x86-64 does not require an atomic load here; if a torn read occurs,
> IMO, we should not make architecture specific distinctions as this algorithm is
> based on C11 memory model.
>
> > + * the CAS will fail and set old_head to the correct/latest value.
> > + */
> > + old_head = list->head;
> > +#endif
> > +
> > + /* Pop num elements */
> > + do {
> > + struct rte_stack_lf_head new_head;
> > + struct rte_stack_lf_elem *tmp;
> > + unsigned int i;
> > +
> We can add __atomic_thread_fence(__ATOMIC_ACQUIRE) here.
Will do.
> > + rte_prefetch0(old_head.top);
> > +
> > + tmp = old_head.top;
> > +
> > + /* Traverse the list to find the new head. A next pointer will
> > + * either point to another element or NULL; if a thread
> > + * encounters a pointer that has already been popped, the CAS
> > + * will fail.
> > + */
> > + for (i = 0; i < num && tmp != NULL; i++) {
> > + rte_prefetch0(tmp->next);
> > + if (obj_table)
> > + obj_table[i] = tmp->data;
> > + if (last)
> > + *last = tmp;
> > + tmp = tmp->next;
> > + }
> > +
> > + /* If NULL was encountered, the list was modified while
> > + * traversing it. Retry.
> > + */
> > + if (i != num)
> > + continue;
> > +
> > + new_head.top = tmp;
> > + new_head.cnt = old_head.cnt + 1;
> > +
> > + success = rte_atomic128_cmp_exchange(
> > + (rte_int128_t *)&list->head,
> > + (rte_int128_t *)&old_head,
> > + (rte_int128_t *)&new_head,
> > + 1, __ATOMIC_ACQUIRE,
> > + __ATOMIC_ACQUIRE);
> The success order should be __ATOMIC_RELEASE as the write to list->len.cnt
> is relaxed.
> The failure order can be __ATOMIC_RELAXED if the thread fence is added.
Agreed on both counts. Will address.
> > + } while (success == 0);
> > +
> > + return old_head.top;
> > +#endif
> > +}
> > +
> > +#endif /* _RTE_STACK_C11_MEM_H_ */
> > --
> > 2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-28 23:27 ` Honnappa Nagarahalli
@ 2019-03-29 19:25 ` Eads, Gage
2019-03-29 19:25 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-03-29 19:25 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: olivier.matz, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd, Thomas Monjalon
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Thursday, March 28, 2019 6:27 PM
> To: Eads, Gage <gage.eads@intel.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; nd <nd@arm.com>; thomas@monjalon.net; nd
> <nd@arm.com>
> Subject: RE: [PATCH v3 5/8] stack: add lock-free stack implementation
>
> <snip>
>
> > diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
> > index 96dffdf44..8f0361ea1 100644
> > --- a/lib/librte_stack/rte_stack.c
> > +++ b/lib/librte_stack/rte_stack.c
>
> <snip>
>
> > @@ -63,9 +81,16 @@ rte_stack_create(const char *name, unsigned int
> > count, int socket_id,
> > unsigned int sz;
> > int ret;
> >
> > - RTE_SET_USED(flags);
> > +#ifdef RTE_ARCH_X86_64
> > + RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
> This check should be independent of the platform.
Good catch. Will change the ifdef to RTE_ARCH_64.
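I.e.:

    #ifdef RTE_ARCH_64
            RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
    #endif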
[snip]
> > +/**
> > + * @internal Push several objects on the lock-free stack (MT-safe).
> > + *
> > + * @param s
> > + * A pointer to the stack structure.
> > + * @param obj_table
> > + * A pointer to a table of void * pointers (objects).
> > + * @param n
> > + * The number of objects to push on the stack from the obj_table.
> > + * @return
> > + * Actual number of objects enqueued.
> > + */
> > +static __rte_always_inline unsigned int __rte_experimental
> This is an internal function. Is '__rte_experimental' tag required? (applies to
> other instances in this patch)
(Addressed in comments to patch 1/8)
[snip]
> >
> > /**
> > @@ -225,7 +339,10 @@ rte_stack_free_count(struct rte_stack *s)
> > * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
> > * constraint for the reserved zone.
> > * @param flags
> > - * Reserved for future use.
> > + * An OR of the following:
> > + * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
> > + * variants of the push and pop functions. Otherwise, it achieves
> > + * thread-safety using a lock.
> > * @return
> > * On success, the pointer to the new allocated stack. NULL on error with
> > * rte_errno set appropriately. Possible errno values include:
> > diff --git a/lib/librte_stack/rte_stack_generic.h b/lib/librte_stack/rte_stack_generic.h
> > new file mode 100644
> > index 000000000..5e4cbc38e
> > --- /dev/null
> > +++ b/lib/librte_stack/rte_stack_generic.h
> The name "...stack_generic.h" is confusing. It is implementing LF algorithm.
> IMO, the code should be re-organized differently.
> rte_stack.h, rte_stack.c - Contain the APIs (calling std or LF based on
>   the flag) and top level structure definition
> rte_stack_std.c, rte_stack_std.h - Contain the standard implementation
> rte_stack_lf.c, rte_stack_lf.h - Contain the LF implementation
> rte_stack_lf_c11.h - Contain the LF C11 implementation
>
'generic' here refers to the "generic API for atomic operations" (generic/rte_atomic.h:12), but I see how that can be misleading.
Yeah, I like this proposal, but with one tweak: use three lock-free header files: rte_stack_lf.h (common inline lock-free functions like rte_stack_lf_pop()), rte_stack_lf_c11.h (C11 implementation), rte_stack_lf_generic.h (generic atomic implementation). Since the name is *_lf_generic.h, it should be clear that it implements the lock-free functions, and this naming matches rte ring's (easier to pick up for those already used to the ring organization).
[snip]
> > + /* old_head is updated on failure */
> > + success = rte_atomic128_cmp_exchange(
> > + (rte_int128_t *)&list->head,
> > + (rte_int128_t *)&old_head,
> > + (rte_int128_t *)&new_head,
> > + 1, __ATOMIC_ACQUIRE,
> > + __ATOMIC_ACQUIRE);
> Just wondering if 'rte_atomic128_cmp_exchange' for x86 should have
> compiler barriers based on the memory order passed?
> C++11 memory model is getting mixed with barrier based model. I think this
> is something that needs to be discussed at a wider level.
The x86 implementation uses a compiler barrier (i.e. the inline assembly clobber list contains "memory") regardless of the memory order, so we're (conservatively) adhering to the C11 model, which guarantees ordering at both the compiler and processor levels. Whether/how we relax the x86 implementation (e.g. no compiler barrier if ordering == relaxed) is an interesting question.
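For reference, the x86-64 implementation has roughly this shape (simplified sketch from memory, not the verbatim source):

    uint8_t res;

    asm volatile ("lock; cmpxchg16b %[dst];"
                  " sete %[res]"
                  : [dst] "=m" (dst->val[0]),
                    "=a" (exp->val[0]), "=d" (exp->val[1]),
                    [res] "=r" (res)
                  : "b" (src->val[0]), "c" (src->val[1]),
                    "a" (exp->val[0]), "d" (exp->val[1]),
                    "m" (dst->val[0])
                  : "memory"); /* compiler barrier, regardless of the
                                * memory orders passed in */

    return res;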
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-29 19:23 ` Eads, Gage
2019-03-29 19:23 ` Eads, Gage
@ 2019-03-29 21:07 ` Thomas Monjalon
2019-03-29 21:07 ` Thomas Monjalon
2019-04-01 17:41 ` Honnappa Nagarahalli
2 siblings, 1 reply; 228+ messages in thread
From: Thomas Monjalon @ 2019-03-29 21:07 UTC (permalink / raw)
To: Eads, Gage
Cc: Honnappa Nagarahalli, dev, olivier.matz, arybchenko, Richardson,
Bruce, Ananyev, Konstantin, Gavin Hu (Arm Technology China),
nd
29/03/2019 20:23, Eads, Gage:
> @Thomas: I expect I can address Honnappa's feedback within a day or two. Since today is the 19.05 merge deadline, what do you think about these options for merging?
> 1. Merge V4 now and address these comments during RC1.
> 2. Delay merge until RC2, with all comments addressed.
I plan to release RC1 on Wednesday, allowing the last revision to be sent
on Tuesday.
If it does not impact the rest of DPDK, RC2 is also an option to consider.
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-03-29 19:24 ` Eads, Gage
2019-03-29 19:24 ` Eads, Gage
@ 2019-04-01 0:06 ` Eads, Gage
2019-04-01 0:06 ` Eads, Gage
2019-04-01 19:06 ` Honnappa Nagarahalli
1 sibling, 2 replies; 228+ messages in thread
From: Eads, Gage @ 2019-04-01 0:06 UTC (permalink / raw)
To: 'Honnappa Nagarahalli', 'dev@dpdk.org'
Cc: 'olivier.matz@6wind.com',
'arybchenko@solarflare.com',
Richardson, Bruce, Ananyev, Konstantin,
'Gavin Hu (Arm Technology China)', 'nd',
'thomas@monjalon.net', 'nd',
'Thomas Monjalon'
> -----Original Message-----
> From: Eads, Gage
> Sent: Friday, March 29, 2019 2:25 PM
> To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; nd <nd@arm.com>; thomas@monjalon.net; nd
> <nd@arm.com>; Thomas Monjalon <thomas@monjalon.net>
> Subject: RE: [PATCH v3 6/8] stack: add C11 atomic implementation
>
> [snip]
>
> > > +static __rte_always_inline void
> > > +__rte_stack_lf_push(struct rte_stack_lf_list *list,
> > > + struct rte_stack_lf_elem *first,
> > > + struct rte_stack_lf_elem *last,
> > > + unsigned int num)
> > > +{
> > > +#ifndef RTE_ARCH_X86_64
> > > + RTE_SET_USED(first);
> > > + RTE_SET_USED(last);
> > > + RTE_SET_USED(list);
> > > + RTE_SET_USED(num);
> > > +#else
> > > + struct rte_stack_lf_head old_head;
> > > + int success;
> > > +
> > > + old_head = list->head;
> > This can be a torn read (same as you have mentioned in
> > __rte_stack_lf_pop). I suggest we use acquire thread fence here as
> > well (please see the comments in __rte_stack_lf_pop).
>
> Agreed. I'll add the acquire fence.
>
On second thought, an acquire fence isn't necessary. The acquire fence in __rte_stack_lf_pop() ensures the list->head read is ordered before the list element reads. That ordering isn't necessary here; we need to ensure that the last->next write occurs (and is observed) before the list->head write, which the CAS's RELEASE success memorder accomplishes.
If a torn read occurs, the CAS will fail and will atomically re-load &old_head.
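For reference, the logic under discussion, abridged from the v5 lock-free push (a sketch using the patch's types, not a self-contained drop-in):

	struct rte_stack_lf_head old_head;
	int success;

	/* A torn read is harmless: a mismatched old_head fails the CAS,
	 * which atomically reloads old_head before the retry.
	 */
	old_head = list->head;

	do {
		struct rte_stack_lf_head new_head;

		/* Link the new element(s) in front of the current top */
		last->next = old_head.top;

		new_head.top = first;
		new_head.cnt = old_head.cnt + 1;

		/* RELEASE on success orders the last->next store before
		 * the head update becomes visible; RELAXED suffices on
		 * failure since the loop simply retries.
		 */
		success = rte_atomic128_cmp_exchange(
				(rte_int128_t *)&list->head,
				(rte_int128_t *)&old_head,
				(rte_int128_t *)&new_head,
				1, __ATOMIC_RELEASE,
				__ATOMIC_RELAXED);
	} while (success == 0);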
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v5 0/8] Add stack library and new mempool handler
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
` (8 preceding siblings ...)
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
` (9 more replies)
9 siblings, 10 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 259 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 169 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 151 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 119 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2110 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v5 1/8] stack: introduce rte stack library
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
2019-04-01 0:12 ` Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 +++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 207 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 +++++
lib/librte_stack/rte_stack_std.h | 119 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 661 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e9ff2b4c2..09fd99dbf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -416,6 +416,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..d9799d747
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..f9af087dc
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..90115477f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
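For readers skimming the API above, a minimal usage sketch (assumes rte_eal_init() has already succeeded; error handling abbreviated):

#include <stdint.h>

#include <rte_stack.h>

/* Minimal usage sketch for the API introduced in this patch */
static int
stack_demo(void)
{
	void *objs[8], *popped[8];
	struct rte_stack *s;
	unsigned int i, n;

	/* 1024-entry stack on any NUMA socket; flags are reserved for
	 * future use at this point in the series.
	 */
	s = rte_stack_create("demo_stack", 1024, SOCKET_ID_ANY, 0);
	if (s == NULL)
		return -rte_errno;

	for (i = 0; i < 8; i++)
		objs[i] = (void *)(uintptr_t)(i + 1);

	/* Push and pop are all-or-nothing: each returns either n or 0 */
	n = rte_stack_push(s, objs, 8);
	if (n == 8)
		n = rte_stack_pop(s, popped, 8);
	/* popped[0] now holds the last pointer pushed (LIFO order) */

	rte_stack_free(s);
	return n == 8 ? 0 : -1;
}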
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..90115477f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
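For orientation, here is a minimal usage sketch of the API introduced above. It is illustrative only: the function name stack_usage_sketch is hypothetical, and it assumes rte_eal_init() has already run and the experimental API is enabled (ALLOW_EXPERIMENTAL_API) at build time.

#include <errno.h>
#include <stdint.h>

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

/* Hypothetical example, not part of the patch: create a stack, push a
 * burst, pop it back, and free. Assumes rte_eal_init() has completed.
 */
static int
stack_usage_sketch(void)
{
	void *objs[32];
	struct rte_stack *s;
	unsigned int i;

	for (i = 0; i < 32; i++)
		objs[i] = (void *)(uintptr_t)i;

	s = rte_stack_create("sketch", 1024, rte_socket_id(), 0);
	if (s == NULL)
		return -rte_errno;

	/* Push and pop are all-or-nothing: each returns either n or 0. */
	if (rte_stack_push(s, objs, 32) != 32) {
		rte_stack_free(s);
		return -ENOBUFS;
	}

	if (rte_stack_pop(s, objs, 32) != 32) {
		rte_stack_free(s);
		return -ENOENT;
	}

	rte_stack_free(s);
	return 0;
}

Since both operations return either 0 or n, callers need only compare the return value against the requested count; there is no partial-progress case to unwind.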
* [dpdk-dev] [PATCH v5 2/8] mempool/stack: convert mempool to use rte stack
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 09fd99dbf..13fe49e2b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -293,7 +293,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -421,6 +420,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
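The handler above is registered under the ops name "stack" (the rte_mempool_ops definition begins at the end of the diff). A sketch of how an application would select it, assuming the usual empty-create/populate flow; the pool name and sizes below are hypothetical:

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Hypothetical example: build a mempool backed by the "stack" handler.
 * Only the ops name "stack" comes from the driver above.
 */
static struct rte_mempool *
create_stack_backed_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("sketch_pool", 8192, 2048,
				      0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Select the stack handler before populating the pool. */
	if (rte_mempool_set_ops_byname(mp, "stack", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}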
* [dpdk-dev] [PATCH v5 3/8] test/stack: add stack test
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 13fe49e2b..2842f07ab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -421,6 +421,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..8392e4e4d
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
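The multithreaded case above first reserves capacity with a compare-and-set loop so that concurrent pushes can never exceed STACK_SIZE. The same pattern in isolation, as a sketch: reserve_slots() is an illustrative helper, not part of the test, and it assumes the shared counter never exceeds capacity.

#include <stdint.h>

#include <rte_atomic.h>
#include <rte_common.h>

/* Hypothetical helper: atomically claim up to `want` of `capacity`
 * shared slots. The caller releases them with rte_atomic64_sub().
 */
static unsigned int
reserve_slots(rte_atomic64_t *sz, unsigned int want, unsigned int capacity)
{
	unsigned int num;

	do {
		uint64_t cur = rte_atomic64_read(sz);

		num = RTE_MIN(want, capacity - (unsigned int)cur);

		if (rte_atomic64_cmpset((volatile uint64_t *)sz,
					cur, cur + num))
			return num;
	} while (1);
}

If another thread updates the counter between the read and the cmpset, the cmpset fails and the loop retries with a fresh value, which is why the reservation size is recomputed on every iteration.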
* [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 3/8] test/stack: add stack test Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * performance between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
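All of the measurements above follow the same TSC-based pattern: snapshot rte_rdtsc(), run a tight loop, snapshot again, and divide. A condensed sketch (cycles_per_op() is an illustrative helper, not part of the test):

#include <stdint.h>

#include <rte_cycles.h>

/* Hypothetical helper showing the timing pattern used by the perf test:
 * average TSC cycles per invocation of `op` over `iterations` calls.
 */
static double
cycles_per_op(void (*op)(void *), void *arg, unsigned int iterations)
{
	uint64_t start, end;
	unsigned int i;

	start = rte_rdtsc();

	for (i = 0; i < iterations; i++)
		op(arg);

	end = rte_rdtsc();

	return (double)(end - start) / iterations;
}

Note that when the loop body pushes and pops a burst, the test also divides by the bulk size (iterations * size), so the reported figure is cycles per object, not per call.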
* [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf when between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ /* rte_eal_remote_launch() cannot target the master lcore, so run
+ * fn() on it directly when c1 is the master.
+ */
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 18:08 ` Honnappa Nagarahalli
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
9 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
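As a usage sketch (illustrative only -- the stack name and sizes below are
arbitrary, and error handling is minimal), selecting lock-free behavior is a
one-flag change at creation time; push and pop are unchanged:

    #include <rte_stack.h>

    /* Create a lock-free stack of 1024 pointers on the caller's socket. */
    struct rte_stack *s = rte_stack_create("example_lf", 1024,
                                           rte_socket_id(), RTE_STACK_F_LF);
    if (s == NULL)
        rte_panic("stack creation failed\n");

    void *objs[32];
    /* ... fill objs[] with object pointers ... */

    /* Same MT-safe API as the lock-based stack; both return the actual
     * number of objects pushed/popped.
     */
    unsigned int n = rte_stack_push(s, objs, 32);
    n = rte_stack_pop(s, objs, n);

    rte_stack_free(s);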
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 ++++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 62 +++++++++++--
lib/librte_stack/rte_stack_lf.c | 31 +++++++
lib/librte_stack/rte_stack_lf.h | 102 +++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 151 ++++++++++++++++++++++++++++++++
9 files changed, 433 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this stack algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different from what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
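To make the counter's role concrete, here is a minimal standalone sketch of
the retry loop (hypothetical names, independent of this patch; it assumes a
compiler and CPU with 16-byte compare-and-swap support, e.g. GCC with -mcx16
on x86_64, and may otherwise fall back to a libatomic helper):

    struct lf_head {
        void *top;     /* stack top pointer */
        uint64_t cnt;  /* bumped on every push and pop */
    } __attribute__((aligned(16)));

    static void
    lf_push(struct lf_head *h, void *new_top)
    {
        struct lf_head old_head, new_head;

        old_head = *h; /* a torn read is corrected by a failed CAS */

        do {
            new_head.top = new_top;
            /* Even if 'top' is reused (the ABA case), 'cnt' differs,
             * so the 16-byte CAS below fails and the loop retries.
             */
            new_head.cnt = old_head.cnt + 1;
            /* old_head is reloaded on failure */
        } while (!__atomic_compare_exchange(h, &old_head, &new_head, 0,
                                            __ATOMIC_RELEASE,
                                            __ATOMIC_RELAXED));
    }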
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index d9799d747..e0f9e9cff 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * supported only on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -75,7 +115,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -99,7 +142,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -118,7 +164,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -158,7 +207,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
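As a worked example of this calculation (assuming x86_64, where struct
rte_stack_lf_elem is two 8-byte pointers, i.e. 16 bytes, and a 64-byte cache
line), a count of 1024 gives:

    sz = sizeof(struct rte_stack)           /* cache-aligned header */
       + RTE_CACHE_LINE_ROUNDUP(1024 * 16)  /* 16384; already a multiple
                                             * of 64, so no rounding */
       + 2 * RTE_CACHE_LINE_SIZE;           /* 128 bytes of padding for
                                             * next-line prefetchers */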
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..0ce282226
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,151 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
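As a minimal illustration of the release/acquire pairing these builtins
provide (a generic sketch, not code from this patch): stores made before a
release store are guaranteed visible to a thread whose acquire load observes
that store:

    static int data;
    static int ready;

    static void
    producer(void)
    {
        data = 42; /* plain store */
        /* Release: the store to 'data' cannot be reordered past this. */
        __atomic_store_n(&ready, 1, __ATOMIC_RELEASE);
    }

    static int
    consumer(void)
    {
        /* Acquire: later loads cannot be reordered before this one. */
        while (__atomic_load_n(&ready, __ATOMIC_ACQUIRE) == 0)
            rte_pause();
        return data; /* observes 42 */
    }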
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 169 ++++++++++++++++++++++++++++++++++++
4 files changed, 177 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..74e3d8eb4
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 169 ++++++++++++++++++++++++++++++++++++
4 files changed, 177 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..74e3d8eb4
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
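The cnt field paired with the top pointer in the patch above is what makes the 128-bit CAS ABA-safe. A minimal sketch of the idea (illustrative field names only; the real definition is the patch's struct rte_stack_lf_head):

    struct head {
        struct elem *top;  /* stack top */
        uint64_t cnt;      /* bumped on every successful CAS */
    } __attribute__((aligned(16)));

    /* Without cnt, a thread that read top == A could be preempted while
     * other threads pop A and B and then push A back; its later CAS on
     * top == A would succeed even though A->next changed (the ABA
     * problem). Because every successful CAS also increments cnt, the
     * stale thread's 128-bit compare fails and it retries with a fresh
     * head value.
     */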
* [dpdk-dev] [PATCH v5 7/8] test/stack: add lock-free stack tests
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 8392e4e4d..f199136aa 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
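For reference, exercising the flag added by this patch from application code looks roughly like the following; a minimal sketch using the rte_stack API named in the commit message (sizes and names are arbitrary examples, error handling trimmed):

    #include <stdint.h>
    #include <rte_stack.h>
    #include <rte_lcore.h>

    static int
    stack_lf_demo(void)
    {
        void *objs[8];
        struct rte_stack *s;
        unsigned int i, n;

        /* RTE_STACK_F_LF selects the lock-free implementation;
         * flags == 0 selects the standard (lock-based) one.
         */
        s = rte_stack_create("demo", 64, rte_socket_id(), RTE_STACK_F_LF);
        if (s == NULL)
            return -1;

        for (i = 0; i < 8; i++)
            objs[i] = (void *)(uintptr_t)(i + 1);

        n = rte_stack_push(s, objs, 8);  /* MT-safe bulk push */
        n = rte_stack_pop(s, objs, n);   /* MT-safe bulk pop */

        rte_stack_free(s);
        return n == 8 ? 0 : -1;
    }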
* [dpdk-dev] [PATCH v5 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-01 0:12 ` Gage Eads
2019-04-01 0:12 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 0:12 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked-list-based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index c1346363b..1a4391898 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -563,6 +563,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
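For reference, an application selects the handler registered above through the standard mempool ops mechanism; a minimal sketch (pool name, sizes, and cache value here are arbitrary examples):

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    static struct rte_mempool *
    lf_pool_create(void)
    {
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty("lf_pool", 4096 /* objs */,
                                      2048 /* obj size */, 256 /* cache */,
                                      0, rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        /* Select the "lf_stack" ops registered by this patch; the
         * per-lcore cache (256 above) reduces stack accesses, as noted
         * in the commit message.
         */
        if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
            rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        return mp;
    }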
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-03-29 19:23 ` Eads, Gage
2019-03-29 19:23 ` Eads, Gage
2019-03-29 21:07 ` Thomas Monjalon
@ 2019-04-01 17:41 ` Honnappa Nagarahalli
2019-04-01 17:41 ` Honnappa Nagarahalli
2019-04-01 19:34 ` Eads, Gage
2 siblings, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 17:41 UTC (permalink / raw)
To: Eads, Gage, dev
Cc: olivier.matz, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd
>
> > > +static ssize_t
> > > +rte_stack_get_memsize(unsigned int count) {
> > > + ssize_t sz = sizeof(struct rte_stack);
> > > +
> > > + /* Add padding to avoid false sharing conflicts */
> > > + sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
> > > + 2 * RTE_CACHE_LINE_SIZE;
> > I did not understand how the false sharing is caused and how this
> > padding is solving the issue. Verbose comments would help.
>
> The additional padding (beyond the CACHE_LINE_ROUNDUP) is to prevent
> false sharing caused by adjacent/next-line hardware prefetchers. I'll address
> this.
>
Is it not a generic problem? Or is it specific to this library?
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-01 0:12 ` Gage Eads
@ 2019-04-01 18:08 ` Honnappa Nagarahalli
2019-04-01 18:08 ` Honnappa Nagarahalli
1 sibling, 1 reply; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 18:08 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> Subject: [PATCH v5 5/8] stack: add lock-free stack implementation
>
> This commit adds support for a lock-free (linked list based) stack to the stack
> API. This behavior is selected through a new rte_stack_create() flag,
> RTE_STACK_F_LF.
>
> The stack consists of a linked list of elements, each containing a data pointer
> and a next pointer, and an atomic stack depth counter.
>
> The lock-free push operation enqueues a linked list of pointers by pointing
> the tail of the list to the current stack head, and using a CAS to swing the
> stack head pointer to the head of the list. The operation retries if it is
> unsuccessful (i.e. the list changed between reading the head and modifying
> it), else it adjusts the stack length and returns.
>
> The lock-free pop operation first reserves num elements by adjusting the
> stack length, to ensure the dequeue operation will succeed without blocking.
> It then dequeues pointers by walking the list -- starting from the head -- then
> swinging the head pointer (using a CAS as well). While walking the list, the
> data pointers are recorded in an object table.
>
> This stack algorithm uses a 128-bit compare-and-swap instruction, which
> atomically updates the stack top pointer and a modification counter, to
> protect against the ABA problem.
>
> The linked list elements themselves are maintained in a lock-free LIFO list,
> and are allocated before stack pushes and freed after stack pops.
> Since the stack has a fixed maximum depth, these elements do not need to
> be dynamically created.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> ---
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
^ permalink raw reply [flat|nested] 228+ messages in thread
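The element recycling described in the last paragraph of the quoted commit message can be sketched as follows (illustrative; the real layout is the patch's struct rte_stack_lf):

    /* Two lock-free LIFO lists share one fixed element array. */
    struct rte_stack_lf {
        struct rte_stack_lf_list used;    /* elems carrying user pointers */
        struct rte_stack_lf_list free;    /* spare elems, allocated up front */
        struct rte_stack_lf_elem elems[]; /* fixed backing storage */
    };

    /* push(objs, n): pop n elems from 'free', store the user pointers in
     *                elem->data, push the elems onto 'used'.
     * pop(objs, n):  pop n elems from 'used', copy out elem->data, push
     *                the elems back onto 'free'.
     * The stack has a fixed maximum depth, so 'free' plus 'used' always
     * hold exactly that many elements and no dynamic allocation is needed.
     */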
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-04-01 0:06 ` Eads, Gage
2019-04-01 0:06 ` Eads, Gage
@ 2019-04-01 19:06 ` Honnappa Nagarahalli
2019-04-01 19:06 ` Honnappa Nagarahalli
2019-04-01 20:21 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-01 19:06 UTC (permalink / raw)
To: Eads, Gage, 'dev@dpdk.org'
Cc: 'olivier.matz@6wind.com',
'arybchenko@solarflare.com',
Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> > Subject: RE: [PATCH v3 6/8] stack: add C11 atomic implementation
> >
> > [snip]
> >
> > > > +static __rte_always_inline void
> > > > +__rte_stack_lf_push(struct rte_stack_lf_list *list,
> > > > + struct rte_stack_lf_elem *first,
> > > > + struct rte_stack_lf_elem *last,
> > > > + unsigned int num)
> > > > +{
> > > > +#ifndef RTE_ARCH_X86_64
> > > > + RTE_SET_USED(first);
> > > > + RTE_SET_USED(last);
> > > > + RTE_SET_USED(list);
> > > > + RTE_SET_USED(num);
> > > > +#else
> > > > + struct rte_stack_lf_head old_head;
> > > > + int success;
> > > > +
> > > > + old_head = list->head;
> > > This can be a torn read (same as you have mentioned in
> > > __rte_stack_lf_pop). I suggest we use acquire thread fence here as
> > > well (please see the comments in __rte_stack_lf_pop).
> >
> > Agreed. I'll add the acquire fence.
> >
>
> On second thought, an acquire fence isn't necessary. The acquire fence in
> __rte_stack_lf_pop() ensures the list->head is ordered before the list element
> reads. That isn't necessary here; we need to ensure that the last->next write
> occurs (and is observed) before the list->head write, which the CAS's RELEASE
> success memorder accomplishes.
>
> If a torn read occurs, the CAS will fail and will atomically re-load &old_head.
Following is my understanding:
The general guideline is that there should be a load-acquire for every store-release. In both xxx_lf_pop and xxx_lf_push, the head is store-released, hence the load of the head should be a load-acquire.
From the code (for example, in function _xxx_lf_push), you can notice that there is a dependency chain from 'old_head to new_head to list->head (in compare_exchange)'. When such a dependency exists and the stronger memory orderings are to be avoided, one needs to use __ATOMIC_CONSUME. Currently, compilers will substitute a stronger memory order (__ATOMIC_ACQUIRE) because __ATOMIC_CONSUME is not well defined. Please refer to [1] and [2] for more info.
IMO, since we do not have a pure load-acquire for 128b, I suggest we use a thread_fence with acquire semantics. It is a heavier barrier, but I think it is safer code that adheres to the C11 memory model.
[1] https://preshing.com/20140709/the-purpose-of-memory_order_consume-in-cpp11/
[2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0750r1.html
^ permalink raw reply [flat|nested] 228+ messages in thread
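Concretely, the suggestion amounts to pairing the store-release CAS with an explicit acquire fence around the 128-bit head read, since no pure 128-bit load-acquire exists; a sketch against the push loop (illustrative placement, not necessarily the exact hunk that landed in v6):

    struct rte_stack_lf_head old_head;
    int success;

    old_head = list->head;  /* plain read; may be torn */

    do {
        struct rte_stack_lf_head new_head;

        /* Establish a synchronized-with relationship between this
         * iteration's old_head value and the store-release CAS
         * performed by other threads.
         */
        __atomic_thread_fence(__ATOMIC_ACQUIRE);

        new_head.top = first;
        new_head.cnt = old_head.cnt + 1;
        last->next = old_head.top;

        /* On failure, old_head is atomically reloaded, so a torn
         * initial read is self-correcting.
         */
        success = rte_atomic128_cmp_exchange(
                (rte_int128_t *)&list->head,
                (rte_int128_t *)&old_head,
                (rte_int128_t *)&new_head,
                1, __ATOMIC_RELEASE, __ATOMIC_RELAXED);
    } while (success == 0);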
* Re: [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library
2019-04-01 17:41 ` Honnappa Nagarahalli
2019-04-01 17:41 ` Honnappa Nagarahalli
@ 2019-04-01 19:34 ` Eads, Gage
2019-04-01 19:34 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-01 19:34 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev
Cc: olivier.matz, arybchenko, Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Monday, April 1, 2019 12:41 PM
> To: Eads, Gage <gage.eads@intel.com>; dev@dpdk.org
> Cc: olivier.matz@6wind.com; arybchenko@solarflare.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; nd <nd@arm.com>; thomas@monjalon.net; nd
> <nd@arm.com>
> Subject: RE: [PATCH v3 1/8] stack: introduce rte stack library
>
> >
> > > > +static ssize_t
> > > > +rte_stack_get_memsize(unsigned int count) {
> > > > + ssize_t sz = sizeof(struct rte_stack);
> > > > +
> > > > + /* Add padding to avoid false sharing conflicts */
> > > > + sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *)) +
> > > > + 2 * RTE_CACHE_LINE_SIZE;
> > > I did not understand how the false sharing is caused and how this
> > > padding is solving the issue. Verbose comments would help.
> >
> > The additional padding (beyond the CACHE_LINE_ROUNDUP) is to prevent
> > false sharing caused by adjacent/next-line hardware prefetchers. I'll
> > address this.
> >
> Is it not a generic problem? Or is it specific to this library?
This is not limited to this library, but it only affects systems with (enabled) next-line prefetchers, for example Intel products with an L2 adjacent cache line prefetcher[1]. For those systems, additional padding can potentially improve performance. As I understand it, this was the reason behind the 128B alignment added to rte_ring a couple years ago[2].
[1] https://software.intel.com/en-us/articles/disclosure-of-hw-prefetcher-control-on-some-intel-processors
[2] http://mails.dpdk.org/archives/dev/2017-February/058613.html
^ permalink raw reply [flat|nested] 228+ messages in thread
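As a worked example of the padding under discussion (assuming RTE_CACHE_LINE_SIZE == 64 and 8-byte pointers; the count value is an arbitrary illustration):

    /* rte_stack_get_memsize(1000):
     *
     *   sizeof(struct rte_stack)                      header
     * + RTE_CACHE_LINE_ROUNDUP(1000 * 8) = 8000       125 full lines
     * + 2 * RTE_CACHE_LINE_SIZE          = 128        guard lines
     *
     * The two guard lines keep an adjacent-line prefetcher on one
     * allocation from pulling in (and contending for) the first hot
     * line of whatever happens to be placed next to it in memory.
     */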
* Re: [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation
2019-04-01 19:06 ` Honnappa Nagarahalli
2019-04-01 19:06 ` Honnappa Nagarahalli
@ 2019-04-01 20:21 ` Eads, Gage
2019-04-01 20:21 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-01 20:21 UTC (permalink / raw)
To: Honnappa Nagarahalli, 'dev@dpdk.org'
Cc: 'olivier.matz@6wind.com',
'arybchenko@solarflare.com',
Richardson, Bruce, Ananyev, Konstantin,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> -----Original Message-----
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Monday, April 1, 2019 2:07 PM
> To: Eads, Gage <gage.eads@intel.com>; 'dev@dpdk.org' <dev@dpdk.org>
> Cc: 'olivier.matz@6wind.com' <olivier.matz@6wind.com>;
> 'arybchenko@solarflare.com' <arybchenko@solarflare.com>; Richardson,
> Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Gavin Hu (Arm Technology China)
> <Gavin.Hu@arm.com>; nd <nd@arm.com>; thomas@monjalon.net; nd
> <nd@arm.com>
> Subject: RE: [PATCH v3 6/8] stack: add C11 atomic implementation
>
> > > Subject: RE: [PATCH v3 6/8] stack: add C11 atomic implementation
> > >
> > > [snip]
> > >
> > > > > +static __rte_always_inline void __rte_stack_lf_push(struct
> > > > > +rte_stack_lf_list *list,
> > > > > + struct rte_stack_lf_elem *first,
> > > > > + struct rte_stack_lf_elem *last,
> > > > > + unsigned int num)
> > > > > +{
> > > > > +#ifndef RTE_ARCH_X86_64
> > > > > + RTE_SET_USED(first);
> > > > > + RTE_SET_USED(last);
> > > > > + RTE_SET_USED(list);
> > > > > + RTE_SET_USED(num);
> > > > > +#else
> > > > > + struct rte_stack_lf_head old_head;
> > > > > + int success;
> > > > > +
> > > > > + old_head = list->head;
> > > > This can be a torn read (same as you have mentioned in
> > > > __rte_stack_lf_pop). I suggest we use acquire thread fence here as
> > > > well (please see the comments in __rte_stack_lf_pop).
> > >
> > > Agreed. I'll add the acquire fence.
> > >
> >
> > On second thought, an acquire fence isn't necessary. The acquire fence
> > in
> > __rte_stack_lf_pop() ensures the list->head is ordered before the list
> > element reads. That isn't necessary here; we need to ensure that the
> > last->next write occurs (and is observed) before the list->head write,
> > which the CAS's RELEASE success memorder accomplishes.
> >
> > If a torn read occurs, the CAS will fail and will atomically re-load &old_head.
>
> Following is my understanding:
> The general guideline is there should be a load-acquire for every store-
> release. In both xxx_lf_pop and xxx_lf_push, the head is store-released,
> hence the load of the head should be load-acquire.
> From the code (for ex: in function _xxx_lf_push), you can notice that there is
> dependency from 'old_head to new_head to list->head(in
> compare_exchange)'. When such a dependency exists, if the memory
> orderings have to be avoided, one needs to use __ATOMIC_CONSUME.
> Currently, the compilers will use a stronger memory order (which is
> __ATOMIC_ACQUIRE) as __ATOMIC_CONSUME is not well defined. Please
> refer to [1] and [2] for more info.
>
> IMO, since, for 128b, we do not have a pure load-acquire, I suggest we use
> thread_fence with acquire semantics. It is a heavier barrier, but I think it is a
> safer code which will adhere to C11 memory model.
>
> [1] https://preshing.com/20140709/the-purpose-of-
> memory_order_consume-in-cpp11/
> [2] http://www.open-
> std.org/jtc1/sc22/wg21/docs/papers/2018/p0750r1.html
Thanks for those two links, they're good resources.
I agree with your understanding. I admit I'm not fully convinced the synchronized-with relationship is needed between pop's list->head store and push's list->head load (or between push's list->head store and its list->head load), but it's better to err on the side of caution to ensure it's functionally correct... at least until I can manage to convince you :).
I'll send out a V6 with the acquire thread fence.
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 0/8] Add stack library and new mempool handler
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
` (8 preceding siblings ...)
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
` (10 more replies)
9 siblings, 11 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 259 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 119 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2129 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 0/8] Add stack library and new mempool handler
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library Gage Eads
` (9 subsequent siblings)
10 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-March/125751.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
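To illustrate the memory-ordering notes above (success memorder RELEASE,
failure memorder RELAXED), here is a minimal sketch of a 128-bit
compare-and-swap on a stack head. The head layout and function are
illustrative only, not the patch's code, and assume x86_64:

#include <stdint.h>
#include <rte_atomic.h>
#include <rte_common.h>

/* Illustrative 16-byte head: top-of-stack pointer plus an ABA counter */
struct lf_head {
	void *top;
	uint64_t cnt;
} __rte_aligned(16);

static inline int
try_swap_head(struct lf_head *head, struct lf_head *old, void *new_top)
{
	struct lf_head new_head = { .top = new_top, .cnt = old->cnt + 1 };

	/* RELEASE on success publishes the new element contents before the
	 * head moves; RELAXED on failure is enough because the CAS reloads
	 * the current head into *old before any retry.
	 */
	return rte_atomic128_cmp_exchange((rte_int128_t *)head,
					  (rte_int128_t *)old,
					  (rte_int128_t *)&new_head,
					  1 /* weak */,
					  __ATOMIC_RELEASE,
					  __ATOMIC_RELAXED);
}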
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 259 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 119 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2129 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
2019-04-01 21:14 ` Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-02 11:14 ` Honnappa Nagarahalli
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (8 subsequent siblings)
10 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
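Before the diff, a minimal usage sketch of the new API (names and semantics
taken from rte_stack.h in this patch; error handling trimmed):

#include <rte_stack.h>

static void
stack_demo(void)
{
	void *objs[4] = { (void *)1, (void *)2, (void *)3, (void *)4 };
	void *popped[4];
	struct rte_stack *s;

	/* A 1024-entry stack on any NUMA socket; flags are reserved */
	s = rte_stack_create("demo", 1024, SOCKET_ID_ANY, 0);
	if (s == NULL)
		return;

	/* Bulk push and pop either complete in full or return 0 */
	if (rte_stack_push(s, objs, 4) == 4)
		rte_stack_pop(s, popped, 4); /* popped[0] == objs[3] (LIFO) */

	rte_stack_free(s);
}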
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 +++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 207 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 +++++
lib/librte_stack/rte_stack_std.h | 119 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 661 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index e9ff2b4c2..09fd99dbf 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -416,6 +416,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+ functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
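To make the lookup and query operations above concrete, a short sketch
(illustrative only, not part of the patch):

#include <stdio.h>
#include <rte_errno.h>
#include <rte_stack.h>

static void
inspect_stack(const char *name)
{
	/* Attach by name to a stack created elsewhere */
	struct rte_stack *s = rte_stack_lookup(name);

	if (s == NULL) {
		/* rte_errno is ENOENT (no such stack) or EINVAL (NULL name) */
		return;
	}

	printf("depth=%u free=%u\n",
	       rte_stack_count(s), rte_stack_free_count(s));
}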
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find the tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..d9799d747
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
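To make the arithmetic concrete, a worked example (assuming 8-byte pointers
and RTE_CACHE_LINE_SIZE == 64):

/*
 * count = 1000:
 *   table   = RTE_CACHE_LINE_ROUNDUP(1000 * 8) = 8000 (125 full lines)
 *   padding = 2 * RTE_CACHE_LINE_SIZE          = 128
 *   total   = sizeof(struct rte_stack) + 8000 + 128 bytes
 */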
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..f9af087dc
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,119 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index 99957ba7d..90115477f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 2/8] mempool/stack: convert mempool to use rte stack
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 3/8] test/stack: add stack test Gage Eads
` (7 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 09fd99dbf..13fe49e2b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -293,7 +293,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -421,6 +420,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 3/8] test/stack: add stack test
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 4/8] test/stack: add stack perf test Gage Eads
` (6 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
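For readers skimming the diff below, the API surface under test reduces to the following sketch (not part of the patch; names and sizes are arbitrary):

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

static int
stack_smoke(void)
{
	void *objs[4] = {0};
	struct rte_stack *s;
	int ret = -1;

	s = rte_stack_create("smoke", 64, rte_socket_id(), 0);
	if (s == NULL)
		return -rte_errno;

	/* Push and pop are all-or-nothing: they return n on success and
	 * 0 when the request cannot be satisfied.
	 */
	if (rte_stack_push(s, objs, 4) == 4 &&
	    rte_stack_pop(s, objs, 4) == 4 &&
	    rte_stack_count(s) == 0)
		ret = 0;

	rte_stack_free(s);
	return ret;
}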
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index 13fe49e2b..2842f07ab 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -421,6 +421,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: test/test/*stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..8392e4e4d
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
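One technique in the test above deserves a callout: stack_thread_push_pop() bounds the threads' combined burst reservations with a compare-and-set loop, so concurrent pushers can never oversubscribe the stack. A distilled sketch follows (illustrative only; the helper name and parameters are invented here):

#include <rte_atomic.h>
#include <rte_common.h>
#include <rte_random.h>

static unsigned int
reserve_slots(rte_atomic64_t *sz, unsigned int max_burst,
	      unsigned int capacity)
{
	unsigned int num;
	int success;

	do {
		uint64_t cur = rte_atomic64_read(sz);

		/* Pick a burst that cannot exceed the remaining capacity. */
		num = RTE_MIN(rte_rand() % max_burst, capacity - cur);

		/* Claim the slots; retry if another thread raced us. */
		success = rte_atomic64_cmpset((volatile uint64_t *)sz,
					      cur, cur + num);
	} while (success == 0);

	return num; /* the caller later subtracts num from *sz */
}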
* [dpdk-dev] [PATCH v6 4/8] test/stack: add stack perf test
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 3/8] test/stack: add stack test Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 5/8] stack: add lock-free stack implementation Gage Eads
` (5 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
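Each of the measurements listed above follows the same rdtsc-bracketed pattern; the following sketch distills it (not part of the patch; iteration and burst counts are arbitrary):

#include <stdio.h>
#include <rte_cycles.h>
#include <rte_stack.h>

static void
measure_push_pop(struct rte_stack *s, unsigned int burst)
{
	unsigned int iterations = 1000000;
	void *objs[32] = {0}; /* assumes burst <= 32 */
	unsigned int i;

	uint64_t start = rte_rdtsc();

	for (i = 0; i < iterations; i++) {
		rte_stack_push(s, objs, burst);
		rte_stack_pop(s, objs, burst);
	}

	uint64_t end = rte_rdtsc();

	printf("Average cycles per object push/pop: %.2f\n",
	       (double)(end - start) / (iterations * burst));
}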
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 5/8] stack: add lock-free stack implementation
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation Gage Eads
` (4 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
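To make the ABA discussion concrete, here is a simplified push sketch. It is not the patch's code: it uses GCC's generic 16-byte __atomic_compare_exchange builtin (which compiles to cmpxchg16b on x86-64 when built with -mcx16) in place of the library's internal helpers, and the structure names only mirror those in the diff.

#include <stdint.h>

struct lf_elem {
	void *data;
	struct lf_elem *next;
};

/* The 16-byte head is updated as a unit: top pointer plus ABA-guard
 * counter.
 */
struct lf_head {
	struct lf_elem *top;
	uint64_t cnt;
} __attribute__((aligned(16)));

static void
lf_push(struct lf_head *head, struct lf_elem *first, struct lf_elem *last)
{
	struct lf_head old = *head; /* the CAS below validates this read */
	struct lf_head new;

	do {
		/* Point the new list's tail at the current top... */
		last->next = old.top;
		new.top = first;
		/* ...and bump the counter, so a top pointer that was
		 * popped and re-pushed (same address, different contents)
		 * still fails the compare.
		 */
		new.cnt = old.cnt + 1;
		/* On failure, 'old' is refreshed with the current head
		 * and the loop retries.
		 */
	} while (!__atomic_compare_exchange(head, &old, &new, 0,
					    __ATOMIC_RELEASE,
					    __ATOMIC_RELAXED));
}

The pop side is symmetric: it first reserves elements by decrementing the length counter (so the subsequent list walk cannot underflow), then swings head.top with the same 16-byte compare-and-swap.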
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 62 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 446 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+------------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, the lock-free stack uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index d9799d747..e0f9e9cff 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -30,6 +30,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -49,10 +78,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -75,7 +115,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -99,7 +142,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -118,7 +164,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -158,7 +207,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
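For reference, a minimal sketch of exercising these internals through the
public API (sizes and names are illustrative; error handling is elided and
EAL is assumed to be initialized):

	#include <rte_lcore.h>
	#include <rte_stack.h>

	static void
	lf_stack_example(void)
	{
		void *objs[32], *popped[32];
		struct rte_stack *s;
		unsigned int n;

		/* RTE_STACK_F_LF selects the lock-free implementation */
		s = rte_stack_create("example", 1024, rte_socket_id(),
				     RTE_STACK_F_LF);
		if (s == NULL)
			return;

		/* ... fill objs[] with object pointers ... */

		n = rte_stack_push(s, objs, 32); /* all-or-nothing: 32 or 0 */
		n = rte_stack_pop(s, popped, n);
		(void)n;

		rte_stack_free(s);
	}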
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..243d71699
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory models
+ * to establish a synchronized-with relationship between the
+ * list->head load and store-release operations (as part of the
+ * rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory models
+ * to ensure the LF LIFO element reads are properly ordered
+ * with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-02 11:11 ` Honnappa Nagarahalli
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 7/8] test/stack: add lock-free stack tests Gage Eads
` (3 subsequent siblings)
10 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
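As a minimal sketch of the reservation pattern built from these builtins
(illustrative only; a bare uint64_t stands in for the list's len field):

	#include <stdint.h>

	/* Reserve num elements from a length counter without blocking;
	 * returns 1 on success, 0 if too few elements are available.
	 */
	static inline int
	reserve(uint64_t *len, uint64_t num)
	{
		uint64_t cur = __atomic_load_n(len, __ATOMIC_ACQUIRE);

		while (1) {
			if (cur < num)
				return 0;

			/* On failure, cur is refreshed with the latest value */
			if (__atomic_compare_exchange_n(len, &cur, cur - num,
							0, __ATOMIC_ACQUIRE,
							__ATOMIC_ACQUIRE))
				return 1;
		}
	}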
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..a316e9af5
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Use an acquire fence to establish a synchronized-with
+ * relationship between the list->head load and store-release
+ * operations (as part of the rte_atomic128_cmp_exchange()).
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 7/8] test/stack: add lock-free stack tests
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
` (2 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
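Once built, the new cases can be run from the dpdk-test prompt, for example
(invocation shown for illustration):

	RTE>>stack_lf_autotest
	RTE>>stack_lf_perf_autotest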
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 8392e4e4d..f199136aa 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v6 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-01 21:14 ` Gage Eads
2019-04-01 21:14 ` Gage Eads
2019-04-03 17:04 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-01 21:14 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked-list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*; however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
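As an aside (this sketch is not part of the patch), an application opts in
to the new handler by name; "lf_stack" is the ops name registered at the end
of this diff, and the nonzero cache_size below enables the per-thread
mempool cache mentioned above:

    #include <rte_mempool.h>

    static struct rte_mempool *
    create_lf_pool(const char *name, unsigned int n, unsigned int elt_size)
    {
        struct rte_mempool *mp;

        /* cache_size = 256: the per-lcore cache absorbs most alloc/free
         * bursts, reducing accesses to the underlying lock-free stack.
         */
        mp = rte_mempool_create_empty(name, n, elt_size, 256, 0,
                                      SOCKET_ID_ANY, 0);
        if (mp == NULL)
            return NULL;

        /* Replace the default ops with the lock-free stack handler. */
        if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
            rte_mempool_populate_default(mp) < 0) {
            rte_mempool_free(mp);
            return NULL;
        }

        return mp;
    }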
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index c1346363b..1a4391898 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -563,6 +563,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-01 21:14 ` Gage Eads
@ 2019-04-02 11:11 ` Honnappa Nagarahalli
2019-04-02 11:11 ` Honnappa Nagarahalli
1 sibling, 1 reply; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-02 11:11 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> Subject: [PATCH v6 6/8] stack: add C11 atomic implementation
>
> This commit adds an implementation of the lock-free stack push, pop, and
> length functions that use __atomic builtins, for systems that benefit from the
> finer-grained memory ordering control.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> ---
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
<snip>
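(For readers following the thread, a minimal illustration -- not taken from
the patch -- of the finer-grained control the __atomic builtins provide:
success and failure memory orders are chosen per call site rather than the
full barriers implied by the legacy rte_atomic API:)

    #include <stdint.h>

    static inline int
    cas_release(uint64_t *loc, uint64_t *expected, uint64_t desired)
    {
        /* Strong CAS: RELEASE ordering on success, RELAXED on failure. */
        return __atomic_compare_exchange_n(loc, expected, desired,
                                           0 /* strong */,
                                           __ATOMIC_RELEASE,
                                           __ATOMIC_RELAXED);
    }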
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library Gage Eads
2019-04-01 21:14 ` Gage Eads
@ 2019-04-02 11:14 ` Honnappa Nagarahalli
2019-04-02 11:14 ` Honnappa Nagarahalli
2019-04-03 17:06 ` Thomas Monjalon
1 sibling, 2 replies; 228+ messages in thread
From: Honnappa Nagarahalli @ 2019-04-02 11:14 UTC (permalink / raw)
To: Gage Eads, dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd, thomas, nd
> Subject: [PATCH v6 1/8] stack: introduce rte stack library
>
> The rte_stack library provides an API for configuration and use of a bounded
> stack of pointers. Push and pop operations are MT-safe, allowing concurrent
> access, and the interface supports pushing and popping multiple pointers at a
> time.
>
> The library's interface is modeled after another DPDK data structure, rte_ring,
> and its lock-based implementation is derived from the stack mempool
> handler. An upcoming commit will migrate the stack mempool handler to
> rte_stack.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> ---
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
<snip>
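(Aside: a hedged sketch of the rte_ring-style, name-based lookup the quoted
text alludes to; rte_stack_lookup() and its rte_errno values are from the
patch, while the stack name "demo" is hypothetical:)

    struct rte_stack *s = rte_stack_lookup("demo");

    if (s == NULL && rte_errno == ENOENT)
        ; /* no stack with that name has been created */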
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/8] Add stack library and new mempool handler
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (8 preceding siblings ...)
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-03 17:04 ` Thomas Monjalon
2019-04-03 17:04 ` Thomas Monjalon
2019-04-03 17:10 ` Eads, Gage
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
10 siblings, 2 replies; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-03 17:04 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
01/04/2019 23:14, Gage Eads:
> Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
> so it is currently limited to the x86_64 platform.
I'm waiting for an update of the 128-bit compare-and-swap.
It is blocking the integration of this patch.
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library
2019-04-02 11:14 ` Honnappa Nagarahalli
2019-04-02 11:14 ` Honnappa Nagarahalli
@ 2019-04-03 17:06 ` Thomas Monjalon
2019-04-03 17:06 ` Thomas Monjalon
2019-04-03 17:13 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-03 17:06 UTC (permalink / raw)
To: Gage Eads
Cc: dev, Honnappa Nagarahalli, olivier.matz, arybchenko,
bruce.richardson, konstantin.ananyev,
Gavin Hu (Arm Technology China),
nd
02/04/2019 13:14, Honnappa Nagarahalli:
> > Subject: [PATCH v6 1/8] stack: introduce rte stack library
> >
> > The rte_stack library provides an API for configuration and use of a bounded
> > stack of pointers. Push and pop operations are MT-safe, allowing concurrent
> > access, and the interface supports pushing and popping multiple pointers at a
> > time.
> >
> > The library's interface is modeled after another DPDK data structure, rte_ring,
> > and its lock-based implementation is derived from the stack mempool
> > handler. An upcoming commit will migrate the stack mempool handler to
> > rte_stack.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
It does not compile for Arm:
lib/librte_stack/rte_stack_std.h:
In function '__rte_stack_std_pop':
lib/librte_stack/rte_stack_std.h:68:6: error:
implicit declaration of function 'unlikely'
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 0/8] Add stack library and new mempool handler
2019-04-03 17:04 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 17:04 ` Thomas Monjalon
@ 2019-04-03 17:10 ` Eads, Gage
2019-04-03 17:10 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-03 17:10 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
> 01/04/2019 23:14, Gage Eads:
> > Note that the lock-free algorithm relies on a 128-bit
> > compare-and-swap[1], so it is currently limited to the x86_64 platform.
>
> I'm waiting for an update of the 128-bit compare-and-swap.
> It is blocking the integration of this patch.
>
Sorry for that; I misunderstood your earlier comment on that patch. I'll address it and re-submit.
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library
2019-04-03 17:06 ` Thomas Monjalon
2019-04-03 17:06 ` Thomas Monjalon
@ 2019-04-03 17:13 ` Eads, Gage
2019-04-03 17:13 ` Eads, Gage
2019-04-03 17:23 ` Thomas Monjalon
1 sibling, 2 replies; 228+ messages in thread
From: Eads, Gage @ 2019-04-03 17:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Honnappa Nagarahalli, olivier.matz, arybchenko, Richardson,
Bruce, Ananyev, Konstantin, Gavin Hu (Arm Technology China),
nd
> 02/04/2019 13:14, Honnappa Nagarahalli:
> > > Subject: [PATCH v6 1/8] stack: introduce rte stack library
> > >
> > > The rte_stack library provides an API for configuration and use of a
> > > bounded stack of pointers. Push and pop operations are MT-safe,
> > > allowing concurrent access, and the interface supports pushing and
> > > popping multiple pointers at a time.
> > >
> > > The library's interface is modeled after another DPDK data
> > > structure, rte_ring, and its lock-based implementation is derived
> > > from the stack mempool handler. An upcoming commit will migrate the
> > > stack mempool handler to rte_stack.
> > >
> > > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> > > ---
> > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>
> It does not compile for Arm:
>
> lib/librte_stack/rte_stack_std.h:
> In function '__rte_stack_std_pop':
> lib/librte_stack/rte_stack_std.h:68:6: error:
> implicit declaration of function 'unlikely'
Missing rte_branch_prediction.h include -- I'll fix and resubmit. Thanks for checking the non-x86 builds.
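For reference, unlikely() is defined in rte_branch_prediction.h, so the fix
should amount to a single include near the top of rte_stack_std.h:

    #include <rte_branch_prediction.h>  /* for likely()/unlikely() */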
I can hold off resubmission until the 128-bit CAS patch is merged, so this series is properly tested in the automated build + test pipeline, if you'd prefer.
Thanks,
Gage
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library
2019-04-03 17:13 ` Eads, Gage
2019-04-03 17:13 ` Eads, Gage
@ 2019-04-03 17:23 ` Thomas Monjalon
2019-04-03 17:23 ` Thomas Monjalon
1 sibling, 1 reply; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-03 17:23 UTC (permalink / raw)
To: Eads, Gage
Cc: dev, Honnappa Nagarahalli, olivier.matz, arybchenko, Richardson,
Bruce, Ananyev, Konstantin, Gavin Hu (Arm Technology China),
nd
03/04/2019 19:13, Eads, Gage:
> > 02/04/2019 13:14, Honnappa Nagarahalli:
> > > > Subject: [PATCH v6 1/8] stack: introduce rte stack library
> > > >
> > > > The rte_stack library provides an API for configuration and use of a
> > > > bounded stack of pointers. Push and pop operations are MT-safe,
> > > > allowing concurrent access, and the interface supports pushing and
> > > > popping multiple pointers at a time.
> > > >
> > > > The library's interface is modeled after another DPDK data
> > > > structure, rte_ring, and its lock-based implementation is derived
> > > > from the stack mempool handler. An upcoming commit will migrate the
> > > > stack mempool handler to rte_stack.
> > > >
> > > > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > > > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> > > > ---
> > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> >
> > It does not compile for Arm:
> >
> > lib/librte_stack/rte_stack_std.h:
> > In function '__rte_stack_std_pop':
> > lib/librte_stack/rte_stack_std.h:68:6: error:
> > implicit declaration of function 'unlikely'
>
> Missing rte_branch_prediction.h include -- I'll fix and resubmit. Thanks for checking the non-x86 builds.
>
> I can hold off resubmission until the 128-bit CAS patch is merged, so this series is properly tested in the automated build + test pipeline, if you'd prefer.
Yes, that would be best.
But we need to merge everything tomorrow at the latest.
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 0/8] Add stack library and new mempool handler
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
` (9 preceding siblings ...)
2019-04-03 17:04 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Thomas Monjalon
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
` (10 more replies)
10 siblings, 11 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-April/129014.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
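For illustration only (this sketch is not part of the series), the shape of
the 128-bit operation the algorithm depends on: the top-of-stack pointer and
a modification counter are swapped as a single unit, so an interleaved
pop/push pair cannot go unnoticed (the ABA problem). The memory orders match
the RELEASE/RELAXED choice recorded in the v5 notes below; the struct layout
and names are illustrative:

    #include <rte_atomic.h>

    struct lf_head {
        void *top;     /* first element in the list */
        uint64_t cnt;  /* modification counter; defeats ABA */
    };

    static inline int
    try_swap_head(rte_int128_t *dst, struct lf_head *exp, struct lf_head *src)
    {
        /* Operands must be 16-byte aligned for the 128-bit CAS. */
        return rte_atomic128_cmp_exchange(dst,
                        (rte_int128_t *)exp, (rte_int128_t *)src,
                        1 /* weak */,
                        __ATOMIC_RELEASE, __ATOMIC_RELAXED);
    }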
---
v7:
- Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
- Add rte_compat.h include to rte_stack.h for __rte_experimental
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 260 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 121 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2132 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 1/8] stack: introduce rte stack library
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
2019-04-03 20:09 ` Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (8 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
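Not part of the diff below -- a minimal usage sketch of the API this commit
adds (every function shown here appears in rte_stack.h in this patch):

    #include <rte_lcore.h>
    #include <rte_stack.h>

    static int
    stack_demo(void)
    {
        void *objs[8], *popped[8];
        struct rte_stack *s;
        unsigned int i, n;

        for (i = 0; i < 8; i++)
            objs[i] = (void *)(uintptr_t)i;

        /* flags == 0 selects the standard (lock-based) stack */
        s = rte_stack_create("demo", 64, rte_socket_id(), 0);
        if (s == NULL)
            return -rte_errno;

        n = rte_stack_push(s, objs, 8);    /* all-or-nothing: returns 0 or 8 */
        n += rte_stack_pop(s, popped, 8);  /* pops in LIFO order */

        rte_stack_free(s);
        return (n == 16) ? 0 : -1;
    }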
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 +++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 208 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 +++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 664 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+  functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..cebb5be13
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
* [dpdk-dev] [PATCH v7 1/8] stack: introduce rte stack library
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
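As a quick illustration of the API flow described above, a minimal sketch (assuming -DALLOW_EXPERIMENTAL_API; the stack name and sizes are illustrative, and error handling is trimmed):

#include <rte_lcore.h>
#include <rte_stack.h>

static void
stack_example(void)
{
	void *objs[8] = { NULL };
	struct rte_stack *s;
	unsigned int n;

	/* "example" and the size 1024 are illustrative values only */
	s = rte_stack_create("example", 1024, rte_socket_id(), 0);
	if (s == NULL)
		return; /* rte_errno indicates the failure reason */

	/* Bursts are all-or-nothing: each call returns either 0 or n */
	n = rte_stack_push(s, objs, 8);
	n = rte_stack_pop(s, objs, n);
	(void)n;

	rte_stack_free(s);
}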
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 +++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 208 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 +++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 664 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+  functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find the tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..cebb5be13
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,208 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the newly allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
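As a worked example: assuming 8-byte pointers and 64-byte cache lines, a count of 1024 yields sizeof(struct rte_stack) + RTE_CACHE_LINE_ROUNDUP(1024 * 8) + 2 * 64 = sizeof(struct rte_stack) + 8192 + 128 bytes.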
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add the elements to the stack */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 2/8] mempool/stack: convert mempool to use rte stack
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 3/8] test/stack: add stack test Gage Eads
` (7 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
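For context, an application opts into this handler through the standard mempool ops mechanism. A minimal sketch (the pool name and sizes are illustrative only, and error handling is trimmed):

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Bind an empty mempool to the "stack" ops before populating it.
 * "example_pool" and the sizes below are illustrative values.
 */
static struct rte_mempool *
make_stack_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("example_pool", 4096, 2048,
				      0, 0, rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "stack", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}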
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index f30fc4aa6..e09e7d93f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,7 +303,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -431,6 +430,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 3/8] test/stack: add stack test
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (2 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 4/8] test/stack: add stack perf test Gage Eads
` (6 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
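The multi-core exercise follows the usual EAL launch pattern, roughly as in this sketch (worker is a placeholder for the per-lcore push/pop loop, not a function from the test itself):

#include <rte_launch.h>
#include <rte_lcore.h>

static int worker(void *arg); /* placeholder: per-lcore push/pop loop */

static void
run_on_all_lcores(void *arg)
{
	unsigned int lcore_id;

	/* Launch the worker on every slave lcore, then join in from this one */
	RTE_LCORE_FOREACH_SLAVE(lcore_id)
		rte_eal_remote_launch(worker, arg, lcore_id);

	worker(arg);
	rte_eal_mp_wait_lcore();
}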
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index e09e7d93f..e4e6d1b15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..8392e4e4d
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 4/8] test/stack: add stack perf test
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (3 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 3/8] test/stack: add stack test Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 5/8] stack: add lock-free stack implementation Gage Eads
` (5 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
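Each figure reduces to an rdtsc delta over a timed push/pop loop, averaged per object; a condensed sketch of the measurement core used by bulk_push_pop() in this patch (not a separate API, just the pattern):

/* Time `iterations` push+pop round trips of `size` objects each and
 * return the average cycle cost per object moved.
 */
static double
time_push_pop(struct rte_stack *s, void **objs, unsigned int size,
	      unsigned int iterations)
{
	uint64_t start, end;
	unsigned int i;

	start = rte_rdtsc();

	for (i = 0; i < iterations; i++) {
		rte_stack_push(s, objs, size);
		rte_stack_pop(s, objs, size);
	}

	end = rte_rdtsc();

	return (double)(end - start) / ((double)iterations * size);
}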
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores to measure stack
+ * performance between hyperthread siblings, between cores on the same
+ * socket, and between cores on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
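One detail worth noting from the patch above: the multi-lcore measurements depend on a decrement-and-spin start barrier so that every participant enters its timed loop at the same instant. A sketch of that pattern, extracted from bulk_push_pop() (nb_workers is an assumed name for the participant count):

static rte_atomic32_t lcore_barrier;

/* Launcher: arm the barrier with the number of participating lcores. */
static void
barrier_arm(int nb_workers)
{
	rte_atomic32_set(&lcore_barrier, nb_workers);
}

/* Each worker: announce arrival, then spin until everyone has arrived. */
static void
barrier_wait(void)
{
	rte_atomic32_sub(&lcore_barrier, 1);
	while (rte_atomic32_read(&lcore_barrier) != 0)
		rte_pause();
}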
* [dpdk-dev] [PATCH v7 5/8] stack: add lock-free stack implementation
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (4 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 6/8] stack: add C11 atomic implementation Gage Eads
` (4 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
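For reference, the push retry loop described above takes the following shape (condensed from __rte_stack_lf_push_elems() in rte_stack_lf_generic.h below; the acquire fence and the non-x86_64 fallback are elided):

/* old_head is refreshed by the CAS on failure, so each retry operates
 * on a fresh view of the stack top.
 */
old_head = list->head;

do {
	new_head.top = first;
	new_head.cnt = old_head.cnt + 1; /* ABA modification counter */

	last->next = old_head.top;

	success = rte_atomic128_cmp_exchange(
			(rte_int128_t *)&list->head,
			(rte_int128_t *)&old_head,
			(rte_int128_t *)&new_head,
			1, __ATOMIC_RELEASE,
			__ATOMIC_RELAXED);
} while (success == 0);

rte_atomic64_add(&list->len, num);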
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 62 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 446 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These
functions are multi-threading safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+------------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index cebb5be13..54e795682 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -31,6 +31,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -50,10 +79,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is
+ * currently only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -76,7 +116,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -100,7 +143,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -119,7 +165,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -159,7 +208,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
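A short usage sketch (not part of the patch): since rte_stack_create() fails with RTE_STACK_F_LF on platforms lacking a 128-bit compare-and-swap, an application can request the lock-free variant and fall back to the lock-based one:

/* Prefer the lock-free stack; fall back to the lock-based variant on
 * platforms where RTE_STACK_F_LF is unsupported and creation fails.
 */
struct rte_stack *s;

s = rte_stack_create("pool_stack", 1024, rte_socket_id(), RTE_STACK_F_LF);
if (s == NULL)
	s = rte_stack_create("pool_stack", 1024, rte_socket_id(), 0);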
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -50,10 +79,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -76,7 +116,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -100,7 +143,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -119,7 +165,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -159,7 +208,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
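Taken together, creating and using a lock-free stack with this API looks
roughly like the sketch below (the stack name and sizes are arbitrary, and
it must run on an EAL thread; error handling abbreviated):

#include <stdint.h>

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

static int
lf_stack_example(void)
{
    void *objs[8];
    struct rte_stack *s;
    unsigned int i, n;

    for (i = 0; i < 8; i++)
        objs[i] = (void *)(uintptr_t)(i + 1);

    /* Returns NULL with rte_errno set if RTE_STACK_F_LF is requested
     * on a platform without 128-bit CAS support.
     */
    s = rte_stack_create("example", 1024, rte_socket_id(),
                         RTE_STACK_F_LF);
    if (s == NULL)
        return -rte_errno;

    n = rte_stack_push(s, objs, 8); /* 8 on success, 0 if no room */
    n = rte_stack_pop(s, objs, n);  /* pops up to n objects */
    (void)n;

    rte_stack_free(s);
    return 0;
}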
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
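As a back-of-the-envelope check of rte_stack_lf_get_memsize(), assuming
64-byte cache lines and a 16-byte rte_stack_lf_elem (two 8-byte pointers,
as on x86_64):

/* Worked example for count = 1024 (assumed: RTE_CACHE_LINE_SIZE == 64,
 * sizeof(struct rte_stack_lf_elem) == 16):
 *
 *   sz  = sizeof(struct rte_stack)             cache-aligned header
 *   sz += RTE_CACHE_LINE_ROUNDUP(1024 * 16)  = 16384  element storage
 *   sz += 2 * RTE_CACHE_LINE_SIZE            =   128  trailing padding
 *
 * The two trailing cache lines keep a next-line hardware prefetcher that
 * touches the end of this allocation from falsely sharing data with
 * whatever the memzone allocator places immediately after it.
 */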
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 6/8] stack: add C11 atomic implementation
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (5 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 7/8] test/stack: add lock-free stack tests Gage Eads
` (3 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..a316e9af5
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Use an acquire fence to establish a synchronized-with
+ * relationship between the list->head load and store-release
+ * operations (as part of the rte_atomic128_cmp_exchange()).
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
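The practical difference between the two headers is the fence executed on
every iteration of the push/pop retry loops before re-reading list->head.
A rough sketch (illustrative helper functions, not part of the patch):

#include <rte_atomic.h>

static inline void
head_load_fence_generic(void)
{
    rte_smp_mb(); /* full barrier; a real fence instruction on x86_64 */
}

static inline void
head_load_fence_c11(void)
{
    /* acquire-only; on x86_64 this is just a compiler barrier */
    __atomic_thread_fence(__ATOMIC_ACQUIRE);
}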
* [dpdk-dev] [PATCH v7 7/8] test/stack: add lock-free stack tests
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (6 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
` (2 subsequent siblings)
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 8392e4e4d..f199136aa 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v7 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (7 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-03 20:09 ` Gage Eads
2019-04-03 20:09 ` Gage Eads
2019-04-03 20:39 ` [dpdk-dev] [PATCH v7 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
10 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:09 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for lock-free (linked list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6a04c3c33..fa8afdb3a 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -581,6 +581,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
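For context, selecting the new handler from application code looks roughly
like the following sketch (the pool name and sizing are arbitrary; error
handling abbreviated):

#include <rte_mempool.h>

static struct rte_mempool *
create_lf_stack_pool(void)
{
    struct rte_mempool *mp;

    mp = rte_mempool_create_empty("lf_pool", 4096, 2048,
                                  RTE_MEMPOOL_CACHE_MAX_SIZE, 0,
                                  SOCKET_ID_ANY, 0);
    if (mp == NULL)
        return NULL;

    /* Bind the pool to the lock-free stack ops added by this patch */
    if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
        rte_mempool_populate_default(mp) < 0) {
        rte_mempool_free(mp);
        return NULL;
    }

    return mp;
}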
* Re: [dpdk-dev] [PATCH v7 0/8] Add stack library and new mempool handler
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (8 preceding siblings ...)
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-03 20:39 ` Thomas Monjalon
2019-04-03 20:39 ` Thomas Monjalon
2019-04-03 20:49 ` Eads, Gage
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
10 siblings, 2 replies; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-03 20:39 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
03/04/2019 22:09, Gage Eads:
> v7:
> - Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
> - Add rte_compat.h include to rte_stack.h for __rte_experimental
There is another error when compiling for Arm:
lib/librte_stack/rte_stack.h:76:2: error:
implicit declaration of function 'RTE_ASSERT'
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/8] Add stack library and new mempool handler
2019-04-03 20:39 ` [dpdk-dev] [PATCH v7 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 20:39 ` Thomas Monjalon
@ 2019-04-03 20:49 ` Eads, Gage
2019-04-03 20:49 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-03 20:49 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, April 3, 2019 3:40 PM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; olivier.matz@6wind.com; arybchenko@solarflare.com;
> Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com
> Subject: Re: [dpdk-dev] [PATCH v7 0/8] Add stack library and new mempool
> handler
>
> 03/04/2019 22:09, Gage Eads:
> > v7:
> > - Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
> > - Add rte_compat.h include to rte_stack.h for __rte_experimental
>
> There is another error when compiling for Arm:
>
> lib/librte_stack/rte_stack.h:76:2: error:
> implicit declaration of function 'RTE_ASSERT'
Fix incoming. Looking through the rest of that header, I don't see any other externally defined macros/data structures/functions/etc. that would need another include.
Also note that v7's ci/Performance-Testing build fail (http://mails.dpdk.org/archives/test-report/2019-April/079261.html) is because dpdklab applied the patch on commit 3c45889189924067, which is older than the 128-bit CAS patch.
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 0/8] Add stack library and new mempool handler
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
` (9 preceding siblings ...)
2019-04-03 20:39 ` [dpdk-dev] [PATCH v7 0/8] Add stack library and new " Thomas Monjalon
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
` (9 more replies)
10 siblings, 10 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-April/129014.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v8:
- Add rte_debug.h include to rte_stack.h for RTE_ASSERT()
v7:
- Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
- Add rte_compat.h include to rte_stack.h for __rte_experimental
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 423 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 356 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 261 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 121 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2133 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 1/8] stack: introduce rte stack library
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
2019-04-03 20:50 ` Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
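To make the interface concrete before the diff, a minimal usage sketch built only from the functions this patch adds (illustrative, with error handling abbreviated):
	#include <rte_stack.h>

	static void
	stack_usage_example(void)
	{
		void *objs[8] = { NULL }, *popped[8];
		struct rte_stack *s;
		unsigned int n;

		s = rte_stack_create("example", 64, SOCKET_ID_ANY, 0);
		if (s == NULL)
			return;

		/* Bursts are all-or-nothing: push returns 8 or 0. */
		n = rte_stack_push(s, objs, 8);

		/* LIFO order: the most recently pushed objects pop first. */
		n = rte_stack_pop(s, popped, n);
		(void)n; /* sketch only; result unused */

		rte_stack_free(s);
	}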
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 ++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 209 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 ++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+ functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..42d042715
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 1/8] stack: introduce rte stack library
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 ++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 209 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 ++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These function
+ are multi-threading safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..42d042715
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
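A minimal usage sketch of the API introduced by this patch (rte_stack_create,
rte_stack_free and rte_stack_lookup as exported above, plus the inline push
and pop). The name and sizes are illustrative rather than taken from the
patch; note that push and pop are all-or-nothing, returning either 0 or n.

#include <stdint.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

static int
stack_usage_sketch(void)
{
	void *objs[8];
	struct rte_stack *s;
	unsigned int i;
	int ret = -1;

	for (i = 0; i < 8; i++)
		objs[i] = (void *)(uintptr_t)(i + 1); /* arbitrary payloads */

	s = rte_stack_create("sketch", 1024, rte_socket_id(), 0);
	if (s == NULL)
		return -rte_errno;

	/* All-or-nothing: either all 8 pointers are pushed, or none (0) */
	if (rte_stack_push(s, objs, 8) != 8)
		goto out;

	/* LIFO: objs[] now receives the pointers in reverse push order */
	if (rte_stack_pop(s, objs, 8) != 8)
		goto out;

	ret = 0;
out:
	rte_stack_free(s);
	return ret;
}

From another thread in the same process, rte_stack_lookup("sketch") would
return the same handle, since stacks are registered and looked up by name.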
* [dpdk-dev] [PATCH v8 2/8] mempool/stack: convert mempool to use rte stack
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index f30fc4aa6..e09e7d93f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,7 +303,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -431,6 +430,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
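Once this handler is built in, an application selects it by name when
creating a mempool. A hedged sketch follows: rte_mempool_create_empty(),
rte_mempool_set_ops_byname() and rte_mempool_populate_default() are existing
mempool API, the handler name "stack" is assumed from the driver's ops
registration, and the pool parameters are arbitrary.

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_stack_backed_pool(void)
{
	struct rte_mempool *mp;

	/* 4096 elements of 2048 bytes, no per-lcore cache */
	mp = rte_mempool_create_empty("stack_pool", 4096, 2048, 0, 0,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Attach the lock-based stack handler converted by this patch */
	if (rte_mempool_set_ops_byname(mp, "stack", NULL) < 0)
		goto error;

	if (rte_mempool_populate_default(mp) < 0)
		goto error;

	return mp;

error:
	rte_mempool_free(mp);
	return NULL;
}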
* [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (2 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 22:41 ` Thomas Monjalon
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
9 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 416 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index e09e7d93f..e4e6d1b15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..8392e4e4d
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,410 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library proper handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK - 1, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 4/8] test/stack: add stack perf test
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (3 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 343 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 346 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..484370d30
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,343 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+#include <rte_stack.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf when between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 5/8] stack: add lock-free stack implementation
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (4 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This stack algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
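A condensed sketch of the head update described above. This is not the
patch's code (which lives in rte_stack_lf_generic.h); it assumes x86_64 with
the GCC/Clang __sync builtin on __int128 (built with -mcx16), and all names
are illustrative. The 16-byte {top, counter} pair is what the single 128-bit
CAS swaps.

#include <stdint.h>

struct lf_elem {
	void *data;		/* application pointer */
	struct lf_elem *next;	/* next element in the list */
};

struct lf_head {
	struct lf_elem *top;	/* stack top */
	uint64_t cnt;		/* modification counter (ABA guard) */
} __attribute__((aligned(16)));

static void
lf_push_sketch(struct lf_head *head, struct lf_elem *first,
	       struct lf_elem *last)
{
	struct lf_head old, new;

	do {
		/* A torn 16-byte read is harmless here: the CAS below
		 * fails and the loop retries with a fresh snapshot.
		 */
		old = *head;

		/* Point the list's tail at the current stack top */
		last->next = old.top;

		new.top = first;
		new.cnt = old.cnt + 1;	/* counter defeats ABA reuse */
	} while (!__sync_bool_compare_and_swap(
			(unsigned __int128 *)(void *)head,
			*(unsigned __int128 *)(void *)&old,
			*(unsigned __int128 *)(void *)&new));
}

Even if another thread pops old.top and pushes it back between the read and
the CAS, the incremented counter makes the 128-bit compare fail, so a stale
next pointer is never published.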
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 62 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 446 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e., the list changed between reading the head and modifying it); otherwise
+it adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different from what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case, thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
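To make the failure concrete, consider the head-update CAS from this patch
with the scenario above plugged in (illustrative fragment, not part of the
patch; it uses the rte_stack_lf_head type and rte_atomic128_cmp_exchange()
wrapper introduced below):

  /* Thread A snapshots the head as {top = X, cnt = 5}. */
  struct rte_stack_lf_head old_head = list->head;
  struct rte_stack_lf_head new_head;

  /* Other threads pop X and eventually push it back; each push/pop
   * incremented cnt, so memory now holds {top = X, cnt = 7}.
   */
  new_head.top = old_head.top->next;
  new_head.cnt = old_head.cnt + 1;

  /* The 128-bit CAS compares both fields: the expected {X, 5} does not
   * match the stored {X, 7}, so the exchange fails, old_head is
   * refreshed, and the operation retries. A pointer-only CAS would have
   * succeeded here and corrupted the list.
   */
  success = rte_atomic128_cmp_exchange((rte_int128_t *)&list->head,
                                       (rte_int128_t *)&old_head,
                                       (rte_int128_t *)&new_head,
                                       1, __ATOMIC_RELEASE,
                                       __ATOMIC_RELAXED);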
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 42d042715..58e68dd87 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -32,6 +32,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -51,10 +80,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is
+ * currently supported only on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -77,7 +117,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -101,7 +144,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -120,7 +166,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -160,7 +209,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
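As a quick sanity check on the sizing above (a worked example, assuming the
usual 64-byte RTE_CACHE_LINE_SIZE on x86_64 and the 16-byte
rte_stack_lf_elem this patch defines):

  /* rte_stack_lf_get_memsize(1024) =
   *     sizeof(struct rte_stack)            stack header (incl. elems[] base)
   *   + RTE_CACHE_LINE_ROUNDUP(1024 * 16)   = 16384 bytes of elements
   *   + 2 * 64                              = 128 bytes of anti-prefetch pad
   */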
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 6/8] stack: add C11 atomic implementation
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (5 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
finer-grained memory ordering control.
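The difference is easiest to see side by side; the following push-side lines
are excerpted from the two headers in this series (generic first, C11
second), showing how the fence and the length update are expressed:

  /* rte_stack_lf_generic.h: full barrier, then an rte_atomic64 add */
  rte_smp_mb();
  ...
  rte_atomic64_add(&list->len, num);

  /* rte_stack_lf_c11.h: explicit acquire fence, then a release-ordered
   * add on the raw counter
   */
  __atomic_thread_fence(__ATOMIC_ACQUIRE);
  ...
  __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);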
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..a316e9af5
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Use an acquire fence to establish a synchronized-with
+ * relationship between the list->head load and store-release
+ * operations (as part of the rte_atomic128_cmp_exchange()).
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memory model to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memory model to ensure the reads of the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 7/8] test/stack: add lock-free stack tests
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (6 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 8392e4e4d..f199136aa 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -97,7 +97,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -113,7 +113,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -177,18 +177,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -201,7 +201,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -209,7 +209,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -328,7 +328,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -349,7 +349,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -384,9 +384,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -395,16 +395,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index 484370d30..e09d5384c 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -297,14 +297,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -340,4 +340,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v8 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (7 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-03 20:50 ` Gage Eads
2019-04-03 20:50 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 20:50 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked-list based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
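For applications that want to opt in, the usual mempool ops-selection flow applies. A hedged sketch (pool name and sizes are illustrative; the "lf_stack" ops name comes from the diff below):

#include <rte_lcore.h>
#include <rte_mempool.h>

static struct rte_mempool *
create_lf_stack_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("lf_pool", 4096, 2048, 256, 0,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Select the lock-free stack handler before populating. */
	if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}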
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6a04c3c33..fa8afdb3a 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -581,6 +581,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test Gage Eads
2019-04-03 20:50 ` Gage Eads
@ 2019-04-03 22:41 ` Thomas Monjalon
2019-04-03 22:41 ` Thomas Monjalon
2019-04-03 23:05 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-03 22:41 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
03/04/2019 22:50, Gage Eads:
> stack_autotest performs positive and negative testing of the stack API, and
> exercises the push and pop datapath functions with all available lcores.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> ---
> MAINTAINERS | 1 +
> app/test/Makefile | 2 +
> app/test/meson.build | 3 +
> app/test/test_stack.c | 410 ++++++++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 416 insertions(+)
> create mode 100644 app/test/test_stack.c
Another error with Arm:
app/test/test_stack.c:275:2: error: unknown type name 'rte_atomic64_t'
I think you should install an Arm toolchain and run test-meson-builds.sh
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test
2019-04-03 22:41 ` Thomas Monjalon
2019-04-03 22:41 ` Thomas Monjalon
@ 2019-04-03 23:05 ` Eads, Gage
2019-04-03 23:05 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-03 23:05 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, April 3, 2019 5:41 PM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; olivier.matz@6wind.com; arybchenko@solarflare.com;
> Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com
> Subject: Re: [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test
>
> 03/04/2019 22:50, Gage Eads:
> > stack_autotest performs positive and negative testing of the stack
> > API, and exercises the push and pop datapath functions with all available
> lcores.
> >
> > Signed-off-by: Gage Eads <gage.eads@intel.com>
> > Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
> > ---
> > MAINTAINERS | 1 +
> > app/test/Makefile | 2 +
> > app/test/meson.build | 3 +
> > app/test/test_stack.c | 410
> > ++++++++++++++++++++++++++++++++++++++++++++++++++
> > 4 files changed, 416 insertions(+)
> > create mode 100644 app/test/test_stack.c
>
> Another error with Arm:
>
> app/test/test_stack.c:275:2: error: unknown type name 'rte_atomic64_t'
>
> I think you should install an Arm toolchain and run test-meson-builds.sh
>
I should've done that a while ago -- it was pretty painless on Ubuntu. Looks like rte_atomic.h is the last missing header.
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 0/8] Add stack library and new mempool handler
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
` (8 preceding siblings ...)
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
` (9 more replies)
9 siblings, 10 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for usages where the rte
ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-April/129014.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v9:
- Add rte_atomic.h includes to rte_stack.h, test_stack.c, and test_stack_perf.c
to fix ARM builds
v8:
- Add rte_debug.h include to rte_stack.h for RTE_ASSERT()
v7:
- Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
- Add rte_compat.h include to rte_stack.h for __rte_experimental
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 424 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 358 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 262 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 121 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2137 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
2019-04-03 23:20 ` Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-04 13:30 ` Thomas Monjalon
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
9 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 ++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 209 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 ++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
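A short usage sketch of the API introduced here (illustrative names and sizes, error handling trimmed):

#include <stdint.h>
#include <rte_lcore.h>
#include <rte_stack.h>

static int
stack_roundtrip(void)
{
	void *objs[8], *popped[8];
	struct rte_stack *s;
	unsigned int i;

	s = rte_stack_create("example", 64, rte_socket_id(), 0);
	if (s == NULL)
		return -1;

	for (i = 0; i < 8; i++)
		objs[i] = (void *)(uintptr_t)i;

	/* Both calls return either 0 or n; there is no partial push/pop. */
	if (rte_stack_push(s, objs, 8) != 8 ||
	    rte_stack_pop(s, popped, 8) != 8) {
		rte_stack_free(s);
		return -1;
	}

	rte_stack_free(s);
	return 0;
}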
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+ functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Look up a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..42d042715
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-04 13:30 ` Thomas Monjalon
1 sibling, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
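For illustration, a minimal usage sketch of the new API (error handling
trimmed; assumes an initialized EAL, and the stack name and sizes are
arbitrary):

	#include <rte_stack.h>

	static void
	stack_example(void)
	{
		void *objs[8], *popped[8];
		unsigned int i, n;
		struct rte_stack *s;

		/* Bounded stack of 1024 pointers, any NUMA socket, no flags. */
		s = rte_stack_create("example", 1024, SOCKET_ID_ANY, 0);
		if (s == NULL)
			return;

		for (i = 0; i < 8; i++)
			objs[i] = (void *)(uintptr_t)i;

		/* Bulk push/pop return either 0 or the full requested count. */
		n = rte_stack_push(s, objs, 8);
		n = rte_stack_pop(s, popped, n);

		/* LIFO order: popped[0] == objs[7], ..., popped[n - 1] == objs[0]. */
		(void)n;

		rte_stack_free(s);
	}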
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 ++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 209 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 ++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size and using a
+ user-specified socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..42d042715
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
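For reference, with the default RTE_MEMZONE_NAMESIZE of 32 (an assumption;
the constant is build-dependent), the RTE_STACK_NAMESIZE limit defined in
the header above works out to:

	RTE_STACK_NAMESIZE = RTE_MEMZONE_NAMESIZE - sizeof("STK_") + 1
	                   = 32 - 5 + 1 = 28

i.e. 27 usable characters plus the terminating '\0', since the "STK_"
prefix consumes the rest of the memzone name budget in rte_stack_create().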
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
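As a worked example of the sizing above: a 4096-entry stack on a 64-bit
target with a 64-byte RTE_CACHE_LINE_SIZE (an assumption; the constant is
platform-defined) requires

	sizeof(struct rte_stack)             /* cache-line-aligned header     */
	+ RTE_CACHE_LINE_ROUNDUP(4096 * 8)   /* 32768 bytes of pointer slots  */
	+ 2 * RTE_CACHE_LINE_SIZE            /* 128 bytes of prefetch padding */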
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 2/8] mempool/stack: convert mempool to use rte stack
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
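For context, applications normally reach this handler through the mempool
ops mechanism rather than calling rte_stack directly. A sketch (the pool
name and sizes are illustrative):

	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("stack_pool", 4096, 2048,
				      0, 0, SOCKET_ID_ANY, 0);
	if (mp != NULL &&
	    rte_mempool_set_ops_byname(mp, "stack", NULL) == 0)
		rte_mempool_populate_default(mp);

With the "stack" ops selected, the pool's enqueue/dequeue paths resolve to
rte_stack_push()/rte_stack_pop() as shown in the diff below.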
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index f30fc4aa6..e09e7d93f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,7 +303,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -431,6 +430,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-04 7:34 ` Thomas Monjalon
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
9 siblings, 2 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
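For reference, one typical way to run the new test from a build directory
(the exact binary path depends on the build system used):

	$ ./app/test
	RTE>> stack_autotest

or non-interactively, e.g. echo stack_autotest | ./app/test.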
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 411 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 417 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index e09e7d93f..e4e6d1b15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..6be2f876b
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library proper handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Atomically reserve between 0 and MAX_BULK - 1 entries (bounded by
+ * the free slots) via CAS on the shared size counter, so concurrent
+ * pushes can never exceed STACK_SIZE; then push and pop that many. */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-04 7:34 ` Thomas Monjalon
1 sibling, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 411 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 417 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index e09e7d93f..e4e6d1b15 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..6be2f876b
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK - 1, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
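+ /* num entries are now reserved, so the push below cannot fail for
+ * lack of space, even with other threads pushing concurrently.
+ */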
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 4/8] test/stack: add stack perf test
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
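Each case reports the average cycle cost per object with the same
rdtsc-based skeleton; a condensed sketch of the measurement loop added
below (names shortened, setup and error handling elided):

    uint64_t start = rte_rdtsc();

    for (i = 0; i < iterations; i++) {
        rte_stack_push(s, objs, size);
        rte_stack_pop(s, objs, size);
    }

    uint64_t end = rte_rdtsc();
    double cycles_per_obj = (double)(end - start) / (iterations * size);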
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 345 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 348 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..a44fbb73e
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * performance between hyperthread siblings, between cores on the same
+ * socket, and between cores on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
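+ /* Launch n - 1 worker lcores; the master lcore (below) is the nth. */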
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
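A condensed sketch of the push retry loop described above, using the types
and the 128-bit CAS this patch introduces (memory fences elided; see
__rte_stack_lf_push_elems() in the diff below for the full version):

    struct rte_stack_lf_head old_head, new_head;

    old_head = list->head;
    do {
        new_head.top = first;            /* list head becomes stack top */
        new_head.cnt = old_head.cnt + 1; /* counter guards against ABA */
        last->next = old_head.top;  /* splice old stack below the list */
        /* old_head is reloaded with the current head on failure */
    } while (!rte_atomic128_cmp_exchange((rte_int128_t *)&list->head,
                                         (rte_int128_t *)&old_head,
                                         (rte_int128_t *)&new_head, 1,
                                         __ATOMIC_RELEASE,
                                         __ATOMIC_RELAXED));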
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 63 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 447 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These
functions are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+---------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 42d042715..fe048f071 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -20,6 +20,7 @@
extern "C" {
#endif
+#include <rte_atomic.h>
#include <rte_compat.h>
#include <rte_debug.h>
#include <rte_errno.h>
@@ -32,6 +33,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -51,10 +81,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -77,7 +118,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -101,7 +145,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -120,7 +167,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -160,7 +210,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
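A minimal usage sketch of the new flag (the stack name, size, and burst
count here are illustrative; error handling is abbreviated):

    #include <rte_stack.h>

    struct rte_stack *s;
    void *objs[32]; /* would normally hold application pointers */

    /* Create a lock-free stack holding up to 1024 pointers. */
    s = rte_stack_create("lf_example", 1024, rte_socket_id(),
                         RTE_STACK_F_LF);
    if (s == NULL)
        rte_panic("stack creation failed: %d\n", rte_errno);

    /* The unmodified push/pop API dispatches to the lock-free variants. */
    if (rte_stack_push(s, objs, 32) != 32)
        printf("stack full\n");
    if (rte_stack_pop(s, objs, 32) != 32)
        printf("stack empty\n");

    rte_stack_free(s);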
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
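+ /* All elements begin on the free list; a push moves elements to the
+ * used list, and a pop returns them to the free list.
+ */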
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
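+ /* len is updated only after the elements are visible in the list, so
+ * __rte_stack_lf_count() may briefly under-report the size, but will
+ * never over-report it.
+ */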
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
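+ /* num elements are now reserved for this thread; the CAS loop below
+ * may retry, but is guaranteed to eventually pop them.
+ */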
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
0 siblings, 0 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
This algorithm stack uses a 128-bit compare-and-swap instruction, which
atomically updates the stack top pointer and a modification counter, to
protect against the ABA problem.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 63 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 447 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These function
are multi-threading safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+------------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm stack uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrect change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 42d042715..fe048f071 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -20,6 +20,7 @@
extern "C" {
#endif
+#include <rte_atomic.h>
#include <rte_compat.h>
#include <rte_debug.h>
#include <rte_errno.h>
@@ -32,6 +33,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -51,10 +81,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * supported only on the x86_64 platform.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -77,7 +118,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -101,7 +145,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -120,7 +167,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -160,7 +210,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
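For illustration, here is a minimal usage sketch of the new flag. It is not
part of the patch: the function name, stack name, and depth are arbitrary, and
error handling is reduced to the essentials.

#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_stack.h>

/* Create a lock-free stack, push one pointer, and pop it back. */
static int
lf_stack_example(void *obj)
{
        struct rte_stack *s;
        void *popped;

        s = rte_stack_create("example_lf", 1024, rte_socket_id(),
                             RTE_STACK_F_LF);
        if (s == NULL)
                return -rte_errno;

        /* Both calls return the number of objects actually transferred. */
        if (rte_stack_push(s, &obj, 1) != 1 ||
            rte_stack_pop(s, &popped, 1) != 1) {
                rte_stack_free(s);
                return -1;
        }

        rte_stack_free(s);
        return 0;
}

On non-x86_64 platforms, the same call with RTE_STACK_F_LF fails and returns
NULL, per the check added to rte_stack_create() above.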
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 6/8] stack: add C11 atomic implementation
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
finer-grained memory ordering control.
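To make the difference concrete, here is an illustrative sketch, not part of
the patch, of the approximate-length read under each model; count_generic()
and count_c11() are invented names, and "len" stands in for stack_lf->used.len:

#include <rte_atomic.h>

/* Generic version: a plain atomic read; the push/pop paths order it
 * with rte_smp_mb() full barriers.
 */
static unsigned int
count_generic(rte_atomic64_t *len)
{
        return (unsigned int)rte_atomic64_read(len);
}

/* C11 version: the builtin states exactly the ordering required, and
 * a relaxed load suffices for an approximate count.
 */
static unsigned int
count_c11(rte_atomic64_t *len)
{
        return (unsigned int)__atomic_load_n(&len->cnt, __ATOMIC_RELAXED);
}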
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..a316e9af5
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Use an acquire fence to establish a synchronized-with
+ * relationship between the list->head load and store-release
+ * operations (as part of the rte_atomic128_cmp_exchange()).
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 7/8] test/stack: add lock-free stack tests
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
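The refactoring follows a simple pattern; a condensed, hypothetical sketch
(the function names and sizes below are illustrative only):

#include <stdint.h>
#include <rte_lcore.h>
#include <rte_stack.h>

/* The shared test body takes the creation flags as a parameter... */
static int
run_stack_variant(uint32_t flags)
{
        struct rte_stack *s;

        s = rte_stack_create("sketch", 64, rte_socket_id(), flags);
        if (s == NULL)
                return -1;

        /* ... push/pop exercises common to both variants ... */

        rte_stack_free(s);
        return 0;
}

/* ...and each registered test is a thin wrapper that picks the flags. */
static int run_std_variant(void) { return run_stack_variant(0); }
static int run_lf_variant(void) { return run_stack_variant(RTE_STACK_F_LF); }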
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 6be2f876b..e972a61a7 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -98,7 +98,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -114,7 +114,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -178,18 +178,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -202,7 +202,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -210,7 +210,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -329,7 +329,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -350,7 +350,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -385,9 +385,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -396,16 +396,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index a44fbb73e..ba27fbf70 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -299,14 +299,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -342,4 +342,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v9 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-03 23:20 ` Gage Eads
2019-04-03 23:20 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-03 23:20 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked-list-based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*, however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
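For context, an application opts into the new handler by name when assembling
its mempool. A hypothetical sketch (the pool name, object count, and object
size are arbitrary):

#include <rte_lcore.h>
#include <rte_mempool.h>

/* Create an empty mempool, attach the lock-free stack ops registered
 * by this patch, then populate the pool.
 */
static struct rte_mempool *
create_lf_stack_pool(void)
{
        struct rte_mempool *mp;

        mp = rte_mempool_create_empty("example_pool", 4096, 2048, 0, 0,
                                      rte_socket_id(), 0);
        if (mp == NULL)
                return NULL;

        if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
            rte_mempool_populate_default(mp) < 0) {
                rte_mempool_free(mp);
                return NULL;
        }

        return mp;
}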
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6a04c3c33..fa8afdb3a 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -581,6 +581,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test Gage Eads
2019-04-03 23:20 ` Gage Eads
@ 2019-04-04 7:34 ` Thomas Monjalon
2019-04-04 7:34 ` Thomas Monjalon
1 sibling, 1 reply; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-04 7:34 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
04/04/2019 01:20, Gage Eads:
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
> F: lib/librte_stack/
> F: doc/guides/prog_guide/stack_lib.rst
> F: drivers/mempool/stack/
> +F: test/test/*stack*
Should be app/test/test_stack*
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 0/8] Add stack library and new mempool handler
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
` (8 preceding siblings ...)
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
` (9 more replies)
9 siblings, 10 replies; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This patchset introduces a stack library, supporting both lock-based and
lock-free stacks, and a lock-free stack mempool handler.
The lock-based stack code is derived from the existing stack mempool handler,
and that handler is refactored to use the stack library.
The lock-free stack mempool handler is intended for use cases where the
rte_ring's "non-preemptive" constraint is not acceptable; for example, if the
application uses a mixture of pinned high-priority threads and multiplexed
low-priority threads that share a mempool.
Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
so it is currently limited to the x86_64 platform.
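For reference, the heart of that dependency is swapping a {pointer, counter}
pair as one 128-bit unit. Below is a condensed sketch of the push-side head
update from patch 5/8; push_head() is an illustrative name, and the acquire
fence and length bookkeeping of the real implementation are omitted:

#include <rte_atomic.h>
#include <rte_stack.h>

/* x86_64 only, matching the patches: link the new chain to the old
 * top and install it with a single 128-bit CAS. The cnt field is a
 * modification counter that prevents a stale head (the ABA problem)
 * from being re-installed; old_head is reloaded on CAS failure.
 */
static void
push_head(struct rte_stack_lf_list *list,
          struct rte_stack_lf_elem *first,
          struct rte_stack_lf_elem *last)
{
        struct rte_stack_lf_head old_head = list->head;
        struct rte_stack_lf_head new_head;

        do {
                new_head.top = first;
                new_head.cnt = old_head.cnt + 1;
                last->next = old_head.top;
        } while (rte_atomic128_cmp_exchange((rte_int128_t *)&list->head,
                                            (rte_int128_t *)&old_head,
                                            (rte_int128_t *)&new_head,
                                            1, __ATOMIC_RELEASE,
                                            __ATOMIC_RELAXED) == 0);
}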
This patchset is the successor to a patchset containing only the new mempool
handler[2].
[1] http://mails.dpdk.org/archives/dev/2019-April/129014.html
[2] http://mails.dpdk.org/archives/dev/2019-January/123555.html
---
v10:
- Correct test/test/ -> app/test/ in MAINTAINERS
v9:
- Add rte_atomic.h includes to rte_stack.h, test_stack.c, and test_stack_perf.c
to fix ARM builds
v8:
- Add rte_debug.h include to rte_stack.h for RTE_ASSERT()
v7:
- Add rte_branch_prediction.h include to rte_stack_std.h for unlikely()
- Add rte_compat.h include to rte_stack.h for __rte_experimental
v6:
- Add load-acquire fence to the lock-free push function
- Correct generic implementation's pop_elems 128b CAS success and failure
memorder to match those in the C11 implementation.
v5:
- Add comment to explain padding in *_get_memsize() functions
- Prefix internal functions with '__'
- Use RTE_ASSERT for performance critical run-time checks
- Don't use __atomic_load in the C11 pop_elems function, and put an acquire
thread fence at the start of the 2nd do-while loop
- Change pop_elems 128b CAS success memorder to RELEASE and failure memorder to
RELAXED
- Change compile-time assertion to run for all 64-bit architectures
- Reorganize the code with standard and lock-free .c and .h files
v4:
- Fix 32-bit build error in test_stack.c by using %zu format specifier for
size_t
- Rebase onto master
v3:
- Rebase patchset onto master (test/test/ -> app/test/)
- Fix rte_stack_std_push() segfault introduced in v2
v2:
- Reworked structure and function naming to use rte_stack_{std, lf}_...
- Updated to the latest rte_atomic128_cmp_exchange() interface.
- Rename STACK_F_NB -> RTE_STACK_F_LF.
- Remove rte_rmb() and rte_wmb() from the generic push and pop implementations.
These are obviated by rte_atomic128_cmp_exchange()'s two memorder arguments.
- Edit stack_lib.rst text to 80 chars/line.
- Fix rte_stack.h doxygen formatting.
- Allocate popped_objs array from the heap
- Fix stack_thread_push_pop bug ("&t->sz" -> "t->sz")
- Remove unnecessary NULL check from test_stack_basic
- Properly terminate the name string in test_stack_name_length
- Add an empty array of struct rte_nb_lifo_elem elements
- In rte_nb_lifo_push(), retrieve the last element from __nb_lifo_pop()
- Split C11 implementation into a separate patchset
Gage Eads (8):
stack: introduce rte stack library
mempool/stack: convert mempool to use rte stack
test/stack: add stack test
test/stack: add stack perf test
stack: add lock-free stack implementation
stack: add C11 atomic implementation
test/stack: add lock-free stack tests
mempool/stack: add lock-free stack mempool handler
MAINTAINERS | 9 +-
app/test/Makefile | 3 +
app/test/meson.build | 7 +
app/test/test_stack.c | 424 ++++++++++++++++++++++++
app/test/test_stack_perf.c | 358 ++++++++++++++++++++
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/env_abstraction_layer.rst | 10 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 83 +++++
doc/guides/rel_notes/release_19_05.rst | 13 +
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 115 +++----
lib/Makefile | 2 +
lib/librte_stack/Makefile | 29 ++
lib/librte_stack/meson.build | 12 +
lib/librte_stack/rte_stack.c | 196 +++++++++++
lib/librte_stack/rte_stack.h | 262 +++++++++++++++
lib/librte_stack/rte_stack_lf.c | 31 ++
lib/librte_stack/rte_stack_lf.h | 106 ++++++
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 +++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++
lib/librte_stack/rte_stack_std.c | 26 ++
lib/librte_stack/rte_stack_std.h | 121 +++++++
lib/librte_stack/rte_stack_version.map | 9 +
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
30 files changed, 2137 insertions(+), 72 deletions(-)
create mode 100644 app/test/test_stack.c
create mode 100644 app/test/test_stack_perf.c
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 1/8] stack: introduce rte stack library
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
` (7 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The rte_stack library provides an API for configuration and use of a
bounded stack of pointers. Push and pop operations are MT-safe, allowing
concurrent access, and the interface supports pushing and popping multiple
pointers at a time.
The library's interface is modeled after another DPDK data structure,
rte_ring, and its lock-based implementation is derived from the stack
mempool handler. An upcoming commit will migrate the stack mempool handler
to rte_stack.
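As a quick orientation, a minimal usage sketch of the API follows; the stack
name, sizes, and error handling here are illustrative assumptions, not part
of this patch:

#include <errno.h>

#include <rte_errno.h>
#include <rte_stack.h>

/* Sketch only: create a bounded stack, push and pop a burst of
 * pointers, then free it. Push and pop are all-or-nothing: each
 * returns either 0 or n.
 */
static int
stack_example(void)
{
	void *objs[8] = { NULL }; /* would hold application pointers */
	void *popped[8];
	struct rte_stack *s;

	s = rte_stack_create("example", 1024, SOCKET_ID_ANY, 0);
	if (s == NULL)
		return -rte_errno;

	if (rte_stack_push(s, objs, 8) != 8)
		return -ENOBUFS;

	if (rte_stack_pop(s, popped, 8) != 8)
		return -ENOENT;

	rte_stack_free(s);
	return 0;
}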
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
MAINTAINERS | 6 +
config/common_base | 5 +
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/index.rst | 1 +
doc/guides/prog_guide/stack_lib.rst | 28 +++++
doc/guides/rel_notes/release_19_05.rst | 5 +
lib/Makefile | 2 +
lib/librte_stack/Makefile | 25 ++++
lib/librte_stack/meson.build | 8 ++
lib/librte_stack/rte_stack.c | 182 ++++++++++++++++++++++++++++
lib/librte_stack/rte_stack.h | 209 +++++++++++++++++++++++++++++++++
lib/librte_stack/rte_stack_pvt.h | 34 ++++++
lib/librte_stack/rte_stack_std.c | 26 ++++
lib/librte_stack/rte_stack_std.h | 121 +++++++++++++++++++
lib/librte_stack/rte_stack_version.map | 9 ++
lib/meson.build | 2 +-
mk/rte.app.mk | 1 +
18 files changed, 665 insertions(+), 1 deletion(-)
create mode 100644 doc/guides/prog_guide/stack_lib.rst
create mode 100644 lib/librte_stack/Makefile
create mode 100644 lib/librte_stack/meson.build
create mode 100644 lib/librte_stack/rte_stack.c
create mode 100644 lib/librte_stack/rte_stack.h
create mode 100644 lib/librte_stack/rte_stack_pvt.h
create mode 100644 lib/librte_stack/rte_stack_std.c
create mode 100644 lib/librte_stack/rte_stack_std.h
create mode 100644 lib/librte_stack/rte_stack_version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 71ac8cd4b..f30fc4aa6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -426,6 +426,12 @@ F: drivers/raw/skeleton_rawdev/
F: app/test/test_rawdev.c
F: doc/guides/prog_guide/rawdev.rst
+Stack API - EXPERIMENTAL
+M: Gage Eads <gage.eads@intel.com>
+M: Olivier Matz <olivier.matz@6wind.com>
+F: lib/librte_stack/
+F: doc/guides/prog_guide/stack_lib.rst
+
Memory Pool Drivers
-------------------
diff --git a/config/common_base b/config/common_base
index 6292bc4af..fc8dba69d 100644
--- a/config/common_base
+++ b/config/common_base
@@ -994,3 +994,8 @@ CONFIG_RTE_APP_CRYPTO_PERF=y
# Compile the eventdev application
#
CONFIG_RTE_APP_EVENTDEV=y
+
+#
+# Compile librte_stack
+#
+CONFIG_RTE_LIBRTE_STACK=y
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index aacc66bd8..de1e215dd 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -125,6 +125,7 @@ The public API headers are grouped by topics:
[mbuf] (@ref rte_mbuf.h),
[mbuf pool ops] (@ref rte_mbuf_pool_ops.h),
[ring] (@ref rte_ring.h),
+ [stack] (@ref rte_stack.h),
[tailq] (@ref rte_tailq.h),
[bitmap] (@ref rte_bitmap.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index a365e669b..7722fc3e9 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -55,6 +55,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/lib/librte_ring \
@TOPDIR@/lib/librte_sched \
@TOPDIR@/lib/librte_security \
+ @TOPDIR@/lib/librte_stack \
@TOPDIR@/lib/librte_table \
@TOPDIR@/lib/librte_telemetry \
@TOPDIR@/lib/librte_timer \
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index 6726b1e8d..f4f60862f 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -55,6 +55,7 @@ Programmer's Guide
metrics_lib
bpf_lib
ipsec_lib
+ stack_lib
source_org
dev_kit_build_system
dev_kit_root_make_help
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
new file mode 100644
index 000000000..25a8cc38a
--- /dev/null
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -0,0 +1,28 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+Stack Library
+=============
+
+DPDK's stack library provides an API for configuration and use of a bounded
+stack of pointers.
+
+The stack library provides the following basic operations:
+
+* Create a uniquely named stack of a user-specified size on a
+  user-specified NUMA socket.
+
+* Push and pop a burst of one or more stack objects (pointers). These
+  functions are multi-thread safe.
+
+* Free a previously created stack.
+
+* Lookup a pointer to a stack by its name.
+
+* Query a stack's current depth and number of free entries.
+
+Implementation
+~~~~~~~~~~~~~~
+
+The stack consists of a contiguous array of pointers, a current index, and a
+spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
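Taken together, the lookup and query operations listed above compose
naturally; a short sketch (the stack name is an illustrative assumption):

#include <stdio.h>

#include <rte_stack.h>

/* Sketch only: find a previously created stack by name and report its
 * occupancy. On lookup failure, rte_errno is ENOENT or EINVAL.
 */
static void
stack_query(void)
{
	struct rte_stack *s = rte_stack_lookup("example");

	if (s == NULL)
		return;

	printf("depth=%u free=%u\n",
	       rte_stack_count(s), rte_stack_free_count(s));
}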
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index bdad1ddbe..ebfbe36e5 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -121,6 +121,11 @@ New Features
Improved testpmd application performance on ARM platform. For ``macswap``
forwarding mode, NEON intrinsics were used to do swap to save CPU cycles.
+* **Added Stack API.**
+
+ Added a new stack API for configuration and use of a bounded stack of
+ pointers. The API provides MT-safe push and pop operations that can operate
+ on one or more pointers per operation.
Removed Items
-------------
diff --git a/lib/Makefile b/lib/Makefile
index a358f1c19..9f90e80ad 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -109,6 +109,8 @@ DIRS-$(CONFIG_RTE_LIBRTE_IPSEC) += librte_ipsec
DEPDIRS-librte_ipsec := librte_eal librte_mbuf librte_cryptodev librte_security
DIRS-$(CONFIG_RTE_LIBRTE_TELEMETRY) += librte_telemetry
DEPDIRS-librte_telemetry := librte_eal librte_metrics librte_ethdev
+DIRS-$(CONFIG_RTE_LIBRTE_STACK) += librte_stack
+DEPDIRS-librte_stack := librte_eal
ifeq ($(CONFIG_RTE_EXEC_ENV_LINUX),y)
DIRS-$(CONFIG_RTE_LIBRTE_KNI) += librte_kni
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
new file mode 100644
index 000000000..6db540073
--- /dev/null
+++ b/lib/librte_stack/Makefile
@@ -0,0 +1,25 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_stack.a
+
+CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+LDLIBS += -lrte_eal
+
+EXPORT_MAP := rte_stack_version.map
+
+LIBABIVER := 1
+
+# all source are stored in SRCS-y
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
+ rte_stack_std.c
+
+# install includes
+SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
+ rte_stack_std.h
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
new file mode 100644
index 000000000..d2e60ce9b
--- /dev/null
+++ b/lib/librte_stack/meson.build
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+allow_experimental_apis = true
+
+version = 1
+sources = files('rte_stack.c', 'rte_stack_std.c')
+headers = files('rte_stack.h', 'rte_stack_std.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
new file mode 100644
index 000000000..610014b6c
--- /dev/null
+++ b/lib/librte_stack/rte_stack.c
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+#include <rte_rwlock.h>
+#include <rte_tailq.h>
+
+#include "rte_stack.h"
+#include "rte_stack_pvt.h"
+
+int stack_logtype;
+
+TAILQ_HEAD(rte_stack_list, rte_tailq_entry);
+
+static struct rte_tailq_elem rte_stack_tailq = {
+ .name = RTE_TAILQ_STACK_NAME,
+};
+EAL_REGISTER_TAILQ(rte_stack_tailq)
+
+static void
+rte_stack_init(struct rte_stack *s)
+{
+ memset(s, 0, sizeof(*s));
+
+ rte_stack_std_init(s);
+}
+
+static ssize_t
+rte_stack_get_memsize(unsigned int count)
+{
+ return rte_stack_std_get_memsize(count);
+}
+
+struct rte_stack *
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags)
+{
+ char mz_name[RTE_MEMZONE_NAMESIZE];
+ struct rte_stack_list *stack_list;
+ const struct rte_memzone *mz;
+ struct rte_tailq_entry *te;
+ struct rte_stack *s;
+ unsigned int sz;
+ int ret;
+
+ RTE_SET_USED(flags);
+
+ sz = rte_stack_get_memsize(count);
+
+ ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
+ RTE_STACK_MZ_PREFIX, name);
+ if (ret < 0 || ret >= (int)sizeof(mz_name)) {
+ rte_errno = ENAMETOOLONG;
+ return NULL;
+ }
+
+ te = rte_zmalloc("STACK_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL) {
+ STACK_LOG_ERR("Cannot reserve memory for tailq\n");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ mz = rte_memzone_reserve_aligned(mz_name, sz, socket_id,
+ 0, __alignof__(*s));
+ if (mz == NULL) {
+ STACK_LOG_ERR("Cannot reserve stack memzone!\n");
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ rte_free(te);
+ return NULL;
+ }
+
+ s = mz->addr;
+
+ rte_stack_init(s);
+
+ /* Store the name for later lookups */
+ ret = snprintf(s->name, sizeof(s->name), "%s", name);
+ if (ret < 0 || ret >= (int)sizeof(s->name)) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_errno = ENAMETOOLONG;
+ rte_free(te);
+ rte_memzone_free(mz);
+ return NULL;
+ }
+
+ s->memzone = mz;
+ s->capacity = count;
+ s->flags = flags;
+
+ te->data = s;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ TAILQ_INSERT_TAIL(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ return s;
+}
+
+void
+rte_stack_free(struct rte_stack *s)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+
+ if (s == NULL)
+ return;
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+ rte_rwlock_write_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ /* find out tailq entry */
+ TAILQ_FOREACH(te, stack_list, next) {
+ if (te->data == s)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+ return;
+ }
+
+ TAILQ_REMOVE(stack_list, te, next);
+
+ rte_rwlock_write_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ rte_free(te);
+
+ rte_memzone_free(s->memzone);
+}
+
+struct rte_stack *
+rte_stack_lookup(const char *name)
+{
+ struct rte_stack_list *stack_list;
+ struct rte_tailq_entry *te;
+ struct rte_stack *r = NULL;
+
+ if (name == NULL) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ stack_list = RTE_TAILQ_CAST(rte_stack_tailq.head, rte_stack_list);
+
+ rte_rwlock_read_lock(RTE_EAL_TAILQ_RWLOCK);
+
+ TAILQ_FOREACH(te, stack_list, next) {
+ r = (struct rte_stack *) te->data;
+ if (strncmp(name, r->name, RTE_STACK_NAMESIZE) == 0)
+ break;
+ }
+
+ rte_rwlock_read_unlock(RTE_EAL_TAILQ_RWLOCK);
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return r;
+}
+
+RTE_INIT(librte_stack_init_log)
+{
+ stack_logtype = rte_log_register("lib.stack");
+ if (stack_logtype >= 0)
+ rte_log_set_level(stack_logtype, RTE_LOG_NOTICE);
+}
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
new file mode 100644
index 000000000..42d042715
--- /dev/null
+++ b/lib/librte_stack/rte_stack.h
@@ -0,0 +1,209 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+/**
+ * @file rte_stack.h
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * RTE Stack
+ *
+ * librte_stack provides an API for configuration and use of a bounded stack of
+ * pointers. Push and pop operations are MT-safe, allowing concurrent access,
+ * and the interface supports pushing and popping multiple pointers at a time.
+ */
+
+#ifndef _RTE_STACK_H_
+#define _RTE_STACK_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+#include <rte_debug.h>
+#include <rte_errno.h>
+#include <rte_memzone.h>
+#include <rte_spinlock.h>
+
+#define RTE_TAILQ_STACK_NAME "RTE_STACK"
+#define RTE_STACK_MZ_PREFIX "STK_"
+/** The maximum length of a stack name. */
+#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
+ sizeof(RTE_STACK_MZ_PREFIX) + 1)
+
+/* Structure containing the LIFO, its current length, and a lock for mutual
+ * exclusion.
+ */
+struct rte_stack_std {
+ rte_spinlock_t lock; /**< LIFO lock */
+ uint32_t len; /**< LIFO len */
+ void *objs[]; /**< LIFO pointer table */
+};
+
+/* The RTE stack structure contains the LIFO structure itself, plus metadata
+ * such as its name and memzone pointer.
+ */
+struct rte_stack {
+ /** Name of the stack. */
+ char name[RTE_STACK_NAMESIZE] __rte_cache_aligned;
+ /** Memzone containing the rte_stack structure. */
+ const struct rte_memzone *memzone;
+ uint32_t capacity; /**< Usable size of the stack. */
+ uint32_t flags; /**< Flags supplied at creation. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+} __rte_cache_aligned;
+
+#include "rte_stack_std.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_push(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ RTE_ASSERT(s != NULL);
+ RTE_ASSERT(obj_table != NULL);
+
+ return __rte_stack_std_pop(s, obj_table, n);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return __rte_stack_std_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Return the number of free entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of free entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+rte_stack_free_count(struct rte_stack *s)
+{
+ RTE_ASSERT(s != NULL);
+
+ return s->capacity - rte_stack_count(s);
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new stack named *name* in memory.
+ *
+ * This function uses ``rte_memzone_reserve()`` to allocate memory for a stack of
+ * size *count*. The behavior of the stack is controlled by the *flags*.
+ *
+ * @param name
+ * The name of the stack.
+ * @param count
+ * The size of the stack.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ * @param flags
+ * Reserved for future use.
+ * @return
+ * On success, the pointer to the new allocated stack. NULL on error with
+ * rte_errno set appropriately. Possible errno values include:
+ * - ENOSPC - the maximum number of memzones has already been allocated
+ * - EEXIST - a stack with the same name already exists
+ * - ENOMEM - insufficient memory to create the stack
+ * - ENAMETOOLONG - name size exceeds RTE_STACK_NAMESIZE
+ */
+struct rte_stack *__rte_experimental
+rte_stack_create(const char *name, unsigned int count, int socket_id,
+ uint32_t flags);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free all memory used by the stack.
+ *
+ * @param s
+ * Stack to free
+ */
+void __rte_experimental
+rte_stack_free(struct rte_stack *s);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Lookup a stack by its name.
+ *
+ * @param name
+ * The name of the stack.
+ * @return
+ * The pointer to the stack matching the name, or NULL if not found,
+ * with rte_errno set appropriately. Possible rte_errno values include:
+ * - ENOENT - Stack with name *name* not found.
+ * - EINVAL - *name* pointer is NULL.
+ */
+struct rte_stack * __rte_experimental
+rte_stack_lookup(const char *name);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_H_ */
diff --git a/lib/librte_stack/rte_stack_pvt.h b/lib/librte_stack/rte_stack_pvt.h
new file mode 100644
index 000000000..4a6a7bdb3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_pvt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_PVT_H_
+#define _RTE_STACK_PVT_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_log.h>
+
+extern int stack_logtype;
+
+#define STACK_LOG(level, fmt, args...) \
+ rte_log(RTE_LOG_ ##level, stack_logtype, "%s(): "fmt "\n", \
+ __func__, ##args)
+
+#define STACK_LOG_ERR(fmt, args...) \
+ STACK_LOG(ERR, fmt, ## args)
+
+#define STACK_LOG_WARN(fmt, args...) \
+ STACK_LOG(WARNING, fmt, ## args)
+
+#define STACK_LOG_INFO(fmt, args...) \
+ STACK_LOG(INFO, fmt, ## args)
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_STACK_PVT_H_ */
diff --git a/lib/librte_stack/rte_stack_std.c b/lib/librte_stack/rte_stack_std.c
new file mode 100644
index 000000000..0a310d7c6
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.c
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_std_init(struct rte_stack *s)
+{
+ rte_spinlock_init(&s->stack_std.lock);
+}
+
+ssize_t
+rte_stack_std_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(void *));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
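To make the sizing concrete, a worked example in comment form; the 64-byte
cache line and 8-byte pointer sizes are assumptions, not guarantees of this
patch:

/* For count = 100 on a system with 64-byte cache lines:
 *
 *   sz  = sizeof(struct rte_stack)          cache-aligned header
 *   sz += RTE_CACHE_LINE_ROUNDUP(100 * 8)   800 rounds up to 832
 *   sz += 2 * RTE_CACHE_LINE_SIZE           128 trailing guard bytes
 *
 * The two trailing cache lines keep a next-line hardware prefetcher,
 * triggered by accesses to an adjacent memzone, from pulling this
 * stack's last line into another core's cache and false sharing it.
 */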
diff --git a/lib/librte_stack/rte_stack_std.h b/lib/librte_stack/rte_stack_std.h
new file mode 100644
index 000000000..5dc940932
--- /dev/null
+++ b/lib/librte_stack/rte_stack_std.h
@@ -0,0 +1,121 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_STD_H_
+#define _RTE_STACK_STD_H_
+
+#include <rte_branch_prediction.h>
+
+/**
+ * @internal Push several objects on the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects pushed (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_push(struct rte_stack *s, void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+ cache_objs = &stack->objs[stack->len];
+
+ /* Is there sufficient space in the stack? */
+ if ((stack->len + n) > s->capacity) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ /* Add elements back into the cache */
+ for (index = 0; index < n; ++index, obj_table++)
+ cache_objs[index] = *obj_table;
+
+ stack->len += n;
+
+ rte_spinlock_unlock(&stack->lock);
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * Actual number of objects popped (either 0 or *n*).
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_std *stack = &s->stack_std;
+ unsigned int index, len;
+ void **cache_objs;
+
+ rte_spinlock_lock(&stack->lock);
+
+ if (unlikely(n > stack->len)) {
+ rte_spinlock_unlock(&stack->lock);
+ return 0;
+ }
+
+ cache_objs = stack->objs;
+
+ for (index = 0, len = stack->len - 1; index < n;
+ ++index, len--, obj_table++)
+ *obj_table = cache_objs[len];
+
+ stack->len -= n;
+ rte_spinlock_unlock(&stack->lock);
+
+ return n;
+}
+
+/**
+ * @internal Return the number of used entries in a stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @return
+ * The number of used entries in the stack.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_std_count(struct rte_stack *s)
+{
+ return (unsigned int)s->stack_std.len;
+}
+
+/**
+ * @internal Initialize a standard stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ */
+void
+rte_stack_std_init(struct rte_stack *s);
+
+/**
+ * @internal Return the memory required for a standard stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a standard stack.
+ */
+ssize_t
+rte_stack_std_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_STD_H_ */
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
new file mode 100644
index 000000000..6662679c3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_version.map
@@ -0,0 +1,9 @@
+EXPERIMENTAL {
+ global:
+
+ rte_stack_create;
+ rte_stack_free;
+ rte_stack_lookup;
+
+ local: *;
+};
diff --git a/lib/meson.build b/lib/meson.build
index c3289f885..595314d7d 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -22,7 +22,7 @@ libraries = [
'gro', 'gso', 'ip_frag', 'jobstats',
'kni', 'latencystats', 'lpm', 'member',
'power', 'pdump', 'rawdev',
- 'reorder', 'sched', 'security', 'vhost',
+ 'reorder', 'sched', 'security', 'stack', 'vhost',
#ipsec lib depends on crypto and security
'ipsec',
# add pkt framework libs which use other libs from above
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 262132fc6..7e033e78c 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
_LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
_LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
_LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
_LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
_LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 2/8] mempool/stack: convert mempool to use rte stack
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 1/8] stack: introduce rte stack library Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 3/8] test/stack: add stack test Gage Eads
` (6 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
The new rte_stack library is derived from the mempool handler, so this
commit removes duplicated code and simplifies the handler by migrating it
to this new API.
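For context, an application opts into this handler by name before the pool
is populated; a sketch (the pool parameters are illustrative assumptions):

#include <rte_mempool.h>

/* Sketch only: build a mempool backed by the "stack" ops. The ops name
 * must be set after rte_mempool_create_empty() and before populating.
 */
static struct rte_mempool *
make_stack_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("example_pool", 4096, 2048,
				      0, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}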
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
MAINTAINERS | 2 +-
drivers/mempool/stack/Makefile | 3 +-
drivers/mempool/stack/meson.build | 6 +-
drivers/mempool/stack/rte_mempool_stack.c | 93 +++++++++----------------------
4 files changed, 33 insertions(+), 71 deletions(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index f30fc4aa6..e09e7d93f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -303,7 +303,6 @@ M: Andrew Rybchenko <arybchenko@solarflare.com>
F: lib/librte_mempool/
F: drivers/mempool/Makefile
F: drivers/mempool/ring/
-F: drivers/mempool/stack/
F: doc/guides/prog_guide/mempool_lib.rst
F: app/test/test_mempool*
F: app/test/test_func_reentrancy.c
@@ -431,6 +430,7 @@ M: Gage Eads <gage.eads@intel.com>
M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
+F: drivers/mempool/stack/
Memory Pool Drivers
diff --git a/drivers/mempool/stack/Makefile b/drivers/mempool/stack/Makefile
index 0444aedad..1681a62bc 100644
--- a/drivers/mempool/stack/Makefile
+++ b/drivers/mempool/stack/Makefile
@@ -10,10 +10,11 @@ LIB = librte_mempool_stack.a
CFLAGS += -O3
CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -DALLOW_EXPERIMENTAL_API
# Headers
CFLAGS += -I$(RTE_SDK)/lib/librte_mempool
-LDLIBS += -lrte_eal -lrte_mempool -lrte_ring
+LDLIBS += -lrte_eal -lrte_mempool -lrte_stack
EXPORT_MAP := rte_mempool_stack_version.map
diff --git a/drivers/mempool/stack/meson.build b/drivers/mempool/stack/meson.build
index b75a3bb56..03e369a41 100644
--- a/drivers/mempool/stack/meson.build
+++ b/drivers/mempool/stack/meson.build
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2017 Intel Corporation
+# Copyright(c) 2017-2019 Intel Corporation
+
+allow_experimental_apis = true
sources = files('rte_mempool_stack.c')
+
+deps += ['stack']
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index e6d504af5..25ccdb9af 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -1,39 +1,29 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016 Intel Corporation
+ * Copyright(c) 2016-2019 Intel Corporation
*/
#include <stdio.h>
#include <rte_mempool.h>
-#include <rte_malloc.h>
-
-struct rte_mempool_stack {
- rte_spinlock_t sl;
-
- uint32_t size;
- uint32_t len;
- void *objs[];
-};
+#include <rte_stack.h>
static int
stack_alloc(struct rte_mempool *mp)
{
- struct rte_mempool_stack *s;
- unsigned n = mp->size;
- int size = sizeof(*s) + (n+16)*sizeof(void *);
-
- /* Allocate our local memory structure */
- s = rte_zmalloc_socket("mempool-stack",
- size,
- RTE_CACHE_LINE_SIZE,
- mp->socket_id);
- if (s == NULL) {
- RTE_LOG(ERR, MEMPOOL, "Cannot allocate stack!\n");
- return -ENOMEM;
+ char name[RTE_STACK_NAMESIZE];
+ struct rte_stack *s;
+ int ret;
+
+ ret = snprintf(name, sizeof(name),
+ RTE_MEMPOOL_MZ_FORMAT, mp->name);
+ if (ret < 0 || ret >= (int)sizeof(name)) {
+ rte_errno = ENAMETOOLONG;
+ return -rte_errno;
}
- rte_spinlock_init(&s->sl);
+ s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ if (s == NULL)
+ return -rte_errno;
- s->size = n;
mp->pool_data = s;
return 0;
@@ -41,69 +31,36 @@ stack_alloc(struct rte_mempool *mp)
static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index;
-
- rte_spinlock_lock(&s->sl);
- cache_objs = &s->objs[s->len];
-
- /* Is there sufficient space in the stack ? */
- if ((s->len + n) > s->size) {
- rte_spinlock_unlock(&s->sl);
- return -ENOBUFS;
- }
-
- /* Add elements back into the cache */
- for (index = 0; index < n; ++index, obj_table++)
- cache_objs[index] = *obj_table;
-
- s->len += n;
+ struct rte_stack *s = mp->pool_data;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_push(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static int
stack_dequeue(struct rte_mempool *mp, void **obj_table,
- unsigned n)
+ unsigned int n)
{
- struct rte_mempool_stack *s = mp->pool_data;
- void **cache_objs;
- unsigned index, len;
-
- rte_spinlock_lock(&s->sl);
-
- if (unlikely(n > s->len)) {
- rte_spinlock_unlock(&s->sl);
- return -ENOENT;
- }
+ struct rte_stack *s = mp->pool_data;
- cache_objs = s->objs;
-
- for (index = 0, len = s->len - 1; index < n;
- ++index, len--, obj_table++)
- *obj_table = cache_objs[len];
-
- s->len -= n;
- rte_spinlock_unlock(&s->sl);
- return 0;
+ return rte_stack_pop(s, obj_table, n) == 0 ? -ENOBUFS : 0;
}
static unsigned
stack_get_count(const struct rte_mempool *mp)
{
- struct rte_mempool_stack *s = mp->pool_data;
+ struct rte_stack *s = mp->pool_data;
- return s->len;
+ return rte_stack_count(s);
}
static void
stack_free(struct rte_mempool *mp)
{
- rte_free((void *)(mp->pool_data));
+ struct rte_stack *s = mp->pool_data;
+
+ rte_stack_free(s);
}
static struct rte_mempool_ops ops_stack = {
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 3/8] test/stack: add stack test
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (2 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 4/8] test/stack: add stack perf test Gage Eads
` (5 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_autotest performs positive and negative testing of the stack API, and
exercises the push and pop datapath functions with all available lcores.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
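For reference, the API surface under test reduces to a create/push/pop/free
cycle. A minimal sketch (assuming rte_eal_init() has already run; as the
test's checks assume, push and pop return the number of objects processed,
all-or-nothing, so 0 indicates failure):

#include <stdint.h>

#include <rte_lcore.h>
#include <rte_stack.h>

/* Hedged sketch of basic rte_stack usage; error handling trimmed. */
static int
stack_usage_sketch(void)
{
	void *objs[32];
	struct rte_stack *s;
	unsigned int i, n;

	/* A lock-based stack holding up to 4096 pointers. */
	s = rte_stack_create("sketch", 4096, rte_socket_id(), 0);
	if (s == NULL)
		return -1;

	for (i = 0; i < 32; i++)
		objs[i] = (void *)(uintptr_t)(i + 1);

	/* Push a burst of 32 pointers, then pop them back. */
	n = rte_stack_push(s, objs, 32);
	if (n == 32)
		n = rte_stack_pop(s, objs, 32);

	rte_stack_free(s);

	return n == 32 ? 0 : -1;
}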
---
MAINTAINERS | 1 +
app/test/Makefile | 2 +
app/test/meson.build | 3 +
app/test/test_stack.c | 411 ++++++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 417 insertions(+)
create mode 100644 app/test/test_stack.c
diff --git a/MAINTAINERS b/MAINTAINERS
index e09e7d93f..332ae98d7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -431,6 +431,7 @@ M: Olivier Matz <olivier.matz@6wind.com>
F: lib/librte_stack/
F: doc/guides/prog_guide/stack_lib.rst
F: drivers/mempool/stack/
+F: app/test/test_stack*
Memory Pool Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index d6aa28bad..e5bde81af 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -90,6 +90,8 @@ endif
SRCS-y += test_rwlock.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_racecond.c
diff --git a/app/test/meson.build b/app/test/meson.build
index c5e65fe66..56ea13f53 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -95,6 +95,7 @@ test_sources = files('commands.c',
'test_sched.c',
'test_service_cores.c',
'test_spinlock.c',
+ 'test_stack.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -133,6 +134,7 @@ test_deps = ['acl',
'port',
'reorder',
'ring',
+ 'stack',
'timer'
]
@@ -174,6 +176,7 @@ fast_parallel_test_names = [
'rwlock_autotest',
'sched_autotest',
'spinlock_autotest',
+ 'stack_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
new file mode 100644
index 000000000..6be2f876b
--- /dev/null
+++ b/app/test/test_stack.c
@@ -0,0 +1,411 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include <string.h>
+
+#include <rte_atomic.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_random.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_SIZE 4096
+#define MAX_BULK 32
+
+static int
+test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
+{
+ unsigned int i, ret;
+ void **popped_objs;
+
+ popped_objs = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (popped_objs == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_push(s, &obj_table[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] push returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i += bulk_sz) {
+ ret = rte_stack_pop(s, &popped_objs[i], bulk_sz);
+
+ if (ret != bulk_sz) {
+ printf("[%s():%u] pop returned: %d (expected %u)\n",
+ __func__, __LINE__, ret, bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_count(s) != STACK_SIZE - i - bulk_sz) {
+ printf("[%s():%u] stack count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_count(s),
+ STACK_SIZE - i - bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+
+ if (rte_stack_free_count(s) != i + bulk_sz) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s),
+ i + bulk_sz);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ for (i = 0; i < STACK_SIZE; i++) {
+ if (obj_table[i] != popped_objs[STACK_SIZE - i - 1]) {
+ printf("[%s():%u] Incorrect value %p at index 0x%x\n",
+ __func__, __LINE__,
+ popped_objs[STACK_SIZE - i - 1], i);
+ rte_free(popped_objs);
+ return -1;
+ }
+ }
+
+ rte_free(popped_objs);
+
+ return 0;
+}
+
+static int
+test_stack_basic(void)
+{
+ struct rte_stack *s = NULL;
+ void **obj_table = NULL;
+ int i, ret = -1;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ goto fail_test;
+ }
+
+ for (i = 0; i < STACK_SIZE; i++)
+ obj_table[i] = (void *)(uintptr_t)i;
+
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_lookup(__func__) != s) {
+ printf("[%s():%u] failed to lookup a stack\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ if (rte_stack_count(s) != 0) {
+ printf("[%s():%u] stack count: %u (expected 0)\n",
+ __func__, __LINE__, rte_stack_count(s));
+ goto fail_test;
+ }
+
+ if (rte_stack_free_count(s) != STACK_SIZE) {
+ printf("[%s():%u] stack free count: %u (expected %u)\n",
+ __func__, __LINE__, rte_stack_free_count(s), STACK_SIZE);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, 1);
+ if (ret) {
+ printf("[%s():%u] Single object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = test_stack_push_pop(s, obj_table, MAX_BULK);
+ if (ret) {
+ printf("[%s():%u] Bulk object push/pop failed\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_push(s, obj_table, 2 * STACK_SIZE);
+ if (ret != 0) {
+ printf("[%s():%u] Excess objects push succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = rte_stack_pop(s, obj_table, 1);
+ if (ret != 0) {
+ printf("[%s():%u] Empty stack pop succeeded\n",
+ __func__, __LINE__);
+ goto fail_test;
+ }
+
+ ret = 0;
+
+fail_test:
+ rte_stack_free(s);
+
+ rte_free(obj_table);
+
+ return ret;
+}
+
+static int
+test_stack_name_reuse(void)
+{
+ struct rte_stack *s[2];
+
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[0] == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s[1] != NULL) {
+ printf("[%s():%u] Failed to detect re-used name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ rte_stack_free(s[0]);
+
+ return 0;
+}
+
+static int
+test_stack_name_length(void)
+{
+ char name[RTE_STACK_NAMESIZE + 1];
+ struct rte_stack *s;
+
+ memset(name, 's', sizeof(name));
+ name[RTE_STACK_NAMESIZE] = '\0';
+
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ if (s != NULL) {
+ printf("[%s():%u] Failed to prevent long name\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENAMETOOLONG) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_lookup_null(void)
+{
+ struct rte_stack *s = rte_stack_lookup("stack_not_found");
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != ENOENT) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ s = rte_stack_lookup(NULL);
+
+ if (s != NULL) {
+ printf("[%s():%u] rte_stack found a non-existent stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ if (rte_errno != EINVAL) {
+ printf("[%s():%u] rte_stack failed to set correct errno on failed lookup\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ return 0;
+}
+
+static int
+test_free_null(void)
+{
+ /* Check whether the library properly handles a NULL pointer */
+ rte_stack_free(NULL);
+
+ return 0;
+}
+
+#define NUM_ITERS_PER_THREAD 100000
+
+struct test_args {
+ struct rte_stack *s;
+ rte_atomic64_t *sz;
+};
+
+static int
+stack_thread_push_pop(void *args)
+{
+ struct test_args *t = args;
+ void **obj_table;
+ int i;
+
+ obj_table = rte_calloc(NULL, STACK_SIZE, sizeof(void *), 0);
+ if (obj_table == NULL) {
+ printf("[%s():%u] failed to calloc %zu bytes\n",
+ __func__, __LINE__, STACK_SIZE * sizeof(void *));
+ return -1;
+ }
+
+ for (i = 0; i < NUM_ITERS_PER_THREAD; i++) {
+ unsigned int success, num;
+
+ /* Reserve up to min(MAX_BULK, available slots) stack entries,
+ * then push and pop those stack entries.
+ */
+ do {
+ uint64_t sz = rte_atomic64_read(t->sz);
+ volatile uint64_t *sz_addr;
+
+ sz_addr = (volatile uint64_t *)t->sz;
+
+ num = RTE_MIN(rte_rand() % MAX_BULK, STACK_SIZE - sz);
+
+ success = rte_atomic64_cmpset(sz_addr, sz, sz + num);
+ } while (success == 0);
+
+ if (rte_stack_push(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to push %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ if (rte_stack_pop(t->s, obj_table, num) != num) {
+ printf("[%s():%u] Failed to pop %u pointers\n",
+ __func__, __LINE__, num);
+ rte_free(obj_table);
+ return -1;
+ }
+
+ rte_atomic64_sub(t->sz, num);
+ }
+
+ rte_free(obj_table);
+ return 0;
+}
+
+static int
+test_stack_multithreaded(void)
+{
+ struct test_args *args;
+ unsigned int lcore_id;
+ struct rte_stack *s;
+ rte_atomic64_t size;
+
+ printf("[%s():%u] Running with %u lcores\n",
+ __func__, __LINE__, rte_lcore_count());
+
+ if (rte_lcore_count() < 2)
+ return 0;
+
+ args = rte_malloc(NULL, sizeof(struct test_args) * RTE_MAX_LCORE, 0);
+ if (args == NULL) {
+ printf("[%s():%u] failed to malloc %zu bytes\n",
+ __func__, __LINE__,
+ sizeof(struct test_args) * RTE_MAX_LCORE);
+ return -1;
+ }
+
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] Failed to create a stack\n",
+ __func__, __LINE__);
+ rte_free(args);
+ return -1;
+ }
+
+ rte_atomic64_init(&size);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ if (rte_eal_remote_launch(stack_thread_push_pop,
+ &args[lcore_id], lcore_id))
+ rte_panic("Failed to launch lcore %d\n", lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = &size;
+
+ stack_thread_push_pop(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ rte_stack_free(s);
+ rte_free(args);
+
+ return 0;
+}
+
+static int
+test_stack(void)
+{
+ if (test_stack_basic() < 0)
+ return -1;
+
+ if (test_lookup_null() < 0)
+ return -1;
+
+ if (test_free_null() < 0)
+ return -1;
+
+ if (test_stack_name_reuse() < 0)
+ return -1;
+
+ if (test_stack_name_length() < 0)
+ return -1;
+
+ if (test_stack_multithreaded() < 0)
+ return -1;
+
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_autotest, test_stack);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 4/8] test/stack: add stack perf test
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (3 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 3/8] test/stack: add stack test Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 5/8] stack: add lock-free stack implementation Gage Eads
` (4 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
stack_perf_autotest tests the following with one lcore:
- Cycles to attempt to pop an empty stack
- Cycles to push then pop a single object
- Cycles to push then pop a burst of 32 objects
It also tests the cycles to push then pop a burst of 8 and 32 objects with
the following lcore combinations (if possible):
- Two hyperthreads
- Two physical cores
- Two physical cores on separate NUMA nodes
- All available lcores
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
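Every measurement below follows the same pattern: read the TSC, run a tight
push/pop loop, and divide the cycle delta by the number of objects moved. A
minimal sketch of that pattern (the helper name is hypothetical, and burst is
assumed to be at most 32):

#include <stdint.h>

#include <rte_cycles.h>
#include <rte_stack.h>

/* Hedged sketch: average cycles per object for paired push/pop calls. */
static double
measure_push_pop_cycles(struct rte_stack *s, unsigned int burst,
			unsigned int iterations)
{
	void *objs[32] = {0};
	unsigned int i;
	uint64_t start, end;

	start = rte_rdtsc();

	for (i = 0; i < iterations; i++) {
		rte_stack_push(s, objs, burst);
		rte_stack_pop(s, objs, burst);
	}

	end = rte_rdtsc();

	/* Each iteration pushes and pops 'burst' objects. */
	return (double)(end - start) / ((double)iterations * burst);
}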
---
app/test/Makefile | 1 +
app/test/meson.build | 2 +
app/test/test_stack_perf.c | 345 +++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 348 insertions(+)
create mode 100644 app/test/test_stack_perf.c
diff --git a/app/test/Makefile b/app/test/Makefile
index e5bde81af..b28bed2d4 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -91,6 +91,7 @@ endif
SRCS-y += test_rwlock.c
SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack.c
+SRCS-$(CONFIG_RTE_LIBRTE_STACK) += test_stack_perf.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer.c
SRCS-$(CONFIG_RTE_LIBRTE_TIMER) += test_timer_perf.c
diff --git a/app/test/meson.build b/app/test/meson.build
index 56ea13f53..02eb788a4 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -96,6 +96,7 @@ test_sources = files('commands.c',
'test_service_cores.c',
'test_spinlock.c',
'test_stack.c',
+ 'test_stack_perf.c',
'test_string_fns.c',
'test_table.c',
'test_table_acl.c',
@@ -241,6 +242,7 @@ perf_test_names = [
'distributor_perf_autotest',
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
+ 'stack_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
new file mode 100644
index 000000000..a44fbb73e
--- /dev/null
+++ b/app/test/test_stack_perf.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+
+#include <stdio.h>
+#include <inttypes.h>
+
+#include <rte_atomic.h>
+#include <rte_cycles.h>
+#include <rte_launch.h>
+#include <rte_pause.h>
+#include <rte_stack.h>
+
+#include "test.h"
+
+#define STACK_NAME "STACK_PERF"
+#define MAX_BURST 32
+#define STACK_SIZE (RTE_MAX_LCORE * MAX_BURST)
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+/*
+ * Push/pop bulk sizes, marked volatile so they aren't treated as compile-time
+ * constants.
+ */
+static volatile unsigned int bulk_sizes[] = {8, MAX_BURST};
+
+static rte_atomic32_t lcore_barrier;
+
+struct lcore_pair {
+ unsigned int c1;
+ unsigned int c2;
+};
+
+static int
+get_two_hyperthreads(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] == core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_cores(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int core[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ core[0] = lcore_config[id[0]].core_id;
+ core[1] = lcore_config[id[1]].core_id;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if ((core[0] != core[1]) && (socket[0] == socket[1])) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+static int
+get_two_sockets(struct lcore_pair *lcp)
+{
+ unsigned int socket[2];
+ unsigned int id[2];
+
+ RTE_LCORE_FOREACH(id[0]) {
+ RTE_LCORE_FOREACH(id[1]) {
+ if (id[0] == id[1])
+ continue;
+ socket[0] = lcore_config[id[0]].socket_id;
+ socket[1] = lcore_config[id[1]].socket_id;
+ if (socket[0] != socket[1]) {
+ lcp->c1 = id[0];
+ lcp->c2 = id[1];
+ return 0;
+ }
+ }
+ }
+
+ return 1;
+}
+
+/* Measure the cycle cost of popping an empty stack. */
+static void
+test_empty_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 100000000;
+ void *objs[MAX_BURST];
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++)
+ rte_stack_pop(s, objs, bulk_sizes[0]);
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Stack empty pop: %.2F\n",
+ (double)(end - start) / iterations);
+}
+
+struct thread_args {
+ struct rte_stack *s;
+ unsigned int sz;
+ double avg;
+};
+
+/* Measure the average per-pointer cycle cost of stack push and pop */
+static int
+bulk_push_pop(void *p)
+{
+ unsigned int iterations = 1000000;
+ struct thread_args *args = p;
+ void *objs[MAX_BURST] = {0};
+ unsigned int size, i;
+ struct rte_stack *s;
+
+ s = args->s;
+ size = args->sz;
+
+ rte_atomic32_sub(&lcore_barrier, 1);
+ while (rte_atomic32_read(&lcore_barrier) != 0)
+ rte_pause();
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, size);
+ rte_stack_pop(s, objs, size);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ args->avg = ((double)(end - start))/(iterations * size);
+
+ return 0;
+}
+
+/*
+ * Run bulk_push_pop() simultaneously on pairs of cores, to measure stack
+ * perf between hyperthread siblings, cores on the same socket, and cores
+ * on different sockets.
+ */
+static void
+run_on_core_pair(struct lcore_pair *cores, struct rte_stack *s,
+ lcore_function_t fn)
+{
+ struct thread_args args[2];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ rte_atomic32_set(&lcore_barrier, 2);
+
+ args[0].sz = args[1].sz = bulk_sizes[i];
+ args[0].s = args[1].s = s;
+
+ if (cores->c1 == rte_get_master_lcore()) {
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ fn(&args[0]);
+ rte_eal_wait_lcore(cores->c2);
+ } else {
+ rte_eal_remote_launch(fn, &args[0], cores->c1);
+ rte_eal_remote_launch(fn, &args[1], cores->c2);
+ rte_eal_wait_lcore(cores->c1);
+ rte_eal_wait_lcore(cores->c2);
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], (args[0].avg + args[1].avg) / 2);
+ }
+}
+
+/* Run bulk_push_pop() simultaneously on 1+ cores. */
+static void
+run_on_n_cores(struct rte_stack *s, lcore_function_t fn, int n)
+{
+ struct thread_args args[RTE_MAX_LCORE];
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(bulk_sizes); i++) {
+ unsigned int lcore_id;
+ int cnt = 0;
+ double avg;
+
+ rte_atomic32_set(&lcore_barrier, n);
+
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ if (rte_eal_remote_launch(fn, &args[lcore_id],
+ lcore_id))
+ rte_panic("Failed to launch lcore %d\n",
+ lcore_id);
+ }
+
+ lcore_id = rte_lcore_id();
+
+ args[lcore_id].s = s;
+ args[lcore_id].sz = bulk_sizes[i];
+
+ fn(&args[lcore_id]);
+
+ rte_eal_mp_wait_lcore();
+
+ avg = args[rte_lcore_id()].avg;
+
+ cnt = 0;
+ RTE_LCORE_FOREACH_SLAVE(lcore_id) {
+ if (++cnt >= n)
+ break;
+ avg += args[lcore_id].avg;
+ }
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[i], avg / n);
+ }
+}
+
+/*
+ * Measure the cycle cost of pushing and popping a single pointer on a single
+ * lcore.
+ */
+static void
+test_single_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 16000000;
+ void *obj = NULL;
+ unsigned int i;
+
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, &obj, 1);
+ rte_stack_pop(s, &obj, 1);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ printf("Average cycles per single object push/pop: %.2F\n",
+ ((double)(end - start)) / iterations);
+}
+
+/* Measure the cycle cost of bulk pushing and popping on a single lcore. */
+static void
+test_bulk_push_pop(struct rte_stack *s)
+{
+ unsigned int iterations = 8000000;
+ void *objs[MAX_BURST];
+ unsigned int sz, i;
+
+ for (sz = 0; sz < ARRAY_SIZE(bulk_sizes); sz++) {
+ uint64_t start = rte_rdtsc();
+
+ for (i = 0; i < iterations; i++) {
+ rte_stack_push(s, objs, bulk_sizes[sz]);
+ rte_stack_pop(s, objs, bulk_sizes[sz]);
+ }
+
+ uint64_t end = rte_rdtsc();
+
+ double avg = ((double)(end - start) /
+ (iterations * bulk_sizes[sz]));
+
+ printf("Average cycles per object push/pop (bulk size: %u): %.2F\n",
+ bulk_sizes[sz], avg);
+ }
+}
+
+static int
+test_stack_perf(void)
+{
+ struct lcore_pair cores;
+ struct rte_stack *s;
+
+ rte_atomic32_init(&lcore_barrier);
+
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ if (s == NULL) {
+ printf("[%s():%u] failed to create a stack\n",
+ __func__, __LINE__);
+ return -1;
+ }
+
+ printf("### Testing single element push/pop ###\n");
+ test_single_push_pop(s);
+
+ printf("\n### Testing empty pop ###\n");
+ test_empty_pop(s);
+
+ printf("\n### Testing using a single lcore ###\n");
+ test_bulk_push_pop(s);
+
+ if (get_two_hyperthreads(&cores) == 0) {
+ printf("\n### Testing using two hyperthreads ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_cores(&cores) == 0) {
+ printf("\n### Testing using two physical cores ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+ if (get_two_sockets(&cores) == 0) {
+ printf("\n### Testing using two NUMA nodes ###\n");
+ run_on_core_pair(&cores, s, bulk_push_pop);
+ }
+
+ printf("\n### Testing on all %u lcores ###\n", rte_lcore_count());
+ run_on_n_cores(s, bulk_push_pop, rte_lcore_count());
+
+ rte_stack_free(s);
+ return 0;
+}
+
+REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 5/8] stack: add lock-free stack implementation
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (4 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 4/8] test/stack: add stack perf test Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 6/8] stack: add C11 atomic implementation Gage Eads
` (3 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds support for a lock-free (linked list based) stack to the
stack API. This behavior is selected through a new rte_stack_create() flag,
RTE_STACK_F_LF.
The stack consists of a linked list of elements, each containing a data
pointer and a next pointer, and an atomic stack depth counter.
The lock-free push operation enqueues a linked list of pointers by pointing
the tail of the list to the current stack head, and using a CAS to swing
the stack head pointer to the head of the list. The operation retries if it
is unsuccessful (i.e. the list changed between reading the head and
modifying it), else it adjusts the stack length and returns.
The lock-free pop operation first reserves num elements by adjusting the
stack length, to ensure the dequeue operation will succeed without
blocking. It then dequeues pointers by walking the list -- starting from
the head -- then swinging the head pointer (using a CAS as well). While
walking the list, the data pointers are recorded in an object table.
To protect against the ABA problem, this algorithm uses a 128-bit
compare-and-swap instruction that atomically updates both the stack top
pointer and a modification counter.
The linked list elements themselves are maintained in a lock-free LIFO
list, and are allocated before stack pushes and freed after stack pops.
Since the stack has a fixed maximum depth, these elements do not need to be
dynamically created.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
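A simplified sketch of the push and pop loops described above may help before
reading the diff. It uses GCC's generic __atomic builtins on a 16-byte head
(on x86-64 this can compile to cmpxchg16b with -mcx16); all names here are
illustrative, and the stack-length accounting and exact memory orderings of
the real implementation are omitted:

#include <stdbool.h>
#include <stdint.h>

struct lf_elem {
	void *data;
	struct lf_elem *next;
};

/* 16-byte head: top pointer plus a modification counter, updated together
 * by a single 128-bit compare-and-swap to defeat the ABA problem.
 */
struct lf_head {
	struct lf_elem *top;
	uint64_t cnt;
} __attribute__((aligned(16)));

/* Push a pre-linked chain [first..last] onto the stack. */
static void
lf_push(struct lf_head *head, struct lf_elem *first, struct lf_elem *last)
{
	struct lf_head old_head, new_head;

	old_head = *head; /* racy snapshot; a torn read just forces a retry */

	do {
		/* Point the chain's tail at the current top, then try to
		 * swing the head to 'first' while bumping the counter. On
		 * failure, __atomic_compare_exchange refreshes old_head.
		 */
		last->next = old_head.top;
		new_head.top = first;
		new_head.cnt = old_head.cnt + 1;
	} while (!__atomic_compare_exchange(head, &old_head, &new_head,
					    false, __ATOMIC_RELEASE,
					    __ATOMIC_RELAXED));
}

/* Pop n (>= 1) data pointers into obj_table. The caller is assumed to have
 * reserved n entries by decrementing the stack length first, so the list
 * holds at least n elements; elements come from a fixed pool that is never
 * freed, which makes the speculative walk below safe.
 */
static void
lf_pop(struct lf_head *head, unsigned int n, void **obj_table)
{
	struct lf_head old_head, new_head;
	struct lf_elem *tmp;
	unsigned int i;

	old_head = *head;

	do {
		/* Walk n-1 links past the current top; the element after
		 * the detached chain becomes the new top.
		 */
		tmp = old_head.top;
		for (i = 0; i < n - 1; i++)
			tmp = tmp->next;

		new_head.top = tmp->next;
		new_head.cnt = old_head.cnt + 1;
	} while (!__atomic_compare_exchange(head, &old_head, &new_head,
					    false, __ATOMIC_ACQUIRE,
					    __ATOMIC_RELAXED));

	/* The chain [old_head.top..tmp] is now private; record its data. */
	for (tmp = old_head.top, i = 0; i < n; i++, tmp = tmp->next)
		obj_table[i] = tmp->data;
}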
---
doc/guides/prog_guide/stack_lib.rst | 61 +++++++++++-
doc/guides/rel_notes/release_19_05.rst | 3 +
lib/librte_stack/Makefile | 7 +-
lib/librte_stack/meson.build | 7 +-
lib/librte_stack/rte_stack.c | 28 ++++--
lib/librte_stack/rte_stack.h | 63 +++++++++++-
lib/librte_stack/rte_stack_lf.c | 31 ++++++
lib/librte_stack/rte_stack_lf.h | 102 ++++++++++++++++++++
lib/librte_stack/rte_stack_lf_generic.h | 164 ++++++++++++++++++++++++++++++++
9 files changed, 447 insertions(+), 19 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf.c
create mode 100644 lib/librte_stack/rte_stack_lf.h
create mode 100644 lib/librte_stack/rte_stack_lf_generic.h
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 25a8cc38a..8fe8804e3 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -10,7 +10,8 @@ stack of pointers.
The stack library provides the following basic operations:
* Create a uniquely named stack of a user-specified size and using a
- user-specified socket.
+ user-specified socket, with either standard (lock-based) or lock-free
+ behavior.
* Push and pop a burst of one or more stack objects (pointers). These functions
are multi-thread safe.
@@ -24,5 +25,59 @@ The stack library provides the following basic operations:
Implementation
~~~~~~~~~~~~~~
-The stack consists of a contiguous array of pointers, a current index, and a
-spinlock. Accesses to the stack are made multi-thread safe by the spinlock.
+The library supports two types of stacks: standard (lock-based) and lock-free.
+Both types use the same set of interfaces, but their implementations differ.
+
+Lock-based Stack
+----------------
+
+The lock-based stack consists of a contiguous array of pointers, a current
+index, and a spinlock. Accesses to the stack are made multi-thread safe by the
+spinlock.
+
+Lock-free Stack
+------------------
+
+The lock-free stack consists of a linked list of elements, each containing a
+data pointer and a next pointer, and an atomic stack depth counter. The
+lock-free property means that multiple threads can push and pop simultaneously,
+and one thread being preempted/delayed in a push or pop operation will not
+impede the forward progress of any other thread.
+
+The lock-free push operation enqueues a linked list of pointers by pointing the
+list's tail to the current stack head, and using a CAS to swing the stack head
+pointer to the head of the list. The operation retries if it is unsuccessful
+(i.e. the list changed between reading the head and modifying it), else it
+adjusts the stack length and returns.
+
+The lock-free pop operation first reserves one or more list elements by
+adjusting the stack length, to ensure the dequeue operation will succeed
+without blocking. It then dequeues pointers by walking the list -- starting
+from the head -- then swinging the head pointer (using a CAS as well). While
+walking the list, the data pointers are recorded in an object table.
+
+The linked list elements themselves are maintained in a lock-free LIFO, and are
+allocated before stack pushes and freed after stack pops. Since the stack has a
+fixed maximum depth, these elements do not need to be dynamically created.
+
+The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
+rte_stack_create().
+
+Preventing the ABA Problem
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To prevent the ABA problem, this algorithm uses a 128-bit
+compare-and-swap instruction to atomically update both the stack top pointer
+and a modification counter. The ABA problem can occur without a modification
+counter if, for example:
+
+1. Thread A reads head pointer X and stores the pointed-to list element.
+2. Other threads modify the list such that the head pointer is once again X,
+ but its pointed-to data is different than what thread A read.
+3. Thread A changes the head pointer with a compare-and-swap and succeeds.
+
+In this case thread A would not detect that the list had changed, and would
+both pop stale data and incorrectly change the head pointer. By adding a
+modification counter that is updated on every push and pop as part of the
+compare-and-swap, the algorithm can detect when the list changes even if the
+head pointer remains the same.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index ebfbe36e5..3b115b5f6 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -127,6 +127,9 @@ New Features
pointers. The API provides MT-safe push and pop operations that can operate
on one or more pointers per operation.
+ The library supports two stack implementations: standard (lock-based) and lock-free.
+ The lock-free implementation is currently limited to x86-64 platforms.
+
Removed Items
-------------
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 6db540073..311edd997 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -16,10 +16,13 @@ LIBABIVER := 1
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
- rte_stack_std.c
+ rte_stack_std.c \
+ rte_stack_lf.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
- rte_stack_std.h
+ rte_stack_std.h \
+ rte_stack_lf.h \
+ rte_stack_lf_generic.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index d2e60ce9b..7a09a5d66 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -4,5 +4,8 @@
allow_experimental_apis = true
version = 1
-sources = files('rte_stack.c', 'rte_stack_std.c')
-headers = files('rte_stack.h', 'rte_stack_std.h')
+sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
+headers = files('rte_stack.h',
+ 'rte_stack_std.h',
+ 'rte_stack_lf.h',
+ 'rte_stack_lf_generic.h')
diff --git a/lib/librte_stack/rte_stack.c b/lib/librte_stack/rte_stack.c
index 610014b6c..1a4d9bd1e 100644
--- a/lib/librte_stack/rte_stack.c
+++ b/lib/librte_stack/rte_stack.c
@@ -25,18 +25,25 @@ static struct rte_tailq_elem rte_stack_tailq = {
};
EAL_REGISTER_TAILQ(rte_stack_tailq)
+
static void
-rte_stack_init(struct rte_stack *s)
+rte_stack_init(struct rte_stack *s, unsigned int count, uint32_t flags)
{
memset(s, 0, sizeof(*s));
- rte_stack_std_init(s);
+ if (flags & RTE_STACK_F_LF)
+ rte_stack_lf_init(s, count);
+ else
+ rte_stack_std_init(s);
}
static ssize_t
-rte_stack_get_memsize(unsigned int count)
+rte_stack_get_memsize(unsigned int count, uint32_t flags)
{
- return rte_stack_std_get_memsize(count);
+ if (flags & RTE_STACK_F_LF)
+ return rte_stack_lf_get_memsize(count);
+ else
+ return rte_stack_std_get_memsize(count);
}
struct rte_stack *
@@ -51,9 +58,16 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
unsigned int sz;
int ret;
- RTE_SET_USED(flags);
+#ifdef RTE_ARCH_64
+ RTE_BUILD_BUG_ON(sizeof(struct rte_stack_lf_head) != 16);
+#else
+ if (flags & RTE_STACK_F_LF) {
+ STACK_LOG_ERR("Lock-free stack is not supported on your platform\n");
+ return NULL;
+ }
+#endif
- sz = rte_stack_get_memsize(count);
+ sz = rte_stack_get_memsize(count, flags);
ret = snprintf(mz_name, sizeof(mz_name), "%s%s",
RTE_STACK_MZ_PREFIX, name);
@@ -82,7 +96,7 @@ rte_stack_create(const char *name, unsigned int count, int socket_id,
s = mz->addr;
- rte_stack_init(s);
+ rte_stack_init(s, count, flags);
/* Store the name for later lookups */
ret = snprintf(s->name, sizeof(s->name), "%s", name);
diff --git a/lib/librte_stack/rte_stack.h b/lib/librte_stack/rte_stack.h
index 42d042715..fe048f071 100644
--- a/lib/librte_stack/rte_stack.h
+++ b/lib/librte_stack/rte_stack.h
@@ -20,6 +20,7 @@
extern "C" {
#endif
+#include <rte_atomic.h>
#include <rte_compat.h>
#include <rte_debug.h>
#include <rte_errno.h>
@@ -32,6 +33,35 @@ extern "C" {
#define RTE_STACK_NAMESIZE (RTE_MEMZONE_NAMESIZE - \
sizeof(RTE_STACK_MZ_PREFIX) + 1)
+struct rte_stack_lf_elem {
+ void *data; /**< Data pointer */
+ struct rte_stack_lf_elem *next; /**< Next pointer */
+};
+
+struct rte_stack_lf_head {
+ struct rte_stack_lf_elem *top; /**< Stack top */
+ uint64_t cnt; /**< Modification counter for avoiding ABA problem */
+};
+
+struct rte_stack_lf_list {
+ /** List head */
+ struct rte_stack_lf_head head __rte_aligned(16);
+ /** List len */
+ rte_atomic64_t len;
+};
+
+/* Structure containing two lock-free LIFO lists: the stack itself and a list
+ * of free linked-list elements.
+ */
+struct rte_stack_lf {
+ /** LIFO list of elements */
+ struct rte_stack_lf_list used __rte_cache_aligned;
+ /** LIFO list of free elements */
+ struct rte_stack_lf_list free __rte_cache_aligned;
+ /** LIFO elements */
+ struct rte_stack_lf_elem elems[] __rte_cache_aligned;
+};
+
/* Structure containing the LIFO, its current length, and a lock for mutual
* exclusion.
*/
@@ -51,10 +81,21 @@ struct rte_stack {
const struct rte_memzone *memzone;
uint32_t capacity; /**< Usable size of the stack. */
uint32_t flags; /**< Flags supplied at creation. */
- struct rte_stack_std stack_std; /**< LIFO structure. */
+ RTE_STD_C11
+ union {
+ struct rte_stack_lf stack_lf; /**< Lock-free LIFO structure. */
+ struct rte_stack_std stack_std; /**< LIFO structure. */
+ };
} __rte_cache_aligned;
+/**
+ * The stack uses lock-free push and pop functions. This flag is currently
+ * only supported on x86_64 platforms.
+ */
+#define RTE_STACK_F_LF 0x0001
+
#include "rte_stack_std.h"
+#include "rte_stack_lf.h"
/**
* @warning
@@ -77,7 +118,10 @@ rte_stack_push(struct rte_stack *s, void * const *obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_push(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_push(s, obj_table, n);
+ else
+ return __rte_stack_std_push(s, obj_table, n);
}
/**
@@ -101,7 +145,10 @@ rte_stack_pop(struct rte_stack *s, void **obj_table, unsigned int n)
RTE_ASSERT(s != NULL);
RTE_ASSERT(obj_table != NULL);
- return __rte_stack_std_pop(s, obj_table, n);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_pop(s, obj_table, n);
+ else
+ return __rte_stack_std_pop(s, obj_table, n);
}
/**
@@ -120,7 +167,10 @@ rte_stack_count(struct rte_stack *s)
{
RTE_ASSERT(s != NULL);
- return __rte_stack_std_count(s);
+ if (s->flags & RTE_STACK_F_LF)
+ return __rte_stack_lf_count(s);
+ else
+ return __rte_stack_std_count(s);
}
/**
@@ -160,7 +210,10 @@ rte_stack_free_count(struct rte_stack *s)
* NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
* constraint for the reserved zone.
* @param flags
- * Reserved for future use.
+ * An OR of the following:
+ * - RTE_STACK_F_LF: If this flag is set, the stack uses lock-free
+ * variants of the push and pop functions. Otherwise, it achieves
+ * thread-safety using a lock.
* @return
* On success, the pointer to the new allocated stack. NULL on error with
* rte_errno set appropriately. Possible errno values include:
diff --git a/lib/librte_stack/rte_stack_lf.c b/lib/librte_stack/rte_stack_lf.c
new file mode 100644
index 000000000..0adcc263e
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.c
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#include "rte_stack.h"
+
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count)
+{
+ struct rte_stack_lf_elem *elems = s->stack_lf.elems;
+ unsigned int i;
+
+ for (i = 0; i < count; i++)
+ __rte_stack_lf_push_elems(&s->stack_lf.free,
+ &elems[i], &elems[i], 1);
+}
+
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count)
+{
+ ssize_t sz = sizeof(struct rte_stack);
+
+ sz += RTE_CACHE_LINE_ROUNDUP(count * sizeof(struct rte_stack_lf_elem));
+
+ /* Add padding to avoid false sharing conflicts caused by
+ * next-line hardware prefetchers.
+ */
+ sz += 2 * RTE_CACHE_LINE_SIZE;
+
+ return sz;
+}
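For reference, a worked example of this computation, assuming a 64-bit build with 64-byte cache lines (the exact size of struct rte_stack depends on the build):

/* count = 1024 lock-free elements:
 *   sizeof(struct rte_stack_lf_elem) = 16       (data + next pointers)
 *   RTE_CACHE_LINE_ROUNDUP(1024 * 16) = 16384   (already a 64 B multiple)
 *   anti-false-sharing padding = 2 * 64 = 128
 * => sz = sizeof(struct rte_stack) + 16384 + 128 bytes
 */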
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
new file mode 100644
index 000000000..bfd680133
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -0,0 +1,102 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_H_
+#define _RTE_STACK_LF_H_
+
+#include "rte_stack_lf_generic.h"
+
+/**
+ * @internal Push several objects on the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to push on the stack from the obj_table.
+ * @return
+ * Actual number of objects enqueued.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_push(struct rte_stack *s,
+ void * const *obj_table,
+ unsigned int n)
+{
+ struct rte_stack_lf_elem *tmp, *first, *last = NULL;
+ unsigned int i;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n free elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.free, n, NULL, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Construct the list elements */
+ for (tmp = first, i = 0; i < n; i++, tmp = tmp->next)
+ tmp->data = obj_table[n - i - 1];
+
+ /* Push them to the used list */
+ __rte_stack_lf_push_elems(&s->stack_lf.used, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Pop several objects from the lock-free stack (MT-safe).
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param obj_table
+ * A pointer to a table of void * pointers (objects).
+ * @param n
+ * The number of objects to pull from the stack.
+ * @return
+ * - Actual number of objects popped.
+ */
+static __rte_always_inline unsigned int __rte_experimental
+__rte_stack_lf_pop(struct rte_stack *s, void **obj_table, unsigned int n)
+{
+ struct rte_stack_lf_elem *first, *last = NULL;
+
+ if (unlikely(n == 0))
+ return 0;
+
+ /* Pop n used elements */
+ first = __rte_stack_lf_pop_elems(&s->stack_lf.used,
+ n, obj_table, &last);
+ if (unlikely(first == NULL))
+ return 0;
+
+ /* Push the list elements to the free list */
+ __rte_stack_lf_push_elems(&s->stack_lf.free, first, last, n);
+
+ return n;
+}
+
+/**
+ * @internal Initialize a lock-free stack.
+ *
+ * @param s
+ * A pointer to the stack structure.
+ * @param count
+ * The size of the stack.
+ */
+void
+rte_stack_lf_init(struct rte_stack *s, unsigned int count);
+
+/**
+ * @internal Return the memory required for a lock-free stack.
+ *
+ * @param count
+ * The size of the stack.
+ * @return
+ * The bytes to allocate for a lock-free stack.
+ */
+ssize_t
+rte_stack_lf_get_memsize(unsigned int count);
+
+#endif /* _RTE_STACK_LF_H_ */
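As a usage illustration, here is a hypothetical application sketch (not part of the patch) exercising the push/pop wrappers that dispatch on RTE_STACK_F_LF; the stack name and sizes are arbitrary, and it assumes the standard DPDK rte_stack_free() and rte_socket_id() APIs:

#include <stdint.h>
#include <rte_lcore.h>
#include <rte_stack.h>

static int
lf_stack_example(void)
{
	void *in[8], *out[8];
	struct rte_stack *s;
	unsigned int i;

	for (i = 0; i < 8; i++)
		in[i] = (void *)(uintptr_t)i;

	/* RTE_STACK_F_LF selects the lock-free push/pop variants */
	s = rte_stack_create("example_lf", 64, rte_socket_id(),
			     RTE_STACK_F_LF);
	if (s == NULL)
		return -1;	/* rte_errno holds the cause */

	/* Lock-free push and pop are all-or-nothing: they return n or 0 */
	if (rte_stack_push(s, in, 8) != 8)
		goto fail;
	if (rte_stack_pop(s, out, 8) != 8)
		goto fail;

	/* LIFO order: out[0] == in[7] */
	rte_stack_free(s);
	return 0;
fail:
	rte_stack_free(s);
	return -1;
}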
diff --git a/lib/librte_stack/rte_stack_lf_generic.h b/lib/librte_stack/rte_stack_lf_generic.h
new file mode 100644
index 000000000..1191406d3
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_generic.h
@@ -0,0 +1,164 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_GENERIC_H_
+#define _RTE_STACK_LF_GENERIC_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)rte_atomic64_read(&s->stack_lf.used.len);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to establish a synchronized-with relationship between
+ * the list->head load and store-release operations (as part of
+ * the rte_atomic128_cmp_exchange()).
+ */
+ rte_smp_mb();
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ rte_atomic64_add(&list->len, num);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ /* Reserve num elements, if available */
+ while (1) {
+ uint64_t len = rte_atomic64_read(&list->len);
+
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ if (rte_atomic64_cmpset((volatile uint64_t *)&list->len,
+ len, len - num))
+ break;
+ }
+
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* An acquire fence (or stronger) is needed for weak memory
+ * models to ensure the LF LIFO element reads are properly
+ * ordered with respect to the head pointer read.
+ */
+ rte_smp_mb();
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ /* old_head is updated on failure */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_GENERIC_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 6/8] stack: add C11 atomic implementation
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (5 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 5/8] stack: add lock-free stack implementation Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 7/8] test/stack: add lock-free stack tests Gage Eads
` (2 subsequent siblings)
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds an implementation of the lock-free stack push, pop, and
length functions that use __atomic builtins, for systems that benefit from
the finer-grained memory ordering control.
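For illustration only, a standalone sketch of the acquire-ordered length reservation that the C11 pop path below performs; reserve_items() and its parameters are hypothetical, while the patch's real loop operates on list->len.cnt:

#include <stdbool.h>
#include <stdint.h>

/* Atomically reserve 'num' items from a length counter, mirroring the
 * reservation loop in __rte_stack_lf_pop_elems(). Returns false if fewer
 * than 'num' items are available.
 */
static bool
reserve_items(uint64_t *len, unsigned int num)
{
	uint64_t old = __atomic_load_n(len, __ATOMIC_ACQUIRE);

	do {
		if (old < num)
			return false;
		/* On CAS failure, 'old' is refreshed with the current value */
	} while (!__atomic_compare_exchange_n(len, &old, old - num,
					      0 /* strong */,
					      __ATOMIC_ACQUIRE,
					      __ATOMIC_ACQUIRE));
	return true;
}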
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
lib/librte_stack/Makefile | 3 +-
lib/librte_stack/meson.build | 3 +-
lib/librte_stack/rte_stack_lf.h | 4 +
lib/librte_stack/rte_stack_lf_c11.h | 175 ++++++++++++++++++++++++++++++++++++
4 files changed, 183 insertions(+), 2 deletions(-)
create mode 100644 lib/librte_stack/rte_stack_lf_c11.h
diff --git a/lib/librte_stack/Makefile b/lib/librte_stack/Makefile
index 311edd997..8d18ce520 100644
--- a/lib/librte_stack/Makefile
+++ b/lib/librte_stack/Makefile
@@ -23,6 +23,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_STACK) := rte_stack.c \
SYMLINK-$(CONFIG_RTE_LIBRTE_STACK)-include := rte_stack.h \
rte_stack_std.h \
rte_stack_lf.h \
- rte_stack_lf_generic.h
+ rte_stack_lf_generic.h \
+ rte_stack_lf_c11.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_stack/meson.build b/lib/librte_stack/meson.build
index 7a09a5d66..46fce0c20 100644
--- a/lib/librte_stack/meson.build
+++ b/lib/librte_stack/meson.build
@@ -8,4 +8,5 @@ sources = files('rte_stack.c', 'rte_stack_std.c', 'rte_stack_lf.c')
headers = files('rte_stack.h',
'rte_stack_std.h',
'rte_stack_lf.h',
- 'rte_stack_lf_generic.h')
+ 'rte_stack_lf_generic.h',
+ 'rte_stack_lf_c11.h')
diff --git a/lib/librte_stack/rte_stack_lf.h b/lib/librte_stack/rte_stack_lf.h
index bfd680133..518889a05 100644
--- a/lib/librte_stack/rte_stack_lf.h
+++ b/lib/librte_stack/rte_stack_lf.h
@@ -5,7 +5,11 @@
#ifndef _RTE_STACK_LF_H_
#define _RTE_STACK_LF_H_
+#ifdef RTE_USE_C11_MEM_MODEL
+#include "rte_stack_lf_c11.h"
+#else
#include "rte_stack_lf_generic.h"
+#endif
/**
* @internal Push several objects on the lock-free stack (MT-safe).
diff --git a/lib/librte_stack/rte_stack_lf_c11.h b/lib/librte_stack/rte_stack_lf_c11.h
new file mode 100644
index 000000000..a316e9af5
--- /dev/null
+++ b/lib/librte_stack/rte_stack_lf_c11.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2019 Intel Corporation
+ */
+
+#ifndef _RTE_STACK_LF_C11_H_
+#define _RTE_STACK_LF_C11_H_
+
+#include <rte_branch_prediction.h>
+#include <rte_prefetch.h>
+
+static __rte_always_inline unsigned int
+__rte_stack_lf_count(struct rte_stack *s)
+{
+ /* stack_lf_push() and stack_lf_pop() do not update the list's contents
+ * and stack_lf->len atomically, which can cause the list to appear
+ * shorter than it actually is if this function is called while other
+ * threads are modifying the list.
+ *
+ * However, given the inherently approximate nature of the get_count
+ * callback -- even if the list and its size were updated atomically,
+ * the size could change between when get_count executes and when the
+ * value is returned to the caller -- this is acceptable.
+ *
+ * The stack_lf->len updates are placed such that the list may appear to
+ * have fewer elements than it does, but will never appear to have more
+ * elements. If the mempool is near-empty to the point that this is a
+ * concern, the user should consider increasing the mempool size.
+ */
+ return (unsigned int)__atomic_load_n(&s->stack_lf.used.len.cnt,
+ __ATOMIC_RELAXED);
+}
+
+static __rte_always_inline void
+__rte_stack_lf_push_elems(struct rte_stack_lf_list *list,
+ struct rte_stack_lf_elem *first,
+ struct rte_stack_lf_elem *last,
+ unsigned int num)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(first);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+#else
+ struct rte_stack_lf_head old_head;
+ int success;
+
+ old_head = list->head;
+
+ do {
+ struct rte_stack_lf_head new_head;
+
+ /* Use an acquire fence to establish a synchronized-with
+ * relationship between the list->head load and store-release
+ * operations (as part of the rte_atomic128_cmp_exchange()).
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ /* Swing the top pointer to the first element in the list and
+ * make the last element point to the old top.
+ */
+ new_head.top = first;
+ new_head.cnt = old_head.cnt + 1;
+
+ last->next = old_head.top;
+
+ /* Use the release memmodel to ensure the writes to the LF LIFO
+ * elements are visible before the head pointer write.
+ */
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ /* Ensure the stack modifications are not reordered with respect
+ * to the LIFO len update.
+ */
+ __atomic_add_fetch(&list->len.cnt, num, __ATOMIC_RELEASE);
+#endif
+}
+
+static __rte_always_inline struct rte_stack_lf_elem *
+__rte_stack_lf_pop_elems(struct rte_stack_lf_list *list,
+ unsigned int num,
+ void **obj_table,
+ struct rte_stack_lf_elem **last)
+{
+#ifndef RTE_ARCH_X86_64
+ RTE_SET_USED(obj_table);
+ RTE_SET_USED(last);
+ RTE_SET_USED(list);
+ RTE_SET_USED(num);
+
+ return NULL;
+#else
+ struct rte_stack_lf_head old_head;
+ uint64_t len;
+ int success;
+
+ /* Reserve num elements, if available */
+ len = __atomic_load_n(&list->len.cnt, __ATOMIC_ACQUIRE);
+
+ while (1) {
+ /* Does the list contain enough elements? */
+ if (unlikely(len < num))
+ return NULL;
+
+ /* len is updated on failure */
+ if (__atomic_compare_exchange_n(&list->len.cnt,
+ &len, len - num,
+ 0, __ATOMIC_ACQUIRE,
+ __ATOMIC_ACQUIRE))
+ break;
+ }
+
+ /* If a torn read occurs, the CAS will fail and set old_head to the
+ * correct/latest value.
+ */
+ old_head = list->head;
+
+ /* Pop num elements */
+ do {
+ struct rte_stack_lf_head new_head;
+ struct rte_stack_lf_elem *tmp;
+ unsigned int i;
+
+ /* Use the acquire memmodel to ensure the reads to the LF LIFO
+ * elements are properly ordered with respect to the head
+ * pointer read.
+ */
+ __atomic_thread_fence(__ATOMIC_ACQUIRE);
+
+ rte_prefetch0(old_head.top);
+
+ tmp = old_head.top;
+
+ /* Traverse the list to find the new head. A next pointer will
+ * either point to another element or NULL; if a thread
+ * encounters a pointer that has already been popped, the CAS
+ * will fail.
+ */
+ for (i = 0; i < num && tmp != NULL; i++) {
+ rte_prefetch0(tmp->next);
+ if (obj_table)
+ obj_table[i] = tmp->data;
+ if (last)
+ *last = tmp;
+ tmp = tmp->next;
+ }
+
+ /* If NULL was encountered, the list was modified while
+ * traversing it. Retry.
+ */
+ if (i != num)
+ continue;
+
+ new_head.top = tmp;
+ new_head.cnt = old_head.cnt + 1;
+
+ success = rte_atomic128_cmp_exchange(
+ (rte_int128_t *)&list->head,
+ (rte_int128_t *)&old_head,
+ (rte_int128_t *)&new_head,
+ 1, __ATOMIC_RELEASE,
+ __ATOMIC_RELAXED);
+ } while (success == 0);
+
+ return old_head.top;
+#endif
+}
+
+#endif /* _RTE_STACK_LF_C11_H_ */
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 7/8] test/stack: add lock-free stack tests
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (6 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 6/8] stack: add C11 atomic implementation Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-04 15:42 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Thomas Monjalon
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds lock-free stack variants of stack_autotest
(stack_lf_autotest) and stack_perf_autotest (stack_lf_perf_autotest), which
differ only in that the lock-free versions pass the RTE_STACK_F_LF flag to
all rte_stack_create() calls.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
app/test/meson.build | 2 ++
app/test/test_stack.c | 41 +++++++++++++++++++++++++++--------------
app/test/test_stack_perf.c | 17 +++++++++++++++--
3 files changed, 44 insertions(+), 16 deletions(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 02eb788a4..867cc5863 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -178,6 +178,7 @@ fast_parallel_test_names = [
'sched_autotest',
'spinlock_autotest',
'stack_autotest',
+ 'stack_lf_autotest',
'string_autotest',
'table_autotest',
'tailq_autotest',
@@ -243,6 +244,7 @@ perf_test_names = [
'ring_pmd_perf_autotest',
'pmd_perf_autotest',
'stack_perf_autotest',
+ 'stack_lf_perf_autotest',
]
# All test cases in driver_test_names list are non-parallel
diff --git a/app/test/test_stack.c b/app/test/test_stack.c
index 6be2f876b..e972a61a7 100644
--- a/app/test/test_stack.c
+++ b/app/test/test_stack.c
@@ -98,7 +98,7 @@ test_stack_push_pop(struct rte_stack *s, void **obj_table, unsigned int bulk_sz)
}
static int
-test_stack_basic(void)
+test_stack_basic(uint32_t flags)
{
struct rte_stack *s = NULL;
void **obj_table = NULL;
@@ -114,7 +114,7 @@ test_stack_basic(void)
for (i = 0; i < STACK_SIZE; i++)
obj_table[i] = (void *)(uintptr_t)i;
- s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(__func__, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -178,18 +178,18 @@ test_stack_basic(void)
}
static int
-test_stack_name_reuse(void)
+test_stack_name_reuse(uint32_t flags)
{
struct rte_stack *s[2];
- s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[0] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[0] == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
return -1;
}
- s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s[1] = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s[1] != NULL) {
printf("[%s():%u] Failed to detect re-used name\n",
__func__, __LINE__);
@@ -202,7 +202,7 @@ test_stack_name_reuse(void)
}
static int
-test_stack_name_length(void)
+test_stack_name_length(uint32_t flags)
{
char name[RTE_STACK_NAMESIZE + 1];
struct rte_stack *s;
@@ -210,7 +210,7 @@ test_stack_name_length(void)
memset(name, 's', sizeof(name));
name[RTE_STACK_NAMESIZE] = '\0';
- s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(name, STACK_SIZE, rte_socket_id(), flags);
if (s != NULL) {
printf("[%s():%u] Failed to prevent long name\n",
__func__, __LINE__);
@@ -329,7 +329,7 @@ stack_thread_push_pop(void *args)
}
static int
-test_stack_multithreaded(void)
+test_stack_multithreaded(uint32_t flags)
{
struct test_args *args;
unsigned int lcore_id;
@@ -350,7 +350,7 @@ test_stack_multithreaded(void)
return -1;
}
- s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create("test", STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] Failed to create a stack\n",
__func__, __LINE__);
@@ -385,9 +385,9 @@ test_stack_multithreaded(void)
}
static int
-test_stack(void)
+__test_stack(uint32_t flags)
{
- if (test_stack_basic() < 0)
+ if (test_stack_basic(flags) < 0)
return -1;
if (test_lookup_null() < 0)
@@ -396,16 +396,29 @@ test_stack(void)
if (test_free_null() < 0)
return -1;
- if (test_stack_name_reuse() < 0)
+ if (test_stack_name_reuse(flags) < 0)
return -1;
- if (test_stack_name_length() < 0)
+ if (test_stack_name_length(flags) < 0)
return -1;
- if (test_stack_multithreaded() < 0)
+ if (test_stack_multithreaded(flags) < 0)
return -1;
return 0;
}
+static int
+test_stack(void)
+{
+ return __test_stack(0);
+}
+
+static int
+test_lf_stack(void)
+{
+ return __test_stack(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_autotest, test_stack);
+REGISTER_TEST_COMMAND(stack_lf_autotest, test_lf_stack);
diff --git a/app/test/test_stack_perf.c b/app/test/test_stack_perf.c
index a44fbb73e..ba27fbf70 100644
--- a/app/test/test_stack_perf.c
+++ b/app/test/test_stack_perf.c
@@ -299,14 +299,14 @@ test_bulk_push_pop(struct rte_stack *s)
}
static int
-test_stack_perf(void)
+__test_stack_perf(uint32_t flags)
{
struct lcore_pair cores;
struct rte_stack *s;
rte_atomic32_init(&lcore_barrier);
- s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), 0);
+ s = rte_stack_create(STACK_NAME, STACK_SIZE, rte_socket_id(), flags);
if (s == NULL) {
printf("[%s():%u] failed to create a stack\n",
__func__, __LINE__);
@@ -342,4 +342,17 @@ test_stack_perf(void)
return 0;
}
+static int
+test_stack_perf(void)
+{
+ return __test_stack_perf(0);
+}
+
+static int
+test_lf_stack_perf(void)
+{
+ return __test_stack_perf(RTE_STACK_F_LF);
+}
+
REGISTER_TEST_COMMAND(stack_perf_autotest, test_stack_perf);
+REGISTER_TEST_COMMAND(stack_lf_perf_autotest, test_lf_stack_perf);
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* [dpdk-dev] [PATCH v10 8/8] mempool/stack: add lock-free stack mempool handler
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (7 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 7/8] test/stack: add lock-free stack tests Gage Eads
@ 2019-04-04 10:01 ` Gage Eads
2019-04-04 10:01 ` Gage Eads
2019-04-04 15:42 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Thomas Monjalon
9 siblings, 1 reply; 228+ messages in thread
From: Gage Eads @ 2019-04-04 10:01 UTC (permalink / raw)
To: dev
Cc: olivier.matz, arybchenko, bruce.richardson, konstantin.ananyev,
gavin.hu, Honnappa.Nagarahalli, nd, thomas
This commit adds a lock-free (linked-list-based) stack mempool
handler.
In mempool_perf_autotest the lock-based stack outperforms the
lock-free handler for certain lcore/alloc count/free count
combinations*; however:
- For applications with preemptible pthreads, a standard (lock-based)
stack's worst-case performance (i.e. one thread being preempted while
holding the spinlock) is much worse than the lock-free stack's.
- Using per-thread mempool caches will largely mitigate the performance
difference.
*Test setup: x86_64 build with default config, dual-socket Xeon E5-2699 v4,
running on isolcpus cores with a tickless scheduler. The lock-based stack's
rate_persec was 0.6x-3.5x the lock-free stack's.
Signed-off-by: Gage Eads <gage.eads@intel.com>
Reviewed-by: Olivier Matz <olivier.matz@6wind.com>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 10 ++++++++++
doc/guides/rel_notes/release_19_05.rst | 5 +++++
drivers/mempool/stack/rte_mempool_stack.c | 26 +++++++++++++++++++++++--
3 files changed, 39 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 6a04c3c33..fa8afdb3a 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -581,6 +581,16 @@ Known Issues
5. It MUST not be used by multi-producer/consumer pthreads, whose scheduling policies are SCHED_FIFO or SCHED_RR.
+ Alternatively, applications can use the lock-free stack mempool handler. When
+ considering this handler, note that:
+
+ - It is currently limited to the x86_64 platform, because it uses an
+ instruction (16-byte compare-and-swap) that is not yet available on other
+ platforms.
+ - It has worse average-case performance than the non-preemptive rte_ring, but
+ software caching (e.g. the mempool cache) can mitigate this by reducing the
+ number of stack accesses.
+
+ rte_timer
Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 3b115b5f6..f873984ad 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -130,6 +130,11 @@ New Features
The library supports two stack implementations: standard (lock-based) and lock-free.
The lock-free implementation is currently limited to x86-64 platforms.
+* **Added Lock-Free Stack Mempool Handler.**
+
+ Added a new lock-free stack handler, which uses the newly added stack
+ library.
+
Removed Items
-------------
diff --git a/drivers/mempool/stack/rte_mempool_stack.c b/drivers/mempool/stack/rte_mempool_stack.c
index 25ccdb9af..7e85c8d6b 100644
--- a/drivers/mempool/stack/rte_mempool_stack.c
+++ b/drivers/mempool/stack/rte_mempool_stack.c
@@ -7,7 +7,7 @@
#include <rte_stack.h>
static int
-stack_alloc(struct rte_mempool *mp)
+__stack_alloc(struct rte_mempool *mp, uint32_t flags)
{
char name[RTE_STACK_NAMESIZE];
struct rte_stack *s;
@@ -20,7 +20,7 @@ stack_alloc(struct rte_mempool *mp)
return -rte_errno;
}
- s = rte_stack_create(name, mp->size, mp->socket_id, 0);
+ s = rte_stack_create(name, mp->size, mp->socket_id, flags);
if (s == NULL)
return -rte_errno;
@@ -30,6 +30,18 @@ stack_alloc(struct rte_mempool *mp)
}
static int
+stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, 0);
+}
+
+static int
+lf_stack_alloc(struct rte_mempool *mp)
+{
+ return __stack_alloc(mp, RTE_STACK_F_LF);
+}
+
+static int
stack_enqueue(struct rte_mempool *mp, void * const *obj_table,
unsigned int n)
{
@@ -72,4 +84,14 @@ static struct rte_mempool_ops ops_stack = {
.get_count = stack_get_count
};
+static struct rte_mempool_ops ops_lf_stack = {
+ .name = "lf_stack",
+ .alloc = lf_stack_alloc,
+ .free = stack_free,
+ .enqueue = stack_enqueue,
+ .dequeue = stack_dequeue,
+ .get_count = stack_get_count
+};
+
MEMPOOL_REGISTER_OPS(ops_stack);
+MEMPOOL_REGISTER_OPS(ops_lf_stack);
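For readers evaluating the new handler, here is a minimal usage sketch
(not part of the patch): it creates an empty mempool, selects the
"lf_stack" ops registered above by name, and populates the pool. The pool
name, object count, object size, and cache depth are illustrative values
only; the non-zero per-lcore cache is the software caching the commit
message suggests to offset the handler's average-case cost.
#include <rte_mempool.h>
#include <rte_lcore.h>

static struct rte_mempool *
create_lf_stack_pool(void)
{
	struct rte_mempool *mp;

	/* Create an empty pool; the 256-object per-lcore cache reduces
	 * how often threads touch the underlying lock-free stack. */
	mp = rte_mempool_create_empty("lf_pool", 8192, 2048, 256, 0,
				      rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Select the handler registered by MEMPOOL_REGISTER_OPS above,
	 * then allocate and add the pool's objects. */
	if (rte_mempool_set_ops_byname(mp, "lf_stack", NULL) < 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}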
--
2.13.6
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library Gage Eads
2019-04-03 23:20 ` Gage Eads
@ 2019-04-04 13:30 ` Thomas Monjalon
2019-04-04 13:30 ` Thomas Monjalon
2019-04-04 14:14 ` Eads, Gage
1 sibling, 2 replies; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-04 13:30 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
04/04/2019 01:20, Gage Eads:
> --- a/mk/rte.app.mk
> +++ b/mk/rte.app.mk
> @@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -lrte_security
> _LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
> _LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
> +_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
> _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
> _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
> _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -lrte_mempool_ring
Stack library is used by mempool, so it should appear after mempool in
the library list.
It is the same as ring lib being after mempool_ring.
If you agree, please just tell me without sending a new version,
because I'm doing other minor changes (sorting stack near ring in many files).
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library
2019-04-04 13:30 ` Thomas Monjalon
2019-04-04 13:30 ` Thomas Monjalon
@ 2019-04-04 14:14 ` Eads, Gage
2019-04-04 14:14 ` Eads, Gage
1 sibling, 1 reply; 228+ messages in thread
From: Eads, Gage @ 2019-04-04 14:14 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, olivier.matz, arybchenko, Richardson, Bruce, Ananyev,
Konstantin, gavin.hu, Honnappa.Nagarahalli, nd
> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Thursday, April 4, 2019 8:30 AM
> To: Eads, Gage <gage.eads@intel.com>
> Cc: dev@dpdk.org; olivier.matz@6wind.com; arybchenko@solarflare.com;
> Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; gavin.hu@arm.com;
> Honnappa.Nagarahalli@arm.com; nd@arm.com
> Subject: Re: [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library
>
> 04/04/2019 01:20, Gage Eads:
> > --- a/mk/rte.app.mk
> > +++ b/mk/rte.app.mk
> > @@ -87,6 +87,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_SECURITY) += -
> lrte_security
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_COMPRESSDEV) += -lrte_compressdev
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += -lrte_eventdev
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_RAWDEV) += -lrte_rawdev
> > +_LDLIBS-$(CONFIG_RTE_LIBRTE_STACK) += -lrte_stack
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_TIMER) += -lrte_timer
> > _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL) += -lrte_mempool
> > _LDLIBS-$(CONFIG_RTE_DRIVER_MEMPOOL_RING) += -
> lrte_mempool_ring
>
> Stack library is used by mempool, so it should appear after mempool in the
> library list.
> It is the same as ring lib being after mempool_ring.
>
> If you agree, please just tell me without sending a new version, because I'm
> doing other minor changes (sorting stack near ring in many files).
>
Agreed. I suspect we haven't seen any linker problems because it's in a --whole-archive section (line 72), but better to put these in the correct dependency order.
^ permalink raw reply [flat|nested] 228+ messages in thread
* Re: [dpdk-dev] [PATCH v10 0/8] Add stack library and new mempool handler
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
` (8 preceding siblings ...)
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
@ 2019-04-04 15:42 ` Thomas Monjalon
2019-04-04 15:42 ` Thomas Monjalon
9 siblings, 1 reply; 228+ messages in thread
From: Thomas Monjalon @ 2019-04-04 15:42 UTC (permalink / raw)
To: Gage Eads
Cc: dev, olivier.matz, arybchenko, bruce.richardson,
konstantin.ananyev, gavin.hu, Honnappa.Nagarahalli, nd
04/04/2019 12:01, Gage Eads:
> This patchset introduces a stack library, supporting both lock-based and
> lock-free stacks, and a lock-free stack mempool handler.
>
> The lock-based stack code is derived from the existing stack mempool handler,
> and that handler is refactored to use the stack library.
>
> The lock-free stack mempool handler is intended for usages where the rte
> ring's "non-preemptive" constraint is not acceptable; for example, if the
> application uses a mixture of pinned high-priority threads and multiplexed
> low-priority threads that share a mempool.
>
> Note that the lock-free algorithm relies on a 128-bit compare-and-swap[1],
> so it is currently limited to the x86_64 platform.
[...]
> Gage Eads (8):
> stack: introduce rte stack library
> mempool/stack: convert mempool to use rte stack
> test/stack: add stack test
> test/stack: add stack perf test
> stack: add lock-free stack implementation
> stack: add C11 atomic implementation
> test/stack: add lock-free stack tests
> mempool/stack: add lock-free stack mempool handler
Applied (with a few minor changes), thanks for bringing this new library.
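For completeness, a minimal sketch of the merged stack API, based on the
declarations exercised by the tests in this thread; the stack name, depth,
and object values are arbitrary:
#include <stdint.h>
#include <rte_stack.h>
#include <rte_lcore.h>

static int
stack_lf_example(void)
{
	void *objs[8];
	struct rte_stack *s;
	unsigned int i;

	for (i = 0; i < 8; i++)
		objs[i] = (void *)(uintptr_t)(i + 1);

	/* RTE_STACK_F_LF requests the lock-free implementation (x86_64
	 * only at this point); pass 0 for the lock-based variant. */
	s = rte_stack_create("example", 64, rte_socket_id(), RTE_STACK_F_LF);
	if (s == NULL)
		return -1;

	/* Push and pop are all-or-nothing bulk operations that return
	 * the number of objects actually handled. */
	if (rte_stack_push(s, objs, 8) != 8 ||
	    rte_stack_pop(s, objs, 8) != 8) {
		rte_stack_free(s);
		return -1;
	}

	rte_stack_free(s);
	return 0;
}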
^ permalink raw reply [flat|nested] 228+ messages in thread
End of thread.
Thread overview: 228+ messages
2019-02-22 16:06 [dpdk-dev] [PATCH 0/7] Subject: [PATCH ...] Add stack library and new mempool handler Gage Eads
2019-02-22 16:06 ` [dpdk-dev] [PATCH 1/7] stack: introduce rte stack library Gage Eads
2019-02-25 10:43 ` Olivier Matz
2019-02-28 5:10 ` Eads, Gage
2019-02-22 16:06 ` [dpdk-dev] [PATCH 2/7] mempool/stack: convert mempool to use rte stack Gage Eads
2019-02-25 10:46 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 3/7] test/stack: add stack test Gage Eads
2019-02-25 10:59 ` Olivier Matz
2019-02-28 5:11 ` Eads, Gage
2019-02-22 16:06 ` [dpdk-dev] [PATCH 4/7] test/stack: add stack perf test Gage Eads
2019-02-25 11:04 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 5/7] stack: add non-blocking stack implementation Gage Eads
2019-02-25 11:28 ` Olivier Matz
[not found] ` <2EC44CCD3517A842B44C82651A5557A14AF13386@fmsmsx118.amr.corp.intel.com>
2019-03-01 20:53 ` [dpdk-dev] FW: " Eads, Gage
2019-03-01 21:12 ` Thomas Monjalon
2019-03-01 21:29 ` Eads, Gage
2019-02-22 16:06 ` [dpdk-dev] [PATCH 6/7] test/stack: add non-blocking stack tests Gage Eads
2019-02-25 11:28 ` Olivier Matz
2019-02-22 16:06 ` [dpdk-dev] [PATCH 7/7] mempool/stack: add non-blocking stack mempool handler Gage Eads
2019-02-25 11:29 ` Olivier Matz
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 0/8] Add stack library and new " Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 1/8] stack: introduce rte stack library Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 3/8] test/stack: add stack test Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 4/8] test/stack: add stack perf test Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 5/8] stack: add lock-free stack implementation Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 6/8] stack: add C11 atomic implementation Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 7/8] test/stack: add lock-free stack tests Gage Eads
2019-03-05 16:42 ` [dpdk-dev] [PATCH v2 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 0/8] Add stack library and new " Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 1/8] stack: introduce rte stack library Gage Eads
2019-03-14 8:00 ` Olivier Matz
2019-03-28 23:26 ` Honnappa Nagarahalli
2019-03-29 19:23 ` Eads, Gage
2019-03-29 21:07 ` Thomas Monjalon
2019-04-01 17:41 ` Honnappa Nagarahalli
2019-04-01 19:34 ` Eads, Gage
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 3/8] test/stack: add stack test Gage Eads
2019-03-14 8:00 ` Olivier Matz
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 4/8] test/stack: add stack perf test Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 5/8] stack: add lock-free stack implementation Gage Eads
2019-03-14 8:01 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-29 19:25 ` Eads, Gage
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 6/8] stack: add C11 atomic implementation Gage Eads
2019-03-14 8:04 ` Olivier Matz
2019-03-28 23:27 ` Honnappa Nagarahalli
2019-03-29 19:24 ` Eads, Gage
2019-04-01 0:06 ` Eads, Gage
2019-04-01 19:06 ` Honnappa Nagarahalli
2019-04-01 20:21 ` Eads, Gage
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 7/8] test/stack: add lock-free stack tests Gage Eads
2019-03-06 14:45 ` [dpdk-dev] [PATCH v3 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 0/8] Add stack library and new " Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 1/8] stack: introduce rte stack library Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 3/8] test/stack: add stack test Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 4/8] test/stack: add stack perf test Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 5/8] stack: add lock-free stack implementation Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 6/8] stack: add C11 atomic implementation Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 7/8] test/stack: add lock-free stack tests Gage Eads
2019-03-28 18:00 ` [dpdk-dev] [PATCH v4 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 0/8] Add stack library and new " Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 1/8] stack: introduce rte stack library Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 3/8] test/stack: add stack test Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 4/8] test/stack: add stack perf test Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-01 18:08 ` Honnappa Nagarahalli
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-01 0:12 ` [dpdk-dev] [PATCH v5 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 1/8] stack: introduce rte stack library Gage Eads
2019-04-02 11:14 ` Honnappa Nagarahalli
2019-04-03 17:06 ` Thomas Monjalon
2019-04-03 17:13 ` Eads, Gage
2019-04-03 17:23 ` Thomas Monjalon
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 3/8] test/stack: add stack test Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 4/8] test/stack: add stack perf test Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-02 11:11 ` Honnappa Nagarahalli
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-01 21:14 ` [dpdk-dev] [PATCH v6 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-03 17:04 ` [dpdk-dev] [PATCH v6 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 17:10 ` Eads, Gage
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 " Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 1/8] stack: introduce rte stack library Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 3/8] test/stack: add stack test Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 4/8] test/stack: add stack perf test Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-03 20:09 ` [dpdk-dev] [PATCH v7 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-03 20:39 ` [dpdk-dev] [PATCH v7 0/8] Add stack library and new " Thomas Monjalon
2019-04-03 20:49 ` Eads, Gage
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 " Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 1/8] stack: introduce rte stack library Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 3/8] test/stack: add stack test Gage Eads
2019-04-03 22:41 ` Thomas Monjalon
2019-04-03 23:05 ` Eads, Gage
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 4/8] test/stack: add stack perf test Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-03 20:50 ` [dpdk-dev] [PATCH v8 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 0/8] Add stack library and new " Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 1/8] stack: introduce rte stack library Gage Eads
2019-04-04 13:30 ` Thomas Monjalon
2019-04-04 14:14 ` Eads, Gage
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 3/8] test/stack: add stack test Gage Eads
2019-04-04 7:34 ` Thomas Monjalon
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 4/8] test/stack: add stack perf test Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-03 23:20 ` [dpdk-dev] [PATCH v9 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 1/8] stack: introduce rte stack library Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 2/8] mempool/stack: convert mempool to use rte stack Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 3/8] test/stack: add stack test Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 4/8] test/stack: add stack perf test Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 5/8] stack: add lock-free stack implementation Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 6/8] stack: add C11 atomic implementation Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 7/8] test/stack: add lock-free stack tests Gage Eads
2019-04-04 10:01 ` [dpdk-dev] [PATCH v10 8/8] mempool/stack: add lock-free stack mempool handler Gage Eads
2019-04-04 15:42 ` [dpdk-dev] [PATCH v10 0/8] Add stack library and new " Thomas Monjalon